{"value":"In [*Unreal Engine 像素流送在 g4dn 上实现容器化部署实践(一)*](https://aws.amazon.com/cn/blogs/china/practice-of-container-deployment-of-unreal-engine-pixel-streaming-on-g4dn-i/) we explained how to build an Unreal Engine 4 project on g4dn and package it as a Docker image, and how to deploy the UE4 Pixel Streaming demo with docker-compose. In this post we will deploy an elastically scalable Pixel Streaming platform on [Amazon Elastic Kubernetes Service](https://aws.amazon.com/cn/eks/?trk=cndc-detail) (EKS).\n\nThis post covers the following parts:\n\n1. Architecture overview\n2. Creating the [Amazon EKS](https://aws.amazon.com/cn/eks/?trk=cndc-detail) cluster, configuring the worker nodes, and installing the AWS Load Balancer Controller\n3. Writing the required YAML manifests for the TURN/STUN, Matchmaker, Streamer, and Envoy components\n4. Testing elastic scaling\n### **1. Architecture overview**\nEpic Games provides a reference architecture for multi-user/multi-game Pixel Streaming (see the diagram below): a Matchmaker service connects each user request to a different Signaling Server (Web Server). However, the official Matchmaker returns the Signaling Server's IP and port directly rather than offering a unified entry point, so it has to be modified to serve all sessions through a single port.\n\nEpic Games reference architecture for multi-user/multi-game Pixel Streaming\n\n![image.png](https://dev-media.amazoncloud.cn/de294cf6e4304f6a8b9b6a5d3a37dfd0_image.png)\n\nWe restructured it around [Amazon EKS](https://aws.amazon.com/cn/eks/?trk=cndc-detail)/Kubernetes features:\n\n- Two worker node types, CPU and GPU, scheduled via node affinity and taint tolerations: CPU nodes run the Matchmaker, STUN/TURN, and Envoy routing services, while GPU nodes (g4dn) run the UE4 project/Streamer.\n- The platform is exposed through a single Ingress entry point, with Envoy acting as the intermediate router.\n- The Matchmaker is modified so that no port redirection is needed.\n\nThe [Amazon EKS](https://aws.amazon.com/cn/eks/?trk=cndc-detail) reference architecture:\n\n![image.png](https://dev-media.amazoncloud.cn/7dcd61c86ec64b2eb7d6bb0214cac2ab_image.png)\n\n### **2. 
Create the [Amazon EKS](https://aws.amazon.com/cn/eks/?trk=cndc-detail) cluster**\n#### **2.1 Create the [Amazon EKS](https://aws.amazon.com/cn/eks/?trk=cndc-detail) cluster**\nWe create two worker node groups. One node group uses m5.large (CPU workloads) and mainly hosts the STUN/TURN, Envoy, Matchmaker, and player services; the other uses g4dn.xlarge and runs the UE4 Pixel Streaming project that pushes the WebRTC streams.\n\nCluster configuration file (cluster.yaml); the current default Kubernetes cluster version is 1.21:\n\n```\\napiVersion: eksctl.io/v1alpha5\\nkind: ClusterConfig\\n\\nmetadata:\\n  name: ue4-pixelsteraming-eks\\n  region: us-east-1\\n\\nnodeGroups:\\n  - name: ng-cpu-group01\\n    instanceType: m5.large\\n    desiredCapacity: 1\\n    minSize: 1\\n    maxSize: 4\\n    labels:\\n      app.pixel/turn: \\"true\\"\\n      app.pixel/envoy: \\"true\\"\\n    ssh:\\n      allow: true\\n      publicKeyName: <replace with your own EC2 key pair name>\\n  - name: ng-gpu-group01\\n    instanceType: g4dn.xlarge\\n    desiredCapacity: 1\\n    minSize: 1\\n    maxSize: 4\\n    labels:\\n      app.pixel/streamer: \\"true\\"\\n    taints:\\n      app.pixel/streamer: \\"true:NoSchedule\\"\\n    ssh:\\n      allow: true\\n      publicKeyName: <replace with your own EC2 key pair name>\\n```\n\nCreate the cluster:\n\n```eksctl create cluster -c cluster.yaml```\n\n#### **2.2 Install the AWS Load Balancer Controller (required for the ALB Ingress)**\nCreate the IAM role, policy, and service account used by the Ingress:\n\n```\\neksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=ue4-pixelsteraming-eks --approve\\n\\neksctl create iamserviceaccount \\\\\\n  --cluster=ue4-pixelsteraming-eks \\\\\\n  --namespace=kube-system \\\\\\n  --name=aws-load-balancer-controller \\\\\\n  --attach-policy-arn=arn:aws:iam::<your 12-digit account ID>:policy/AWSLoadBalancerControllerIAMPolicy \\\\\\n  --override-existing-serviceaccounts \\\\\\n  --approve\\n```\n\nInstall cert-manager:\n\n```\\nkubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.yaml\\n```\n\nDownload the ALB controller 2.2.1 installation manifest:\n\n```\\ncurl -OL https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.1/docs/install/v2_2_1_full.yaml\\n```\n\nEdit v2_2_1_full.yaml and change --cluster-name=your-cluster-name 
to --cluster-name=ue4-pixelsteraming-eks (the cluster name used in this post)\n\n![image.png](https://dev-media.amazoncloud.cn/08c0563f48ca424ab1cd34989e63a1ca_image.png)\n\nAfter editing v2_2_1_full.yaml, install the aws-load-balancer-controller:\n\n```kubectl apply -f v2_2_1_full.yaml```\n\nThe aws-load-balancer-controller deployment is created automatically in the kube-system namespace:\n\n```\\neks:~/environment/ue4-on-eks/deploy $ kubectl get deploy -n kube-system\\nNAME                           READY   UP-TO-DATE   AVAILABLE   AGE\\naws-load-balancer-controller   1/1     1            1           5d19h\\n```\n\n### **3. Write the required YAML manifests**\nFollowing the architecture designed in Section 1, prepare the required YAML files. The author has published all of them in the deploy directory of the UE4-PixelStreaming-AWS-EKS repository.\n\n```git clone https://github.com/stevensu1977/UE4-PixelStreaming-AWS-EKS```\n\n#### **3.1 Create the namespace and configure permissions**\nWe will deploy a pod containing the kubectl command-line tool in order to obtain the TURN server's public address, so we need access to the Kubernetes API server.\n\nCreate the namespace:\n\n```\\napiVersion: v1\\nkind: Namespace\\nmetadata:\\n  labels:\\n    app.kubernetes.io/component: unrealengine\\n    app.kubernetes.io/part-of: ue4-on-eks\\n    app.kubernetes.io/version: 0.0.1\\n  name: ue4\\n```\n\nCreate the two service accounts and grant them the proper permissions through a ClusterRole and ClusterRoleBinding:\n\n```\\n#service account excerpt\\napiVersion: v1\\nkind: ServiceAccount\\nmetadata:\\n  labels:\\n    app.kubernetes.io/component: unrealengine\\n    app.kubernetes.io/part-of: ue4-on-eks\\n    app.kubernetes.io/version: 0.0.1\\n  name: stream-svc-account\\n  namespace: ue4\\n---\\napiVersion: v1\\nkind: ServiceAccount\\nmetadata:\\n  labels:\\n    app.kubernetes.io/component: turn\\n    app.kubernetes.io/part-of: ue4-on-eks\\n    app.kubernetes.io/version: 0.0.1\\n  name: turn-svc-account\\n  namespace: ue4\\n```\n\n#### **3.2 STUN/TURN server**\nWe use coturn as the STUN/TURN server to relay traffic for the Streamers (game instances) running on private IPs. Using the node label “app.pixel/turn=true”, coturn is deployed as a DaemonSet to the designated EC2 nodes.\n\n```\\napiVersion: apps/v1\\nkind: DaemonSet\\nmetadata:\\n  labels:\\n    app.kubernetes.io/component: turn\\n    app.kubernetes.io/part-of: ue4-on-eks\\n    app.kubernetes.io/version: 0.0.1\\n    app: turn\\n  name: turn\\n  namespace: ue4\\nspec:\\n  selector:\\n    matchLabels:\\n      app.kubernetes.io/name: 
turn\\n      app.kubernetes.io/part-of: ue4-on-eks\\n      app.kubernetes.io/version: 0.0.1\\n      app: turn\\n  template:\\n    metadata:\\n      annotations:\\n        sidecar.istio.io/inject: \\"false\\"\\n      labels:\\n        app: turn\\n        app.kubernetes.io/name: turn\\n        app.kubernetes.io/part-of: ue4-on-eks\\n        app.kubernetes.io/version: 0.0.1\\n        version: 0.0.1\\n    spec:\\n      affinity:\\n        nodeAffinity:\\n          requiredDuringSchedulingIgnoredDuringExecution:\\n            nodeSelectorTerms:\\n              - matchExpressions:\\n                  - key: app.pixel/turn\\n                    operator: In\\n                    values:\\n                      - \\"true\\"\\n      containers:\\n        - env:\\n            - name: INTERNAL_IP\\n              valueFrom:\\n                fieldRef:\\n                  fieldPath: status.podIP\\n            - name: TURN_PORT\\n              value: \\"3478\\"\\n            - name: TURN_MIN_PORT\\n              value: \\"49152\\"\\n            - name: TURN_MAX_PORT\\n              value: \\"65535\\"\\n            - name: TURN_REALM\\n              value: app.pixel\\n            - name: TURN_USER\\n              valueFrom:\\n                secretKeyRef:\\n                  key: username\\n                  name: turn-secret\\n            - name: TURN_PASS\\n              valueFrom:\\n                secretKeyRef:\\n                  key: password\\n                  name: turn-secret\\n          image: ghcr.io/stevensu1977/ue4-pixelstreaming/turnserver\\n          imagePullPolicy: Always\\n          name: turn\\n          ports:\\n            - containerPort: 3478\\n              hostPort: 3478\\n              name: turn-udp\\n              protocol: UDP\\n            - containerPort: 3478\\n              hostPort: 3478\\n              name: turn-tcp\\n              protocol: TCP\\n      hostNetwork: true\\n      serviceAccountName: turn-svc-account\\n      terminationGracePeriodSeconds: 10\\n```\n\n#### **3.3 matchmaker, player, streamer**\nThe matchmaker is responsible for assigning an idle streamer to each client; player serves the static web page/JavaScript; streamer runs an ARPG demo project provided by Epic Games.\n\n#### **3.4 ingress, envoy-router**\nWe create an ALB Ingress; the AWS Load Balancer Controller provisions the corresponding ALB resource.\n\n```\\napiVersion: networking.k8s.io/v1\\nkind: Ingress\\nmetadata:\\n  labels:\\n    app.kubernetes.io/component: unrealengine\\n    app.kubernetes.io/part-of: ue4-on-eks\\n    app.kubernetes.io/version: 0.0.1\\n  namespace: ue4\\n  name: pixelstreaming-ingress\\n  annotations:\\n    alb.ingress.kubernetes.io/scheme: internet-facing\\n    alb.ingress.kubernetes.io/target-type: ip\\nspec:\\n  
ingressClassName: alb\\n  rules:\\n    - http:\\n        paths:\\n          - path: /\\n            pathType: Prefix\\n            backend:\\n              service:\\n                name: envoy-router\\n                port:\\n                  number: 80\\n```\n\nWe also deploy an Envoy gateway that maps the different routes to the player and matchmaker services.\n\n```\\n#envoy manifest excerpt\\n\\napiVersion: apps/v1\\nkind: Deployment\\nmetadata:\\n  labels:\\n    app.kubernetes.io/component: routing\\n    app.kubernetes.io/part-of: ue4-on-eks\\n    app.kubernetes.io/version: 0.0.1\\n    app: envoy-router\\n  name: envoy-router\\n  namespace: ue4\\nspec:\\n  selector:\\n    matchLabels:\\n      app: envoy-router\\n      app.kubernetes.io/component: routing\\n      app.kubernetes.io/part-of: ue4-on-eks\\n      app.kubernetes.io/version: 0.0.1\\n  template:\\n    metadata:\\n      labels:\\n        app: envoy-router\\n        app.kubernetes.io/component: routing\\n        app.kubernetes.io/part-of: ue4-on-eks\\n        app.kubernetes.io/version: 0.0.1\\n    spec:\\n      affinity:\\n        nodeAffinity:\\n          requiredDuringSchedulingIgnoredDuringExecution:\\n            nodeSelectorTerms:\\n              - matchExpressions:\\n                  - key: app.pixel/envoy\\n                    operator: In\\n                    values:\\n                      - \\"true\\"\\n      containers:\\n        - image: envoyproxy/envoy:v1.21.1\\n          imagePullPolicy: IfNotPresent\\n          name: envoy-router\\n          ports:\\n            - containerPort: 11000\\n              name: http\\n            - containerPort: 12000\\n              name: api\\n          resources:\\n            limits:\\n              cpu: 200m\\n              memory: 128Mi\\n            requests:\\n              cpu: 100m\\n              memory: 64Mi\\n          securityContext:\\n            capabilities:\\n              add:\\n                - NET_BIND_SERVICE\\n                - CHOWN\\n                - SETGID\\n                - SETUID\\n              drop:\\n                - all\\n            readOnlyRootFilesystem: true\\n            runAsNonRoot: true\\n            runAsUser: 10001\\n          volumeMounts:\\n            - mountPath: /etc/envoy\\n              name: config\\n      volumes:\\n        - configMap:\\n            name: envoy-routing-config\\n          name: config\\n\\n#envoy routing rules\\nstatic_resources:\\n  listeners:\\n    - name: listener_0\\n      address:\\n        socket_address:\\n          protocol: TCP\\n          address: 0.0.0.0\\n          port_value: 11000\\n\\n      filter_chains:\\n        - filters:\\n            - name: envoy.filters.network.http_connection_manager\\n              typed_config:\\n                \\"@type\\": 
type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\\n                stat_prefix: ingress_http\\n                upgrade_configs:\\n                  - upgrade_type: websocket\\n                access_log:\\n                  - name: envoy.access_loggers.stdout\\n                    typed_config:\\n                      \\"@type\\": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog\\n                route_config:\\n                  name: local_route\\n                  virtual_hosts:\\n                    - name: local_service\\n                      domains: [\\"*\\"]\\n                      routes:\\n                        - match: { prefix: \\"/matchmaker\\" }\\n                          route:\\n                            cluster: service_matchmaker\\n                        - match: { prefix: \\"/ws\\" }\\n                          route:\\n                            cluster: service_matchmaker\\n                        - match: { prefix: \\"/\\" }\\n                          route:\\n                            cluster: service_player\\n                http_filters:\\n                  - name: envoy.filters.http.router\\n\\n  clusters:\\n    - name: service_matchmaker\\n      connect_timeout: 1s\\n      type: LOGICAL_DNS\\n      dns_lookup_family: V4_ONLY\\n      lb_policy: ROUND_ROBIN\\n      load_assignment:\\n        cluster_name: service_matchmaker\\n        endpoints:\\n          - lb_endpoints:\\n              - endpoint:\\n                  address:\\n                    socket_address:\\n                      address: matchmaker\\n                      port_value: 3000\\n    - name: service_player\\n      connect_timeout: 1s\\n      type: LOGICAL_DNS\\n      dns_lookup_family: V4_ONLY\\n      lb_policy: ROUND_ROBIN\\n      load_assignment:\\n        cluster_name: service_player\\n        endpoints:\\n          - lb_endpoints:\\n              - endpoint:\\n                  address:\\n                    socket_address:\\n                      address: player\\n                      port_value: 80\\n```\n\n### **4. 
Deploy and test**\nAll of the YAML manifests are in the deploy directory, so we can apply them directly:\n\n```kubectl apply -f ./deploy```\n\n#### **4.1 Verify the deployment**\nCheck the Ingress and Services:\n\n```\\nkubectl get ingress\\nkubectl get svc\\n```\n\n![image.png](https://dev-media.amazoncloud.cn/e5fcdfafa9a3401cbdd42abc6bc54484_image.png)\n\nCheck the application pods:\n\n![image.png](https://dev-media.amazoncloud.cn/4ed4f10ec0a248f897431f55b2fec860_image.png)\n\n#### **4.2 Test the demo project**\nGet the access address:\n\n```\\neks:~/environment/ue4-on-eks $ kubectl get ingress -o json | jq .items[].status.loadBalancer.ingress[].hostname\\n#sample output\\n\\"k8s-ue4-pixelstr-1111111111111-1111111111111.us-east-1.elb.amazonaws.com\\"\\n```\n\n![image.png](https://dev-media.amazoncloud.cn/ffc7b2110bfd4937a839b989c7d0c94d_image.png)\n\n![image.png](https://dev-media.amazoncloud.cn/92bf2eab2b7f48e782e0db95e440297f_image.png)\n\n#### **4.3 Test elastic scaling**\nThe stream application is deployed with a single replica by default. Once one browser is connected through the Ingress, the matchmaker has handed the only idle game instance to that client for its WebRTC connection, so when we open the same address in a second browser (Firefox) it shows “Waiting for available streamer”.\n\n![image.png](https://dev-media.amazoncloud.cn/2a2b3c5172ed4a6fb0f613a8425cf423_image.png)\n\nAt this point we only need to add a stream replica: the matchmaker sends the new game connection information over the websocket, and the Firefox browser switches from “Waiting for available streamer” to a running game session.\n\n```\\nkubectl get deploy stream\\nkubectl scale deploy stream --replicas=2\\n```\n\n![image.png](https://dev-media.amazoncloud.cn/9bf8908b17ca4e3e87e5037ac39f6093_image.png)\n\nNow the Firefox and Chrome browsers use the same ALB address but are connected to two independent UE4 game instances, each playing its own session. This shows that scaling the replica count scales the demo game elastically, and we can keep increasing the stream replica count to serve more clients.\n\n![image.png](https://dev-media.amazoncloud.cn/68eaaa711af84d9a8d1f17041d8332f3_image.png)\n\nInterested readers can also explore combining this with the Kubernetes HPA to autoscale UE4 Pixel Streaming.\n\n### **5. Summary**\nThis post walked through packaging and deploying an Unreal Engine 4 game on [Amazon Elastic Kubernetes Service](https://aws.amazon.com/cn/eks/?trk=cndc-detail) (EKS), using built-in Kubernetes capabilities and [Amazon EC2](https://aws.amazon.com/cn/ec2/?trk=cndc-detail) g4dn instances to build a multi-user, elastically scalable UE4 Pixel Streaming platform.\n\n### **References and resources**\n- [*Unreal Engine 像素流送在 g4dn 
上实现容器化部署实践(一)*](https://aws.amazon.com/cn/blogs/china/practice-of-container-deployment-of-unreal-engine-pixel-streaming-on-g4dn-i/)\n- [*Unreal Engine official Pixel Streaming documentation*](https://docs.unrealengine.com/4.27/en-US/SharingAndReleasing/PixelStreaming/)\n- [*Deploying and configuring the AWS Load Balancer Controller*](https://github.com/aws-samples/eks-workshop-greater-china/blob/master/global/2021_GCR_MAD_Day/lab2-eks/%E6%AD%A5%E9%AA%A43-%E9%83%A8%E7%BD%B2%E9%85%8D%E7%BD%AEaws-load-balancer-controller%262048%E6%B8%B8%E6%88%8F.md)\n#### **About the authors**\n\n![image.png](https://dev-media.amazoncloud.cn/6ff2056bf3614ff49ead7a8495cbea9e_image.png)\n\n**粟伟**\nSenior Solutions Architect at Amazon focused on the gaming industry, and an open-source enthusiast dedicated to promoting and implementing cloud-native applications. He has more than 15 years of professional IT experience in roles including senior software engineer and system architect, and worked at BEA, Oracle, and IBM before joining Amazon.\n\n![image.png](https://dev-media.amazoncloud.cn/ab09346dc8f34bb89adbcde8cd3a497f_image.png)\n\n**白鹤**\nProfessor-level senior engineer and senior media-industry Solutions Architect at Amazon Web Services, focusing on architecture design for converged media systems, content production platforms, and cloud-native UHD encoding, with rich hands-on experience across many areas of media digital transformation.\n\n![image.png](https://dev-media.amazoncloud.cn/ac771af5fc3f41158d3bce474547af62_image.png)\n\n**贺杨**\nSolutions Architect at Amazon with 17 years of professional IT services experience in roles spanning development, development management, and solution architecture. Before joining Amazon he gained years of enterprise R&D and pre-sales architecture experience, with a deep understanding of and extensive practice in traditional enterprise architecture and middleware solutions."}