Fire up your Unreal Engine-based game on all Graviton cores
This post is written by Yahav Biran, Principal Solutions Architect, and Matt Trescot, Games, SA Leader – Americas
Historically, the art of creating and running complex game servers locked developers into a single CPU architecture, typically Intel/AMD. Our developers tell us it’s hard to introduce different CPU architectures once game servers are built for a given processor. In this article we’ll show you how to build an Unreal Engine game with full support for the Arm-based AWS Graviton processor architecture.
Dedicated game servers need to support multiple players at a predictable tick rate per CPU core, and player events such as 3D calculations need to be batched back to the connected players at the same tick rate for a fun and fair game experience. This requires that CPU cores be fully allocated to the game session to avoid the context switching caused by simultaneous multithreading. As a result, game server operators asked us how to run their Unreal Engine game servers on Graviton-based instances.
With an Unreal Engine-based game, we show 42% cost savings on the Graviton instance for the same performance: 22% more throughput at a 20% lower price than on x86 cores.
We set out to load both servers to their CPU capacity, so we used 30 shooter bots (AI-controlled players).

Figure 1 – Bots spawning during load test on both CPU types
The game servers started at 12:30, and we added 30 bots with 5 players for each session until 12:40, when both servers (upper graph – lyra-X86 and lyra-Graviton) were at their CPU capacity. We let the game run for 10 game sessions, 11 minutes each, and observed game server outbound traffic (middle graph – pod_network_tx_bytes) and memory allocation (bottom graph).

Figure 2 – Resource usage during the simulation
The simulation demonstrated that connected clients had a close-to-real-game experience, based on the steady outbound traffic rates and memory consumption on both servers. CPU usage was 97% on the x86 instance (lyra-X86) and 75% on the Graviton instance (lyra-Graviton).
The remainder of this post explains how you can take advantage of Graviton’s price-performance benefits. We will start with the game image build process that supports both CPU architectures. Next, we will describe how to deploy the game, play it, and observe the results.
How we built a multi-platform game image
The following code and configuration excerpts have been edited to better fit the blog format. The full sample code is published on GitHub.

Figure 3 – Game continuous integration
We use Amazon Web Services CodePipeline to automate the game build process. This starts with building a Docker image from the game code and assets in the source repository.
The build process needed to create two different images because Amazon Graviton instances use the Arm RISC (Reduced Instruction Set Computer) architecture, while Intel uses CISC (Complex Instruction Set Computer). We modified the existing continuous integration to use Docker’s multi-platform images, so a single game image URI references both the Graviton (arm64) and x86 (amd64) variants and simplifies configuration in the continuous delivery system.
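For reference, Docker’s buildx tooling can produce such a multi-platform image in a single invocation. The command below is only a sketch that reuses the ECR image URI shown later in this post; our pipeline instead builds each architecture natively on its own CodeBuild fleet (as described next), since emulated cross-builds of a large Unreal Engine project can be slow.
# Sketch only: build and push both variants (amd64 + arm64) in one buildx invocation.
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/lyra:lyra_starter_game --push .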
The first step (1/StartBuild) is to compile and package the game binaries using docker build.
Avoid using platform-specific package names like amd64 and aarch64 to simplify the build scripts. For example, instead of using:
ARG ARCH=aarch64 # or x86_64 on Intel/AMD hosts
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-${ARCH}.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
Use:
RUN pip install awscli
Next, we build (2/Pull code & build) the image by reusing the docker build step for x86 and running it on a Graviton-based instance.
Note that our example uses CodeBuild images for Graviton and x86 to build the same code and configuration. The extra CodeBuild step (BuildARMAssets) runs at the same time as the original step (BuildAMDAssets), keeping the total build time at 12 minutes for a fresh build and 5 minutes when using the ECR cache for continuous incremental builds.
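Once both architecture-specific images are in ECR, a manifest list publishes them under the single image URI shown in Figure 4. The snippet below is a minimal sketch; the -amd64 and -arm64 tag suffixes are assumptions for illustration and may differ from the tag names used in the sample.
REPO=$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/lyra
# Combine the two per-architecture tags (assumed names) into one multi-arch tag.
docker manifest create $REPO:lyra_starter_game \
  $REPO:lyra_starter_game-amd64 $REPO:lyra_starter_game-arm64
docker manifest annotate --arch amd64 $REPO:lyra_starter_game $REPO:lyra_starter_game-amd64
docker manifest annotate --arch arm64 $REPO:lyra_starter_game $REPO:lyra_starter_game-arm64
docker manifest push $REPO:lyra_starter_game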

Figure 4 – CodePipeline that pulls code and builds two images with a single image URI in ECR
How we deployed the game
We deployed the Graviton and X86 variants of the game on Amazon Elastic Kubernetes Service (Amazon EKS).
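The scheduling details are not reproduced here, but a common way to pin each deployment variant to the matching instance type is the kubernetes.io/arch node label that the kubelet sets automatically. The excerpt below is a sketch, not the sample’s exact manifest.
# Pod spec excerpt (sketch): schedule the Graviton variant on arm64 nodes.
nodeSelector:
  kubernetes.io/arch: arm64   # use amd64 for the x86 variant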
We used a multi-platform docker image to ensure both k8s deployment variants run the same code and configuration. We used the pod lifecycle postStart hook to create a NodePort service that exposes the game server endpoint when the pod starts. Finally, we used the pod lifecycle preStop hook to clean up the service when the pod terminates.
image: $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/lyra:lyra_starter_game
imagePullPolicy: Always
command: ["/usr/local/lyra_starter_game/LyraServer.sh"]
lifecycle:
  postStart:
    exec:
      command: ["/usr/local/lyra_starter_game/create_node_port_svc.sh"]
  preStop:
    exec:
      command: ["/bin/sh", "-c", "kubectl delete svc $(kubectl get svc | grep $POD_NAME | awk '{print $1}')"]
Try it yourself by following the tutorial, and deploy Amazon EKS Container Insights using the quick start instructions.
Play and observe the game server performance
The last step is to play the game: discover the game server endpoints and connect the game clients to the Lyra Starter Game server.
In the example below, our game server endpoints are:
34.216.42.162:32384, 35.82.31.15:32019
[$] kubectl get svc | grep NodePort | awk '{print $1,$5}'
lyramd64-6bf8cdd4db-ts7hq-svc-34-216-42-162 7777:32384/UDP
lyrarm64-5568b7bc6c-wtrqx-svc-35-82-31-15 7777:32019/UDP
If you haven’t done so in the build phase, compile and package the client binaries for your favorite OS and connect to the game servers. On Windows:
./Binaries/Win64/<PROJECT_NAME>Client.exe 34.216.42.162:32384 -WINDOWED -ResX=800 -ResY=450
./Binaries/Win64/<PROJECT_NAME>Client.exe 35.82.31.15:32019 -WINDOWED -ResX=800 -ResY=450
The -WINDOWED, -ResX=<HORIZONTAL_RESOLUTION>, and -ResY=<VERTICAL_RESOLUTION> command-line options are set here for convenience. This enables you to see both client windows on the same screen for testing purposes.
At this point you will have two game sessions open, so get ready to shoot 30 bots 😀
Conclusions
Graviton instances offer the scalability, performance, and cost effectiveness needed to provide an excellent gaming experience for a large number of players.