Acoustic gains 10X throughput by modernizing their Send Engine on Amazon Web Services
Marketing technology campaign platforms must send rich, dynamically sourced content from their customers' datasets to drive hyper-personalized, relevant messaging. Scaling seamlessly in real time to meet high data-volume demand during multiple critical times of the year can strain resources. Acoustic's modernized Send Engine does this with clear gains in throughput.
About Acoustic
Acoustic is a global marketing and customer engagement technology company committed to connecting the dots from campaign to conversion. Acoustic leverages in-depth behavioral insights to uncover every interaction throughout the customer journey, enabling hyper-personalized digital experiences. They completed their journey to the Amazon Web Services Cloud.
In this blog post, we’ll review how Acoustic modernized their strategic Send Engine application leveraging Amazon Web Services cloud-native services and built a scalable, cost-effective, and resilient solution. The Send Engine is Acoustic’s omni-channel application for personalizing, sending, and tracking emails, text messages, mobile push notifications, and WhatsApp messages to Acoustic’s clients’ customers. By modernizing, Acoustic can not only scale with client demand but also rapidly integrate future channels where its clients want to reach their audiences.
Send Engine: Previous Architecture
Acoustic’s previous Send Engine technology served its purpose well, but as data volumes grew and the need for hyper-personalized messaging and flexible channel integration evolved, a new architecture was required. The application was built around a Java code base accessing a traditional relational SQL datastore. Since the same datastore was used for all campaign capabilities (including segmentation, audience management, and marketing automation), there was a limit on the number of connections and code instances that could access the database. This created a fundamental constraint on send execution. The maximum observed throughput was ~5-20 million transactions per hour depending on the load on the datastore, which did not meet growing business demand.
Key Drivers to modernize the Send Engine
In 2021/2022, Acoustic undertook a technology transformation of their Send Engine application, aiming to support scaling for volatile customer load patterns with optimized operations. Updating the existing Send Engine application in place was not technically feasible, and continuing to manage non-scalable infrastructure capacity would have been cost prohibitive. Acoustic needed easily scalable services where they could control cost-to-performance profiles depending on the workload. This required re-envisioning the Send Engine application to take advantage of the available Amazon Web Services cloud-native solutions and services.
Send Engine: New architecture
To address the constraints of the prior architecture, Acoustic moved to a cloud-native, event-based architecture that provides the flexibility to scale for real-time events. Acoustic leveraged Amazon Web Services services including Amazon Kinesis Data Streams, Amazon S3, Amazon ElastiCache, Amazon Web Services Lambda, and Amazon EKS.
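To illustrate the event-based flow, the sketch below shows a producer batching send-message events onto a Kinesis data stream with boto3. This is a minimal sketch for illustration only; the stream name, partition-key scheme, and event shape are assumptions, not Acoustic's actual schema.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream name and event shape for illustration only.
STREAM_NAME = "send-engine-messages"

def publish_send_events(events):
    """Batch send-message events onto the Kinesis stream.

    Each record is keyed by recipient id so messages for the same
    recipient land on the same shard and preserve ordering.
    """
    records = [
        {
            "Data": json.dumps(event).encode("utf-8"),
            "PartitionKey": str(event["recipient_id"]),
        }
        for event in events
    ]
    # PutRecords accepts up to 500 records per call.
    response = kinesis.put_records(StreamName=STREAM_NAME, Records=records)
    return response["FailedRecordCount"]
```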
Acoustic implemented a Kinesis stream-based auto-scaler (Diagram-1). The system calculates the number of records to be processed coming from Amazon S3 and writes the count to ElastiCache. Lambda then checks the calculated count and adjusts the number of Kinesis shards up or down to match the expected load.
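A minimal sketch of such an auto-scaler Lambda follows, assuming a Redis-backed ElastiCache counter, a hypothetical per-shard message budget, and the UpdateShardCount API; Acoustic's actual thresholds, key names, and scaling logic are not published.

```python
import math
import os

import boto3
import redis

kinesis = boto3.client("kinesis")
cache = redis.Redis(host=os.environ["CACHE_HOST"], port=6379)

STREAM_NAME = os.environ.get("STREAM_NAME", "send-engine-messages")
MIN_SHARDS = 5                 # scaled-down floor, as in Diagram-3
MSGS_PER_SHARD = 5_000_000     # hypothetical per-shard budget per interval

def handler(event, context):
    # Pending message count, calculated from the S3 inputs and stored in ElastiCache.
    pending = int(cache.get("pending_send_messages") or 0)

    target = max(MIN_SHARDS, math.ceil(pending / MSGS_PER_SHARD))
    current = kinesis.describe_stream_summary(StreamName=STREAM_NAME)[
        "StreamDescriptionSummary"]["OpenShardCount"]

    if target != current:
        # UNIFORM_SCALING splits or merges shards evenly to reach the target.
        kinesis.update_shard_count(
            StreamName=STREAM_NAME,
            TargetShardCount=target,
            ScalingType="UNIFORM_SCALING",
        )
    return {"current": current, "target": target}
```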
Implementation
Diagram-3 illustrates the transient-state auto scaling of shards over time. The stream starts at a scaled-down minimum of five shards. When a 34M-message send is evaluated, the auto-scaler adds two shards to handle the demand. Subsequently, to continue handling the existing 34M messages plus an additional 15M send messages in a new interval, the auto-scaler adds four more shards. At the end of the period, when the combined message count drops to 26M, the stream is scaled down to three shards.
Now that the Kinesis streams could scale up and down, Acoustic needed to match the number of Amazon EKS pods to the number of shards. To scale the Amazon EKS pods quickly with the current number of shards, a KEDA event-based pod scaler was introduced, using the Kinesis shard count as its scaling metric.
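The metric the pod scaler reacts to is simply the stream's open shard count. The snippet below shows how that signal can be read and turned into a replica target; it is an illustrative sketch of the scaling signal only (KEDA itself is configured declaratively, not through application code), and the one-pod-per-shard ratio is an assumption.

```python
import math
import boto3

kinesis = boto3.client("kinesis")

def desired_pod_count(stream_name: str, shards_per_pod: int = 1) -> int:
    """Derive a pod replica target from the stream's open shard count.

    This mirrors the signal an event-based scaler acts on: as the shard
    count changes, the consumer fleet is resized to match it.
    """
    summary = kinesis.describe_stream_summary(StreamName=stream_name)
    shards = summary["StreamDescriptionSummary"]["OpenShardCount"]
    return math.ceil(shards / shards_per_pod)
```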
To achieve real savings on the Amazon EKS side, there was also a need to scale the underlying worker nodes along with the pods. Karpenter was introduced to provision and remove nodes as pod demand changed.
With the new patterns in place, maximum throughput increased 10X to more than 100 million transactions per hour. Among the new data access patterns implemented, Acoustic leveraged Amazon S3 and ElastiCache, reducing pressure on the shared relational datastore.
Conclusion
In this blog post, we outlined how Acoustic’s modernization pathway leveraged Amazon Web Services cloud-native services and developed patterns to build a scalable and resilient Send Engine solution to meet business volatility and growth. Adopting new patterns and consolidating platform technologies that can easily be leveraged across product lines accelerated Acoustic’s business priorities and agility in the cloud.
If you want to get started with application modernization, see the resources under Further Reading below.
Further Reading
- Auto scaling Amazon Kinesis Data Streams using Amazon CloudWatch and Amazon Web Services Lambda
- Introducing Karpenter – An Open-Source High-Performance Kubernetes Cluster Autoscaler