Amazon Aurora FAQs
General
Performance
CloudWatch Database Insights is a monitoring and metrics solution that simplifies and enhances database troubleshooting. It automates telemetry collection, including metrics, logs, and traces, eliminating the need for manual setup and configuration. By consolidating this telemetry into Amazon CloudWatch, CloudWatch Database Insights provides a unified view of database performance and health.
Key benefits of CloudWatch Database Insights include:
Effortless Telemetry Collection: Automatically gathers database metrics, logs, and traces, minimizing setup time.
Curated Insights: Provides pre-built dashboards, alarms, and insights for monitoring and optimizing database performance, with minimal configuration needed to get started.
Unified CloudWatch View: Combines telemetry from multiple databases into one view for simplified monitoring.
AI/ML Capabilities: Uses AI/ML to detect anomalies, reducing manual troubleshooting efforts.
Application Context Monitoring: Allows users to correlate database performance with application performance.
Fleet and Instance-Level Views: Offers both high-level fleet monitoring and detailed instance views for root cause analysis.
Seamless Amazon integration: Integrates with Amazon CloudWatch Application Signals and Amazon X-Ray, enabling a comprehensive observability experience.
RDS Performance Insights is a database performance tuning and monitoring feature which allows customers to assess the load on their database and determine when and where to take action. CloudWatch Database Insights is a new database observability feature that inherits all the capabilities of Performance Insights along with fleet-level monitoring, integration with application performance monitoring, and correlation of database metrics with logs and events.
Billing
Aurora offers you the flexibility to optimize your database spend by choosing between two configuration options based on your price-performance and price-predictability needs. The two configuration options are Aurora Standard and Aurora I/O-Optimized. Neither option requires upfront I/O or storage provisioning and both can scale I/O operations to support your most demanding applications.
Aurora Standard is a database cluster configuration that offers cost-effective pricing for the vast majority of applications with low to moderate I/O usage. With Aurora Standard, you pay for database instances, storage, and pay-per-request I/O.
Aurora I/O-Optimized is a database cluster configuration that delivers improved price performance for I/O-intensive applications such as payment processing systems, ecommerce systems, and financial applications. Also, if your I/O spend exceeds 25% of your total Aurora database spend, you can save up to 40% on costs for I/O-intensive workloads with Aurora I/O-Optimized. Aurora I/O-Optimized offers predictable pricing for all applications as there are no charges for read and write I/O operations, making this configuration ideal for workloads with high I/O variability.
Aurora I/O-Optimized is the ideal choice when you need predictable costs for any application. It delivers improved price performance for I/O-intensive applications, which require high write throughput or run analytical queries processing large amounts of data. If your I/O spend exceeds 25% of your total Aurora bill, you can save up to 40% on costs for I/O-intensive workloads with Aurora I/O-Optimized.
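The 25% rule of thumb above can be sanity-checked with simple arithmetic. The sketch below compares the two configurations for a hypothetical monthly bill; the uplift multipliers for I/O-Optimized instance and storage rates are illustrative assumptions, not published prices — check the Aurora pricing page for actual rates.

```python
# Illustrative comparison of Aurora Standard vs. Aurora I/O-Optimized billing.
# Uplift factors are assumptions for this sketch, not official pricing.

def standard_bill(instance, storage, io):
    # Aurora Standard: pay for instances, storage, and per-request I/O.
    return instance + storage + io

def io_optimized_bill(instance, storage, instance_uplift=1.30, storage_uplift=2.25):
    # Aurora I/O-Optimized: higher instance/storage rates, no I/O charges.
    return instance * instance_uplift + storage * storage_uplift

# A workload whose I/O is well above 25% of the Standard bill:
instance, storage, io = 1000.0, 100.0, 900.0   # currency units per month
std = standard_bill(instance, storage, io)      # 2000.0
opt = io_optimized_bill(instance, storage)      # 1525.0
io_share = io / std                             # 0.45 -> I/O-Optimized wins here
```

With I/O at 45% of the Standard bill, I/O-Optimized comes out cheaper under these assumed rates; as the I/O share drops toward and below 25%, Standard becomes the better choice.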
Hardware and Scaling
You can scale the compute resources allocated to your DB Instance in the Amazon Web Services Management Console by selecting the desired DB Instance and clicking the Modify button. Memory and CPU resources are modified by changing your DB Instance class.
When you modify your DB Instance class, your requested changes will be applied during your specified maintenance window. Alternatively, you can use the "Apply Immediately" flag to apply your scaling requests immediately. Both of these options will have an availability impact for a few minutes as the scaling operation is performed. Bear in mind that any other pending system changes will also be applied.
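The same modification can be scripted with the AWS SDK. A minimal boto3 sketch follows — the identifier and instance class are placeholder values, and the API call itself is commented out because it requires live credentials and a real instance:

```python
# Sketch: scale an Aurora DB instance by changing its DB instance class.
# The boto3 call is commented out; it needs AWS credentials and a real instance.
# import boto3

params = {
    "DBInstanceIdentifier": "my-aurora-instance",  # placeholder name
    "DBInstanceClass": "db.r6g.2xlarge",           # target compute/memory size
    "ApplyImmediately": True,                      # apply now instead of waiting
                                                   # for the maintenance window
}
# rds = boto3.client("rds")
# rds.modify_db_instance(**params)
```

Setting `ApplyImmediately` to `False` (the default) defers the change to your maintenance window, matching the console behavior described above.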
Backup and Restore
High Availability and Replication
Amazon Aurora MySQL and Amazon Aurora PostgreSQL support Amazon Aurora Replicas, which share the same underlying volume as the primary instance in the same Amazon Web Services China Region. Updates made by the primary are visible to all Amazon Aurora Replicas. With Amazon Aurora MySQL, you can also create cross-region MySQL Read Replicas based on MySQL’s binlog-based replication engine. In MySQL Read Replicas, data from your primary instance is replayed on your replica as transactions. For most use cases, including read scaling and high availability, we recommend using Amazon Aurora Replicas.
You have the flexibility to mix and match these two replica types based on your application needs:
| Feature | Amazon Aurora Replicas | MySQL Replicas |
|---|---|---|
| Number of replicas | Up to 15 | Up to 5 |
| Replication type | Asynchronous (milliseconds) | Asynchronous (seconds) |
| Performance impact on primary | Low | High |
| Replica location | In-region | Cross-region |
| Act as failover target | Yes (no data loss) | Yes (potentially minutes of data loss) |
| Automated failover | Yes | No |
| Support for user-defined replication delay | No | Yes |
| Support for different data or schema vs. primary | No | Yes |
Yes, you can set up cross-region Aurora replicas using either physical or logical replication.

Physical replication, called Aurora Global Database, uses dedicated infrastructure that leaves your databases entirely available to serve your application. It is available for both Aurora MySQL and Aurora PostgreSQL. For low-latency global reads and disaster recovery, we recommend using Aurora Global Database.

Aurora supports native logical replication in each database engine (binlog for MySQL and replication slots for PostgreSQL), so you can replicate to Aurora and non-Aurora databases, even across regions.

Aurora MySQL also offers an easy-to-use logical cross-region read replica feature that supports Amazon Web Services China Regions, including the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. It is based on single-threaded MySQL binlog replication, so the replication lag will be influenced by the change/apply rate and by delays in network communication between the specific regions selected.
You can add Amazon Aurora Replicas. Aurora Replicas in the same Amazon Web Services China Region share the same underlying storage as the primary instance. Any Aurora Replica can be promoted to become primary without any data loss and therefore can be used for enhancing fault tolerance in the event of a primary DB Instance failure. To increase database availability, simply create 1 to 15 replicas, in any of 3 AZs, and Amazon RDS will automatically include them in failover primary selection in the event of a database outage.
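When adding replicas programmatically, you can also influence which replica Amazon RDS prefers during failover by setting its promotion tier. A hedged boto3 sketch — identifiers are placeholders and the API call is commented out since it needs a live cluster:

```python
# Sketch: add an Aurora Replica with an explicit failover priority.
# Identifiers are placeholders; the call is commented out.
# import boto3

params = {
    "DBInstanceIdentifier": "my-aurora-replica-1",  # placeholder
    "DBClusterIdentifier": "my-aurora-cluster",     # existing cluster (placeholder)
    "DBInstanceClass": "db.r6g.large",
    "Engine": "aurora-mysql",
    "PromotionTier": 1,  # lower tier = preferred failover target (0-15)
}
# boto3.client("rds").create_db_instance(**params)
```

During failover, Aurora promotes the replica with the lowest promotion tier (and, within a tier, the largest instance size), so tiers let you steer which replica becomes the new primary.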
You can use Aurora Global Database if you want your database to span Amazon Web Services China Regions. This will replicate your data with no impact on database performance, and provide disaster recovery from region-wide outages.
Amazon RDS will automatically detect a problem with your primary instance and trigger a failover. If you are using the Cluster Endpoint, your read/write connections will be automatically redirected to an Amazon Aurora Replica that will be promoted to primary.
In addition, the read traffic that your Aurora Replicas were serving will be briefly interrupted. If you are using the Cluster Reader Endpoint to direct your read traffic to the Aurora Replicas, read-only connections will be directed to the newly promoted Aurora Replica until the old primary node is recovered as a replica.
Since Amazon Aurora Replicas share the same data volume as the primary instance in the same Amazon Web Services China Region, there is virtually no replication lag. We typically observe lag times in the tens of milliseconds. For MySQL Read Replicas, the replication lag can grow indefinitely based on the change/apply rate as well as delays in network communication. However, under typical conditions, replication lag of under a minute is common.
Cross-region replicas using logical replication will be influenced by the change/apply rate and delays in network communication between the specific regions selected. Cross-region replicas using Aurora Global Database will have a typical lag of under a second.
Yes, you can set up binlog replication between an Aurora MySQL instance and an external MySQL database. The other database can run on Amazon RDS, or as a self-managed database on Amazon Web Services, or completely outside of Amazon Web Services.
If you're running Aurora MySQL 5.7, consider setting up GTID-based binlog replication. This will provide complete consistency so your replication won’t miss transactions or generate conflicts, even after failover or downtime.
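GTID-based replication is enabled through cluster-level parameters in a DB cluster parameter group. A boto3 sketch of the change — the parameter group name is a placeholder and the call is commented out; the parameter names are the standard MySQL GTID settings:

```python
# Sketch: turn on GTID-based binlog replication via cluster parameters.
# The parameter group name is a placeholder; the call is commented out.
# import boto3

gtid_parameters = [
    {"ParameterName": "gtid_mode", "ParameterValue": "ON",
     "ApplyMethod": "pending-reboot"},
    {"ParameterName": "enforce_gtid_consistency", "ParameterValue": "ON",
     "ApplyMethod": "pending-reboot"},
]
# boto3.client("rds").modify_db_cluster_parameter_group(
#     DBClusterParameterGroupName="my-aurora-mysql57-params",  # placeholder
#     Parameters=gtid_parameters,
# )
```

Both parameters are static, so the change takes effect after the instances reboot; `enforce_gtid_consistency` must be on before `gtid_mode` can be fully enabled.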
Amazon Aurora Global Database is a feature that allows a single Amazon Aurora database to span Amazon Web Services China Regions. It replicates your data with no impact on database performance, enables fast local reads in each region with typical latency of less than a second, and provides disaster recovery from region-wide outages. In the unlikely event of a regional degradation or outage, a secondary region can be promoted to full read/write capabilities in less than 1 minute.
This feature is available for Aurora MySQL and Aurora PostgreSQL.
No. If your primary region becomes unavailable, you can manually remove a secondary region from an Aurora Global Database and promote it to take full reads and writes. You will also need to point your application to the newly promoted region.
Security
Serverless
Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora. With Aurora Serverless, you can run your database in the cloud without managing database capacity. Manually managing database capacity can be time consuming and lead to inefficient use of database resources. With Aurora Serverless, you create a database, specify the desired database capacity range, and connect your application. Aurora automatically adjusts the capacity within the range specified based on your application’s needs.
You pay on a per-second basis for the database capacity you use when the database is active. Learn more about Aurora Serverless and get started in a few steps in the Amazon RDS Management Console.
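The capacity range described above is expressed in Aurora Capacity Units (ACUs). A boto3 sketch of creating a Serverless v2 cluster — identifiers and version are placeholders and the call is commented out, since it needs live credentials:

```python
# Sketch: create an Aurora Serverless v2 cluster with a capacity range in ACUs.
# Identifiers and the engine version are placeholders; the call is commented out.
# import boto3

params = {
    "DBClusterIdentifier": "my-serverless-cluster",  # placeholder
    "Engine": "aurora-mysql",
    "EngineVersion": "8.0.mysql_aurora.3.05.2",      # example version
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,  # store the password in Secrets Manager
    "ServerlessV2ScalingConfiguration": {
        "MinCapacity": 0.5,   # Aurora scales within this ACU range
        "MaxCapacity": 16.0,
    },
}
# boto3.client("rds").create_db_cluster(**params)
```

After creating the cluster, you add instances with the `db.serverless` instance class; Aurora then scales each instance between `MinCapacity` and `MaxCapacity` as load changes.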
Parallel Query
Faster performance: Parallel Query can speed up analytical queries by up to two orders of magnitude.
Operational simplicity and data freshness: you can issue a query directly over the current transactional data in your Aurora cluster.
Transactional and analytical workloads on the same database: Parallel Query allows Aurora to maintain high transaction throughput alongside concurrent analytical queries.
No. I/O costs for your query are metered at the storage layer and will be the same or larger with Parallel Query turned on. Your benefit is the improvement in query performance.

There are two reasons for potentially higher I/O costs with Parallel Query. First, even if some of the data in a table is in the buffer pool, PQ requires all data to be scanned at the storage layer, incurring I/O. Second, a side effect of avoiding contention in the buffer pool is that running a PQ query does not warm up the buffer pool. As a result, consecutive runs of the same PQ query will incur the full I/O cost.
Zero-ETL integrations
You should use Amazon Aurora zero-ETL integration with Amazon Redshift when you need near real-time access to transactional data. This integration allows you to take advantage of Amazon Redshift ML with straightforward SQL commands.
Aurora zero-ETL integration with Amazon Redshift is available on the Aurora MySQL-Compatible Edition for Aurora MySQL 3.05.2 version (compatible with MySQL 8.0.32) and higher. Aurora zero-ETL integration with Amazon Redshift is available on the Aurora PostgreSQL-Compatible Edition for Aurora PostgreSQL 16.4 version and higher.
Aurora zero-ETL integration with Amazon Redshift removes the need for you to build and maintain complex data pipelines. You can consolidate data from multiple tables from various Aurora database clusters to a single Amazon Redshift database cluster and run near real-time analytics and ML using Amazon Redshift on petabytes of transactional data from Aurora. You can select the databases and tables to be replicated from Aurora to Amazon Redshift. Based on your analytics needs, data filtering of specific databases and tables helps you selectively bring data into Amazon Redshift.
Aurora zero-ETL integration with Amazon Redshift is compatible with Aurora Serverless v2. When using both Aurora Serverless v2 and Amazon Redshift Serverless you can generate near real-time analytics on transactional data without having to manage any infrastructure for data pipelines.
You can get started by using the Amazon RDS console to create the zero-ETL integration by specifying the Aurora source and Amazon Redshift destination. Once the integration has been created, the Aurora database will be replicated to Amazon Redshift and you can start querying the data once initial seeding is completed. For more information, read the getting started guide for Aurora zero-ETL integrations with Amazon Redshift.
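The console steps above can also be scripted. A hedged boto3 sketch of creating the integration — the ARNs and integration name are placeholders, the `DataFilter` expression syntax is an assumption (check the zero-ETL documentation for the exact filter grammar), and the call is commented out:

```python
# Sketch: create an Aurora -> Amazon Redshift zero-ETL integration.
# ARNs are placeholders; the call is commented out.
# import boto3

params = {
    "IntegrationName": "orders-zero-etl",  # placeholder
    "SourceArn": "arn:aws-cn:rds:cn-north-1:123456789012:cluster:my-aurora-cluster",
    "TargetArn": "arn:aws-cn:redshift-serverless:cn-north-1:123456789012:namespace/example",
    # Optional: replicate only selected databases/tables.
    # The filter syntax here is an assumption; see the zero-ETL docs.
    "DataFilter": "include: mydb.orders",
}
# boto3.client("rds").create_integration(**params)
```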
Ongoing processing of data changes by zero-ETL integration is offered at no additional charge. You pay for existing Amazon RDS and Amazon Redshift resources used to create and process the change data generated as part of a zero-ETL integration. These resources could include:
Additional I/O and storage used by enabling enhanced binlog
Snapshot export costs for the initial data export to seed your Amazon Redshift databases
Additional Amazon Redshift storage for storing replicated data
Additional Amazon Redshift compute for processing data replication
Cross-AZ data transfer costs for moving data from source to target
For more information, visit the Aurora pricing page.
Yes, you can manage and automate the configuration and deployment of resources needed for an Aurora zero-ETL integration with Amazon Redshift using Amazon CloudFormation. For more information, visit CloudFormation templates with the zero-ETL integration.
Trusted Language Extensions for PostgreSQL
Trusted Language Extensions (TLE) for PostgreSQL enables developers to build high performance PostgreSQL extensions and run them safely on Amazon Aurora and Amazon RDS. In doing so, TLE improves your time to market and removes the burden placed on database administrators to certify custom and third-party code for use in production database workloads. You can move forward as soon as you decide an extension meets your needs. With TLE, independent software vendors (ISVs) can provide new PostgreSQL extensions to customers running on Aurora and Amazon RDS.
PostgreSQL extensions are executed in the same process space for high performance. However, extensions might have software defects that can crash the database.
TLE for PostgreSQL offers multiple layers of protection to mitigate this risk. TLE is designed to limit access to system resources. The rds_superuser role can determine who is permitted to install specific extensions, and these changes can only be made through the TLE API. TLE is also designed to limit the impact of an extension defect to a single database connection. In addition to these safeguards, TLE gives DBAs in the rds_superuser role fine-grained, online control over who can install extensions, and lets them create a permissions model for running them. Only users with sufficient privileges can run “CREATE EXTENSION” on a TLE extension. DBAs can also allow-list “PostgreSQL hooks” required for more sophisticated extensions that modify the database’s internal behavior; such hooks typically require elevated privileges.
TLE for PostgreSQL is available for Amazon Aurora PostgreSQL-Compatible Edition and Amazon RDS on PostgreSQL on versions 14.5 and higher. TLE is implemented as a PostgreSQL extension itself and you can activate it from the rds_superuser role similar to other extensions supported on Aurora and Amazon RDS.
You can run TLE for PostgreSQL in PostgreSQL 14.5 or higher in Amazon Aurora and Amazon RDS.
TLE for PostgreSQL is currently available in all Amazon Web Services Regions, including Amazon Web Services China (Beijing) Region, operated by Sinnet, and Amazon Web Services China (Ningxia) Region, operated by NWCD.
TLE for PostgreSQL is available to Aurora and Amazon RDS customers at no additional cost.
Aurora and Amazon RDS support a curated set of over 85 PostgreSQL extensions. Amazon Web Services manages the security risks for each of these extensions under the Amazon Web Services shared responsibility model. The extension that implements TLE for PostgreSQL is included in this set. Extensions that you write or that you obtain from third-party sources and install in TLE are considered part of your application code. You are responsible for the security of your applications that use TLE extensions.
You can build developer functions, such as bitmap compression and differential privacy (such as publicly accessible statistical queries that protect privacy of individuals).
TLE for PostgreSQL currently supports JavaScript, PL/pgSQL, Perl, and SQL.
Once the rds_superuser role activates TLE for PostgreSQL, you can deploy TLE extensions using the SQL CREATE EXTENSION command from any PostgreSQL client, such as psql. This is similar to how you would create a user-defined function written in a procedural language, such as PL/pgSQL or PL/Perl. You can control which users have permission to deploy TLE extensions and use specific extensions.
TLE for PostgreSQL accesses your PostgreSQL database exclusively through the TLE API. The trusted languages supported by TLE include all functions of the PostgreSQL server programming interface (SPI) and support for PostgreSQL hooks, including the check password hook.
You can learn more about the TLE for PostgreSQL project on the official TLE GitHub page.
Amazon RDS Blue/Green Deployments
Amazon RDS Blue/Green Deployments are available in Amazon Aurora MySQL-Compatible Edition, Amazon RDS for MySQL, and Amazon RDS for MariaDB.
Amazon RDS Blue/Green Deployments are available in Amazon Aurora MySQL-Compatible Edition versions 5.6 and higher, RDS for MySQL versions 5.7 and higher, and RDS for MariaDB versions 10.2 and higher. Learn more about available versions in the Aurora, RDS for MySQL, and RDS for MariaDB documentation.
Amazon RDS Blue/Green Deployments are available in all Amazon Web Services Regions, including Amazon Web Services China (Beijing) Region, operated by Sinnet and Amazon Web Services China (Ningxia) Region, operated by NWCD.
You will incur the same price for running your workloads on green instances as you do for blue instances. The cost of running on blue and green instances includes our current standard pricing for db.instances, the cost of storage, the cost of read/write I/Os, and any enabled features, such as backups and Amazon RDS Performance Insights. Effectively, you pay approximately 2x the cost of running workloads on db.instances for the lifespan of the blue/green deployment.
For example: You have an RDS for MySQL 5.7 database running on two r5.2xlarge db.instances, a primary database instance and a read replica, in the Amazon Web Services China (Ningxia) Region with a Multi-AZ (MAZ) configuration. Each of the r5.2xlarge db.instances is configured with 20 GiB of General Purpose Amazon Elastic Block Store (EBS) storage. You create a clone of the blue instance topology using Amazon RDS Blue/Green Deployments, run it for 15 days (360 hours), and then delete the blue instances after a successful switchover. The blue instances cost ¥4802.85 for 15 days at an on-demand rate of ¥13.34/hr (instance + EBS cost). The total cost to you for using Blue/Green Deployments for those 15 days is ¥9605.7, which is 2x the cost of running the blue instances for that time period.
Amazon RDS Blue/Green Deployments help you make safer, simpler, and faster database changes, such as major or minor version upgrades, schema changes, instance scaling, engine parameter changes, and maintenance updates.
In Amazon RDS Blue/Green Deployments, the blue environment is your current production environment. The green environment is your staging environment that will become your new production environment after switchover.
When Amazon RDS Blue/Green Deployments initiate a switchover, they block writes to both the blue and green environments until the switchover is complete. During switchover, the staging (green) environment catches up with the production system, ensuring data is consistent between the staging and production environments. Once the production and staging environments are fully in sync, Blue/Green Deployments promote the staging environment as the new production environment by redirecting traffic to it. Blue/Green Deployments are designed to enable writes on the green environment only after switchover is complete, ensuring zero data loss during the switchover process.
Amazon RDS Blue/Green Deployments do not delete your old production environment. If needed, you can access it for additional validations and performance/regression testing. If you no longer need the old production environment, you can delete it. Standard billing charges apply on old production instances until you delete them.
Amazon RDS Blue/Green Deployments switchover guardrails block writes on your blue and green environments until your green environment catches up before switching over. Blue/Green Deployments also perform health checks on the primary and replicas in your blue and green environments, as well as replication health checks, for example, to see whether replication has stopped or produced errors. They detect long-running transactions between your blue and green environments. You can specify your maximum tolerable downtime, as low as 30 seconds; if an ongoing transaction exceeds this limit, your switchover will time out.
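The downtime bound above maps to a timeout parameter on the switchover call. A boto3 sketch — the deployment identifier is a placeholder and the call is commented out, since it needs a live blue/green deployment:

```python
# Sketch: switch over a Blue/Green Deployment with a bounded downtime window.
# The identifier is a placeholder; the call is commented out.
# import boto3

params = {
    "BlueGreenDeploymentIdentifier": "bgd-example123",  # placeholder
    "SwitchoverTimeout": 120,  # max tolerable seconds; minimum allowed is 30
}
# boto3.client("rds").switchover_blue_green_deployment(**params)
```

If the guardrail checks cannot complete within `SwitchoverTimeout`, the switchover times out and both environments are left unchanged.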
No, Amazon RDS Blue/Green Deployments do not support Global Databases, Amazon RDS Proxy, cross-Region read replicas, or cascaded read replicas.
No, at this time you cannot use Amazon RDS Blue/Green Deployments to rollback changes.