Amazon Lambda FAQs

General

Amazon Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other Amazon Web Services services or call it directly from any web or mobile app.

Serverless computing allows you to build and run applications and services without thinking about servers. With serverless computing, your application still runs on servers, but all the server management is done by Amazon Web Services. At the core of serverless computing is Amazon Lambda, which lets you run your code without provisioning or managing servers.

Please see our documentation for a complete list of event sources.

Amazon Web Services offers a set of compute services to meet a range of needs.

Amazon EC2 offers flexibility, with a wide range of instance types and the option to customize the operating system, network and security settings, and the entire software stack, allowing you to easily move existing applications to the cloud. With Amazon EC2 you are responsible for provisioning capacity, monitoring fleet health and performance, and designing for fault tolerance and scalability. Amazon Elastic Beanstalk offers an easy-to-use service for deploying and scaling web applications in which you retain ownership and full control over the underlying EC2 instances.

Amazon Lambda makes it easy to execute code in response to events, such as changes to Amazon S3 buckets, updates to an Amazon DynamoDB table, or custom events generated by your applications or devices. With Lambda you do not have to provision your own instances; Lambda performs all the operational and administrative activities on your behalf, including capacity provisioning, monitoring fleet health, applying security patches to the underlying compute resources, deploying your code, running a web service front end, and monitoring and logging your code. Amazon Lambda provides easy scaling and high availability to your code without additional effort on your part.

Amazon Lambda offers an easy way to accomplish many activities in the cloud. For example, you can use Amazon Lambda to build mobile back-ends that retrieve and transform data from Amazon DynamoDB, handlers that compress or transform objects as they are uploaded to Amazon S3, auditing and reporting of API calls made to any Amazon Web Service, and server-less processing of streaming data using Amazon Kinesis.

Amazon Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), and C# (.NET Core). Your code can include existing libraries, even native ones. Please read our documentation on using Node.js, Python, Java, and C#.

No. Amazon Lambda operates the compute infrastructure on your behalf, allowing it to perform health checks, apply security patches, and do other routine maintenance.

Each Amazon Lambda function runs in its own isolated environment, with its own resources and file system view. Amazon Lambda uses the same techniques as Amazon EC2 to provide security and separation at the infrastructure and execution levels.

Amazon Lambda stores code in Amazon S3 and encrypts it at rest. Amazon Lambda performs additional integrity checks while your code is in use.

Amazon Lambda functions

The code you run on Amazon Lambda is uploaded as a “Lambda function”. Each function has associated configuration information, such as its name, description, entry point, and resource requirements. The code must be written in a “stateless” style, i.e. it should assume there is no affinity to the underlying compute infrastructure. Local file system access, child processes, and similar artifacts may not extend beyond the lifetime of the request, and any persistent state should be stored in Amazon S3, Amazon DynamoDB, or another Internet-available storage service. Lambda functions can include libraries, even native ones.

To improve performance, Amazon Lambda may choose to retain an instance of your function and reuse it to serve a subsequent request, rather than creating a new copy. Your code should not assume that this will always happen.
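
In practice, this means anything your code initializes outside the handler (SDK clients, connections, files cached in /tmp) may still be present on a subsequent invocation. A minimal Python sketch of this pattern, assuming a hypothetical DynamoDB table name:

import boto3

# Created once per execution environment; reused if Lambda retains the instance.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-table")  # hypothetical table name

def handler(event, context):
    # Per-invocation work only; never assume the environment (or /tmp contents) survives.
    response = table.get_item(Key={"id": event["id"]})
    return response.get("Item", {})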

Each Lambda function receives 512MB of non-persistent disk space in its own /tmp directory.

You can configure each Lambda function with its own ephemeral storage between 512MB and 10,240MB, in 1MB increments. The ephemeral storage is available in each function’s /tmp directory.

Each function has access to 512MB of storage at no additional cost. When configuring your functions with more than 512MB of ephemeral storage, you will be charged based on the amount of storage you configure, and how long your function runs, metered in 1ms increments. To learn more, see Amazon Lambda Pricing.

Keeping functions stateless enables Amazon Lambda to rapidly launch as many copies of the function as needed to scale to the rate of incoming events. While Amazon Lambda’s programming model is stateless, your code can access stateful data by calling other web services, such as Amazon S3 or Amazon DynamoDB.

Yes. Amazon Lambda allows you to use normal language and operating system features, such as creating additional threads and processes. Resources allocated to the Lambda function, including memory, execution time, disk, and network use, must be shared among all the threads/processes it uses. You can launch processes using any language supported by Amazon Linux.
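
For example, a Python function can launch a child process for a command available in the execution environment; a minimal sketch:

import subprocess

def handler(event, context):
    # Run an OS command available in the Lambda execution environment.
    result = subprocess.run(["uname", "-a"], capture_output=True, text=True, check=True)
    return {"output": result.stdout}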

Lambda attempts to impose as few restrictions as possible on normal language and operating system activities, but there are a few activities that are disabled: inbound network connections are blocked by Amazon Lambda, only TCP/IP sockets are supported for outbound connections, and ptrace (debugging) system calls are blocked. TCP port 25 traffic is also blocked as an anti-spam measure.

If you are using Node.js or Python, you can author the code for your function using the inline editor in the Amazon Lambda console. Go to the console to get started. You can also package the code (and any dependent libraries) as a ZIP and upload it using the Amazon Lambda console from your local environment, or specify an Amazon S3 location where the ZIP file is located. Uploads must be no larger than 50MB (compressed). You can use the Amazon Eclipse plugin to author and deploy Lambda functions in Java, and the Visual Studio plugin to author and deploy Lambda functions in C# and Node.js.

You can package the code (and any dependent libraries) as a ZIP and upload it using the Amazon CLI from your local environment, or specify an Amazon S3 location where the ZIP file is located. Uploads must be no larger than 50MB (compressed). Visit the Lambda Getting Started guide to get started.
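
As a rough sketch of the same workflow using the Python SDK (boto3), with hypothetical function, role, and file names:

import boto3

client = boto3.client("lambda")

# Upload a local deployment package directly (the 50MB compressed limit applies).
with open("function.zip", "rb") as f:
    client.create_function(
        FunctionName="my-function",                             # hypothetical name
        Runtime="python3.8",
        Role="arn:aws:iam::123456789012:role/lambda-role",      # hypothetical role ARN
        Handler="app.handler",
        Code={"ZipFile": f.read()},
    )

# Or reference a package staged in Amazon S3 instead:
# Code={"S3Bucket": "my-bucket", "S3Key": "function.zip"}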

You can easily list, delete, update, and monitor your Lambda functions using the dashboard in the Amazon Lambda console. You can also use the Amazon CLI and Amazon SDK to manage your Lambda functions. Visit the Lambda Developers Guide to learn more.

Amazon Lambda automatically monitors Lambda functions on your behalf, reporting real-time metrics through Amazon CloudWatch, including total requests, latency, error rates, and throttled requests. You can view statistics for each of your Lambda functions via the Amazon CloudWatch console or through the Amazon Lambda console. You can also call third-party monitoring APIs in your Lambda function. Visit Troubleshooting CloudWatch metrics to learn more. Standard Amazon Lambda charges apply when using Lambda’s built-in metrics.
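
As an illustration, the built-in metrics can also be read programmatically from the AWS/Lambda CloudWatch namespace; a sketch with the Python SDK (the function name is hypothetical):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Total invocations, errors, and throttles for one function over the past hour.
for metric in ("Invocations", "Errors", "Throttles"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName=metric,
        Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Sum"],
    )
    print(metric, stats["Datapoints"])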

Amazon Lambda automatically integrates with Amazon CloudWatch logs, creating a log group for each Lambda function and providing basic application lifecycle event log entries, including logging the resources consumed for each use of that function. You can easily insert additional logging statements into your code. You can also call third-party logging APIs in your Lambda function. Visit Troubleshooting Lambda functions to learn more. Amazon CloudWatch Logs rates will apply.

You do not have to scale your Lambda functions – Amazon Lambda scales them automatically on your behalf. Every time an event notification is received for your function, Amazon Lambda quickly locates free capacity within its compute fleet and runs your code. Since your code is stateless, Amazon Lambda can start as many copies of your function as needed without lengthy deployment and configuration delays. There are no fundamental limits to scaling a function. Amazon Lambda will dynamically allocate capacity to match the rate of incoming events.

In the Amazon Lambda resource model, you choose the amount of memory you want for your function, and are allocated proportional CPU power and other resources. For example, choosing 256MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128MB of memory and half as much CPU power as choosing 512MB of memory. You can allocate any amount of memory to your function between 128MB and 10,240MB, in 1MB increments.

All calls made to Amazon Lambda must complete execution within 900 seconds. The default timeout is 3 seconds, but you can set the timeout to any value between 1 and 900 seconds.

Yes. By default, each Amazon Lambda function has a single, current version of the code. Clients of your Lambda function can call a specific version or get the latest implementation. Please read our documentation on versioning Lambda functions.
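
A sketch of publishing and invoking a specific version with the Python SDK (the function name is hypothetical):

import boto3

client = boto3.client("lambda")

# Publish an immutable version from the current code and configuration.
version = client.publish_version(FunctionName="my-function")["Version"]

# Invoke that specific version; omit Qualifier to call the latest ($LATEST) implementation.
client.invoke(FunctionName="my-function", Qualifier=version, Payload=b"{}")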

Deployment times may vary with the size of your code, but Amazon Lambda functions are typically ready to call within seconds of upload.

Yes. You can include your own copy of a library (including the Amazon SDK) in order to use a different version than the default one provided by Amazon Lambda.

Customers running memory- or compute-intensive workloads can now power up their functions. Larger memory functions help multithreaded applications run faster, making them ideal for data and computationally intensive applications like machine learning, batch and ETL jobs, financial modeling, genomics, HPC, and media processing.

You can configure each Lambda function with its own ephemeral storage between 512MB and 10,240MB, in 1MB increments by using the Amazon Lambda console, Amazon Lambda API, or Amazon CloudFormation template during function creation or update.
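
For example, with the Python SDK the setting is the EphemeralStorage size in MB (the function name and size are illustrative):

import boto3

client = boto3.client("lambda")

# Raise the function's /tmp storage from the default 512MB to 2,048MB.
client.update_function_configuration(
    FunctionName="my-function",
    EphemeralStorage={"Size": 2048},
)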

Yes. All data stored in ephemeral storage is encrypted at rest with a key.

You can use Amazon CloudWatch Lambda Insights metrics to monitor your ephemeral storage usage. To learn more, see the Amazon CloudWatch Lambda Insights documentation.

If your application needs durable, persistent storage, consider using Amazon S3 or Amazon EFS. If your application requires storing data needed by code in a single function invocation, consider using Amazon Lambda ephemeral storage as a transient cache.

Yes. However, if your application needs persistent storage, consider using Amazon EFS or Amazon S3. When you enable Provisioned Concurrency for your function, your function's initialization code runs during allocation and every few hours, as running instances of your function are recycled. You can see the initialization time in logs and traces after an instance processes a request. However, initialization is billed even if the instance never processes a request. This Provisioned Concurrency initialization behavior may affect how your function interacts with data you store in ephemeral storage, even when your function isn’t processing requests.

Using Amazon Lambda to process Amazon events

An event source is an Amazon Web Services service or developer-created application that produces events that trigger an Amazon Lambda function to run. Some services publish these events to Lambda by invoking the cloud function directly (for example, Amazon S3). Lambda can also poll resources in other services that do not publish events to Lambda. For example, Lambda can pull records from a Kinesis stream and execute a Lambda function for each message in the stream.

Many other services, such as Amazon CloudTrail, can act as event sources simply by logging to Amazon S3 and using S3 bucket notifications to trigger Amazon Lambda functions.

Please see our documentation for a complete list of event sources.

Events are passed to a Lambda function as an event input parameter. For event sources where events arrive in batches, such as Amazon Kinesis and Amazon DynamoDB Streams, the event parameter may contain multiple events in a single call, based on the batch size you request. To learn more about Amazon S3 event notifications, visit Configuring Notifications for Amazon S3 Events. To learn more about Amazon DynamoDB Streams, visit the DynamoDB Streams Developer Guide. To learn more about invoking Lambda functions using Amazon SNS, visit the Amazon SNS Developer Guide. For more information on Amazon CloudTrail logs and auditing API calls across Amazon Web Services services, see Amazon CloudTrail.
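
As an illustration, a Python handler for a batched source simply iterates over the records in the event parameter (a sketch for an Amazon Kinesis batch):

import base64

def handler(event, context):
    # A single invocation may carry a batch of records from the stream.
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        print(record["kinesis"]["sequenceNumber"], payload)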

From the Amazon Lambda console, you can select a function and associate it with notifications from an Amazon S3 bucket. Alternatively, you can use the Amazon S3 console and configure the bucket’s notifications to send to your Amazon Lambda function. This same functionality is also available through the Amazon SDK and CLI.

You can trigger a Lambda function on DynamoDB table updates by subscribing your Lambda function to the DynamoDB Stream associated with the table. You can associate a DynamoDB Stream with a Lambda function using the Amazon DynamoDB console, the Amazon Lambda console or Lambda’s registerEventSource API.

From the Amazon Lambda console, you can select a Lambda function and associate it with an Amazon Kinesis stream owned by the same account. This same functionality is also available through the Amazon SDK and CLI.
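
Behind the console workflow is an event source mapping; a sketch of creating one with the Python SDK (the stream ARN, function name, and batch size are illustrative):

import boto3

client = boto3.client("lambda")

# Have Lambda poll the stream and invoke the function with batches of up to 100 records.
client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-stream",
    FunctionName="my-function",
    BatchSize=100,
    StartingPosition="LATEST",
)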

The Amazon Kinesis and DynamoDB Streams records sent to your Amazon Lambda function are strictly serialized, per shard. This means that if you put two records in the same shard, Lambda guarantees that your Lambda function will be successfully invoked with the first record before it is invoked with the second record. If the invocation for one record times out, is throttled, or encounters any other error, Lambda will retry until it succeeds (or the record reaches its 24-hour expiration) before moving on to the next record. The ordering of records across different shards is not guaranteed, and processing of each shard happens in parallel.

From the Amazon Lambda console, you can select a Lambda function and associate it with an Amazon SNS topic. This same functionality is also available through the Amazon SDK and CLI.

First, configure the alarm to send Amazon SNS notifications. Then from the Amazon Lambda console, select a Lambda function and associate it with that Amazon SNS topic. See the Amazon CloudWatch Developer Guide for more on setting up Amazon CloudWatch alarms.

You can invoke a Lambda function using a custom event through Amazon Lambda’s invoke API. Only the function’s owner, or another Amazon Web Services account to which the owner has granted permission, can invoke the function. Visit the Lambda Developers Guide to learn more.
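
A minimal sketch of calling the Invoke API with a custom JSON event using the Python SDK (the function name and payload are illustrative):

import json
import boto3

client = boto3.client("lambda")

response = client.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",   # use "Event" for asynchronous invocation
    Payload=json.dumps({"action": "ping"}).encode(),
)
print(json.load(response["Payload"]))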

Amazon Lambda is designed to process events within milliseconds. Latency will be higher immediately after a Lambda function is created, updated, or if it has not been used recently.

You can invoke a Lambda function over HTTPS by defining a custom RESTful API using Amazon API Gateway. This gives you an endpoint for your function which can respond to REST calls like GET, PUT and POST. Read more about using Amazon Lambda with Amazon API Gateway.
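
With the commonly used proxy-style integration, the function returns the HTTP response shape directly; a minimal Python sketch:

import json

def handler(event, context):
    # API Gateway passes the HTTP method, path, headers, and body in the event.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"method": event.get("httpMethod"), "message": "hello"}),
    }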

When called through the Amazon Mobile SDK, Amazon Lambda functions automatically gain insight into the device and application that made the call through the ‘context’ object.

For Amazon S3 bucket notifications and custom events, Amazon Lambda will attempt execution of your function three times in the event of an error condition in your code or if you exceed a service or resource limit. For ordered event sources that Amazon Lambda polls on your behalf, such as Amazon DynamoDB Streams and Amazon Kinesis streams, Lambda will continue attempting execution in the event of a developer code error until the data expires. You can monitor progress through the Amazon Kinesis and Amazon DynamoDB consoles and through the Amazon CloudWatch metrics that Amazon Lambda generates for your function. You can also set Amazon CloudWatch alarms based on error or execution throttling rates.

Using Amazon Lambda to build applications

Lambda-based applications (also referred to as serverless applications) are composed of functions triggered by events. A typical serverless application consists of one or more functions triggered by events such as object uploads to Amazon S3, Amazon SNS notifications, or API actions. These functions can stand alone or leverage other resources such as DynamoDB tables or Amazon S3 buckets. The most basic serverless application is simply a function.

You can deploy and manage your serverless applications using the Amazon Serverless Application Model (Amazon SAM). Amazon SAM is a specification that prescribes the rules for expressing serverless applications on Amazon Web Services. This specification aligns with the syntax used by Amazon CloudFormation today and is supported natively within Amazon CloudFormation as a set of resource types (referred to as "serverless resources"). These resources make it easier for Amazon Web Services customers to use CloudFormation to configure and deploy serverless applications, using existing CloudFormation APIs.

To get started, visit the Amazon Lambda console and download one of our blueprints. The file you download will contain an Amazon SAM file (which defines the Amazon resources in your application), and a .ZIP file (which includes your function’s code). You can then use Amazon CloudFormation commands to package and deploy the serverless application that you just downloaded. For more details, visit our documentation.

The specification is open sourced under Apache 2.0, which allows you and others to adopt and incorporate Amazon SAM into build, deployment, monitoring and management tools with a commercial-friendly license. You can access the Amazon SAM repository on GitHub here.

Provisioned Concurrency

Provisioned Concurrency gives you greater control over the performance of your serverless applications. When enabled, Provisioned Concurrency is designed to keep functions initialized and hyper-ready to respond in double-digit milliseconds.

You can set concurrency on your function through the Amazon Web Services Management Console, the Lambda API, the Amazon CLI, and Amazon CloudFormation. The simplest way to benefit from Provisioned Concurrency is by using Amazon Auto Scaling. You can use Application Auto Scaling to configure schedules, or have Auto Scaling automatically adjust the level of Provisioned Concurrency in real time as demand changes. To learn more about Provisioned Concurrency, see the documentation.
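
A sketch of configuring Provisioned Concurrency with the Python SDK (the function name, alias, and level are illustrative):

import boto3

client = boto3.client("lambda")

# Keep 50 execution environments initialized for a published version or alias.
client.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="live",                      # alias or version number
    ProvisionedConcurrentExecutions=50,
)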

You don’t need to make any changes to your code to use Provisioned Concurrency. It works seamlessly with all existing functions and runtimes. There is no change to the invocation and execution model of Lambda when using Provisioned Concurrency.

Provisioned Concurrency adds a pricing dimension, ‘Provisioned Concurrency’, for keeping functions initialized. When enabled, you only pay for the amount of concurrency that you configure and for the period of time that you configure it. When Provisioned Concurrency is enabled for your function and you execute it, you also pay for Requests and execution Duration. To learn more about the pricing of Provisioned Concurrency, see Amazon Lambda Pricing.

Provisioned Concurrency is ideal for building latency sensitive applications, such as web or mobile backends, synchronously invoked APIs, and interactive microservices. You can easily configure the appropriate amount of concurrency based on your application's unique demand. You can increase the amount of concurrency during times of high demand and lower it, or turn it off completely, when demand decreases.

If the concurrency of a function reaches the configured level, subsequent invocations of the function have the latency and scale characteristics of regular Lambda functions. You can restrict your function to only scale up to the configured level. Doing so prevents the function from exceeding the configured level of Provisioned Concurrency. This is a mechanism to prevent undesired variability in your application when demand exceeds the anticipated amount.

Scalability and availability

Amazon Lambda is designed to use replication and redundancy to provide high availability for both the service itself and for the Lambda functions it operates. There are no maintenance windows or scheduled downtimes for either.

Yes. When you update a Lambda function, there will be a brief window of time, typically less than a minute, when requests could be served by either the old or the new version of your function.

No. Amazon Lambda is designed to run many instances of your functions in parallel. However, Amazon Lambda has a default safety throttle for the number of concurrent executions per account per region (visit here for info on default safety throttle limits). If you wish to submit a request to increase the throttle limit, you can visit our Support Center, click “Create case”, and file a service limit increase request.

On exceeding the throttle limit, Amazon Lambda functions being invoked synchronously will return a throttling error (429 error code). Lambda functions being invoked asynchronously can absorb reasonable bursts of traffic for approximately 15-30 minutes, after which incoming events will be rejected as throttled. In case the Lambda function is being invoked in response to Amazon S3 events, events rejected by Amazon Lambda may be retained and retried by S3 for 24 hours. Events from Amazon Kinesis streams and Amazon DynamoDB streams are retried until the Lambda function succeeds or the data expires. Amazon Kinesis and Amazon DynamoDB Streams retain data for 24 hours.

No, the default limit only applies at an account level.

On failure, Lambda functions being invoked synchronously will respond with an exception. Lambda functions being invoked asynchronously are retried at least 3 times. Events from Amazon Kinesis streams and Amazon DynamoDB streams are retried until the Lambda function succeeds or the data expires. Kinesis and DynamoDB Streams retain data for a minimum of 24 hours.

On exceeding the retry policy for asynchronous invocations, you can configure a “dead letter queue” (DLQ) into which the event will be placed; in the absence of a configured DLQ, the event may be rejected. On exceeding the retry policy for stream-based invocations, the data would have already expired and is therefore rejected.

You can configure an Amazon SQS queue or an Amazon SNS topic as your dead letter queue.
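
A sketch of setting a dead letter queue with the Python SDK (the function name and queue ARN are illustrative):

import boto3

client = boto3.client("lambda")

# Send events that exhaust the asynchronous retry policy to an SQS queue.
client.update_function_configuration(
    FunctionName="my-function",
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq"},
)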

Security and access control

You grant permissions to your Lambda function to access other resources using an IAM role. Amazon Lambda assumes the role while executing your Lambda function, so you always retain full, secure control of exactly which Amazon resources it can use. Visit Setting up Amazon Lambda to learn more about roles.

When you configure an Amazon S3 bucket to send messages to an Amazon Lambda function, a resource policy rule will be created that grants access. Visit the Lambda Developer's Guide to learn more about resource policies and access controls for Lambda functions.

Access controls are managed through the Lambda function’s role. The role you assign to your Lambda function also determines which resource(s) Amazon Lambda can poll on its behalf. Visit the Lambda Developer's Guide to learn more.

Yes. You can access resources behind an Amazon VPC.

To enable VPC support, you need to specify one or more subnets in a single VPC and a security group as part of your function configuration. To disable VPC support, you need to update the function configuration and specify an empty list for the subnet and security group. You can change these settings using the Amazon APIs, CLI, or Amazon Lambda Management Console.
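
A sketch of both operations with the Python SDK (the function name, subnet, and security group IDs are illustrative):

import boto3

client = boto3.client("lambda")

# Enable VPC access by attaching subnets and a security group.
client.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# Disable VPC access by supplying empty lists.
client.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={"SubnetIds": [], "SecurityGroupIds": []},
)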

No. Lambda functions provide access only to a single VPC. If multiple subnets are specified, they must all be in the same VPC. You can connect to other VPCs by peering your VPCs.

Lambda functions configured to access resources in a particular VPC will not have access to the internet as a default configuration. If you need access to external endpoints, you will need to create a NAT in your VPC to forward this traffic and configure your security group to allow this outbound traffic.

Amazon Lambda functions in Java

You can use standard tools like Maven or Gradle to compile your Lambda function. Your build process should mimic the same build process you would use to compile any Java code that depends on the Amazon SDK. Run your Java compiler tool on your source files and include the Amazon SDK 1.9 or later with transitive dependencies on your classpath. For more details, see our documentation.

Lambda provides the Amazon Linux build of openjdk 1.8.

Amazon Lambda functions in Node.js

Yes. You can use NPM packages as well as custom packages.

Yes. Lambda’s built-in sandbox lets you run batch (“shell”) scripts, other language runtimes, utility routines, and executables.

Yes. Any statically linked native module can be included in the ZIP file you upload, as well as dynamically linked modules compiled with an rpath pointing to your Lambda function root directory.

Yes. You can use Node.js' child_process command to execute a binary that you've included in your function or any executable from Amazon Linux that is visible to your function. Alternatively, several NPM packages exist that wrap command line binaries, such as node-ffmpeg.

To deploy a Lambda function written in Node.js, simply package your JavaScript code and dependent libraries as a ZIP. You can upload the ZIP from your local environment, or specify an Amazon S3 location where the ZIP file is located. For more details, see our documentation.

Amazon Lambda functions in Python

Yes. You can use pip to install any Python packages needed.

Amazon Lambda functions in C#

You can create a C# Lambda function using the Visual Studio IDE by selecting "Publish to Amazon Lambda" in the Solution Explorer. Alternatively, you can directly run the "dotnet lambda publish" command from the dotnet CLI with the Amazon Lambda CLI tools installed, which creates a ZIP of your C# source code along with all NuGet dependencies as well as your own published DLL assemblies, and automatically uploads it to Amazon Lambda using the runtime parameter "dotnetcore1.0".

Other topics

You can view the list of supported versions here.

No. Amazon Lambda offers a single version of the operating system and language runtime to all users of the service.

Amazon Lambda is integrated with Amazon CloudTrail. Amazon CloudTrail can record and deliver log files to your Amazon S3 bucket describing the API usage of your account.

Amazon EFS for Amazon Lambda

With Amazon Elastic File System for Amazon Lambda, customers can securely read, write and persist large volumes of data at virtually any scale. Previously, developers added code to their functions to download data from S3 or databases to local temporary storage, limited to 512MB. With EFS for Lambda, developers don't need to write code to download data to temporary storage in order to process it.

Developers can easily connect an existing EFS file system to a mount point in a Lambda function by using the console, CLI or SDK. When the function is first configured, the file system is automatically mounted and made available to function code. You can learn more in the documentation.
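
A sketch of attaching a file system with the Python SDK, assuming an existing EFS access point (the ARN, mount path, and function name are illustrative):

import boto3

client = boto3.client("lambda")

# Mount the EFS access point at /mnt/data inside the function's environment.
client.update_function_configuration(
    FunctionName="my-function",
    FileSystemConfigs=[{
        "Arn": "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0",
        "LocalMountPath": "/mnt/data",
    }],
)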

Yes. Mount targets for Amazon EFS are associated with subnets in a VPC. The Amazon Lambda function needs to be configured to access that VPC.

Using EFS for Lambda is ideal for building machine learning applications or loading large reference files or models, processing or backing up large amounts of data, hosting web content, or developing internal build systems. Customers can also use Lambda access to EFS for keeping state between invocations within a stateful microservice architecture, or sharing files between serverless applications and instance- or container-based applications.

Yes. Data encryption in transit uses industry standard Transport Layer Security (TLS) 1.2 to encrypt data sent between Amazon Lambda functions and the Amazon EFS file systems.

Customers can provision Amazon EFS to encrypt data at rest. Data encrypted at rest is transparently encrypted while being written, and transparently decrypted while being read, so you don’t have to modify your applications. Encryption keys are managed by the Amazon Key Management Service (KMS), eliminating the need to build and maintain a secure key management infrastructure.

There is no additional charge for using EFS for Lambda. Customers pay the standard price for Amazon Lambda and for Amazon EFS. When using Lambda and EFS in the same availability zone, customers are not charged for data transfer. However, if they use VPC peering for Cross-Account access, they will incur data transfer charges. To learn more, please see Pricing .

Lambda Extensions

Amazon Lambda Extensions lets you integrate Lambda with your favorite tools for monitoring, observability, security, and governance. Extensions enable you and your preferred tooling vendors to plug into Lambda’s lifecycle and integrate more deeply into the Lambda execution environment.

Extensions are companion processes that run within Lambda’s execution environment, which is where your function code is executed. In addition, they can run outside of the function invocation - i.e. they start before the function is initialized, run in parallel with the function, can run after the function execution is complete, and can also run before the Lambda service shuts down the execution environment.

You can deploy extensions, using Layers, on one or more Lambda functions using the Console, CLI, or Infrastructure as Code tools such as CloudFormation, the Amazon Serverless Application Model, and Terraform. To get started, visit the documentation.
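
A sketch of attaching an extension distributed as a layer, using the Python SDK (the layer ARN and function name are illustrative):

import boto3

client = boto3.client("lambda")

# Extensions packaged as layers are attached through the function's Layers configuration.
client.update_function_configuration(
    FunctionName="my-function",
    Layers=["arn:aws:lambda:us-east-1:123456789012:layer:example-extension:1"],
)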

You can use extensions with the following runtimes: .NET Core 3.1 (C#/PowerShell) (dotnetcore3.1), Custom runtime (provided), Custom runtime on Amazon Linux 2 (provided.al2), Java 11 (Corretto) (java11), Java 8 (Corretto) (java8.al2), Node.js 12.x (nodejs12.x), Node.js 10.x (nodejs10.x), Python 3.8 (python3.8), Python 3.7 (python3.7), Ruby 2.7 (ruby2.7), Ruby 2.5 (ruby2.5). Lambda Extensions and the functions they’re extending can use different runtimes.

Yes, the total unzipped size of the function and all Extensions cannot exceed the unzipped deployment package size limit of 250 MB.

Extensions may impact the performance of your function because they share resources such as CPU, memory, and storage with the function, and because extensions are initialized before function code. For example, if an extension performs compute intensive operations, you may see your function’s execution duration increase because the extension and your function code share the same CPU resources.

You can use the ExtensionDurationOverhead metric to measure the extra time the extension takes after the function execution, and you can use the MaxMemoryUsed metric to measure the increase in memory used. To understand the impact of a specific extension, you can also use the Duration metric. To learn more, visit the Lambda developer documentation.

Extensions share the same billing model as Lambda functions. When using Lambda functions with extensions, you pay for requests served and the combined compute time used to run your code and all extensions, in 1ms increments. You will be charged for compute time as per existing Lambda duration pricing. To learn more, see Amazon Lambda pricing.

The Lambda lifecycle is made up of three distinct phases: ‘init’, when Amazon Lambda initializes the function, dependencies, and extensions; ‘invoke’, when Lambda executes function and extension code in response to triggers; and ‘shut down’, after function execution has completed but extension code may still be executing, which can last up to two seconds. You will be charged for compute time used to run your extension code during all three phases of the Lambda lifecycle. To learn more about the Lambda lifecycle, see the documentation on the Lambda Execution Environment.

There is no additional cost for installing extensions, although third-party offerings may be chargeable. See the third-party vendor's website for details.

Yes, by using the Amazon Lambda Runtime Extensions API. Visit the documentation to learn more.

Provisioned Concurrency keeps functions initialized and ready to respond in double-digit milliseconds. When enabled, Provisioned Concurrency will also initialize extensions and keep them ready to execute alongside function code.

Yes, Amazon Lambda supports the Advanced Vector Extensions 2 (AVX2) instruction set. To learn more about how to compile your application code to target this instruction set for improved performance, visit the Amazon Lambda developer documentation.

Because extensions execute within the same environment as a Lambda function, they have access to the same resources as the function, and permissions are shared between the function and the extension; they therefore share credentials, role, and environment variables. Extensions have read-only access to function code, and can read and write in /tmp.

Intended Usage and Restrictions

Your use of this service is subject to the Amazon Web Services Customer Agreement.