A Lambda function runs in an ephemeral environment: it spins up on demand, lasts a brief time, and is then taken down. The Lambda service creates and tears down these environments for your function; you don't have control over them.
Invocation Request
=> AWS Lambda Service
=> 1. Create an execution environment to run the function
   2. Download the code into the environment and initialize the runtime
   3. Download packages and dependencies
   4. Initialize global variables
   5. Initialize temp space
=> Lambda runs your function, starting from the handler method
When an invocation is complete, Lambda can reuse the initialized environment to fulfill the next request. If the next request comes in close behind the first, the second request skips all of the initialization steps and goes directly to running the handler method.
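This split between one-time initialization and per-invocation work can be sketched with a minimal handler (shown in Python for brevity; the same pattern applies to a .NET handler, and the names here are illustrative, not part of any real function):

```python
import time

# Code at module scope runs once, during the cold-start INIT phase.
# Expensive setup (SDK clients, configuration, connection pools)
# belongs here so that warm invocations can reuse it.
CONFIG = {"loaded_at": time.time()}  # stand-in for real initialization work
CALL_COUNT = 0

def handler(event, context):
    """Entry point: runs on every invocation, warm or cold."""
    global CALL_COUNT
    CALL_COUNT += 1
    # On a warm start, CONFIG is already populated and CALL_COUNT > 1.
    return {"invocation": CALL_COUNT, "config_loaded_at": CONFIG["loaded_at"]}
```

Invoking the handler twice in the same environment returns the same `config_loaded_at`, mirroring how a warm invocation reuses the initialized state instead of repeating the INIT phase.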
If you look at the CloudWatch logs, you will notice a big difference in duration between a cold-start and a warm-start Lambda. The following log entries are from a simple .NET 6 function that returns a list of strings.
First Invocation
INIT_START Runtime Version: dotnet:6.v13
Duration: 24293.23 ms Billed Duration: 24294 ms Memory Size: 500 MB Max Memory Used: 90 MB
Second Invocation
Duration: 9.91 ms Billed Duration: 10 ms Memory Size: 500 MB Max Memory Used: 90 MB
Here are some guidelines you can follow as a developer to mitigate cold starts:
- Provisioned concurrency - Setting this keeps the desired number of environments always warm. Requests beyond the provisioned concurrency (spillover invocations) use the on-demand pool, which has to go through the cold-start steps outlined above. This has cost implications; you may want to analyze the calling pattern and adjust provisioned concurrency accordingly to minimize cost.
- Deployment package - Minimize your deployment package to its runtime necessities. This reduces the time it takes to download and unpack the package ahead of invocation, and is particularly important for functions authored in compiled languages. Frameworks and languages that support AOT compilation and tree shaking offer an automated way to shrink the deployment package.
- AOT - In .NET, a language-specific compiler converts the source code to intermediate language (IL), which the JIT compiler then converts into machine code specific to the environment it runs on. The JIT compiler keeps memory usage low, since only the methods required at run time are compiled to machine code, and it can optimize code based on statistical analysis while the code is running. On the other hand, JIT compilation adds startup time when the application first executes. To minimize this, you can take advantage of the AOT support in .NET 7: you publish a self-contained app AOT-compiled for a specific environment, such as Linux x64 or Windows x64, which can help reduce the cold start.
- SnapStart - When you publish a function version, Lambda takes a snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access. Subsequent invocations resume from the cached snapshot instead of repeating the initialization steps.
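The commands below sketch how the guidelines above might be applied. The function name, qualifier, and concurrency value are placeholder assumptions, and SnapStart availability depends on your function's runtime (it launched with Java support), so treat these as illustrative rather than prescriptive:

```shell
# 1. Provisioned concurrency: keep 10 environments initialized for
#    version 1 of the (hypothetical) function "my-function".
aws lambda put-provisioned-concurrency-config \
    --function-name my-function \
    --qualifier 1 \
    --provisioned-concurrent-executions 10

# 2./3. Deployment package and AOT: publish a self-contained build for
#    Lambda's Linux x64 environment. PublishTrimmed removes unused
#    assemblies (.NET's form of tree shaking); PublishAot (.NET 7+)
#    compiles ahead of time so no JIT work is needed at startup.
dotnet publish -c Release -r linux-x64 --self-contained true /p:PublishTrimmed=true
dotnet publish -c Release -r linux-x64 /p:PublishAot=true

# 4. SnapStart: snapshot the initialized environment whenever a new
#    version of the function is published.
aws lambda update-function-configuration \
    --function-name my-function \
    --snap-start ApplyOn=PublishedVersions
```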