# Writing Lambda Functions
Simfra runs Lambda functions in real Docker containers using AWS-compatible runtime images. Your function code works exactly as it would on AWS - no code changes, no endpoint overrides, no test-only branches.
## Prerequisites

- Simfra running with `SIMFRA_DOCKER=true`
- Docker daemon accessible
- AWS CLI configured to point at Simfra (see Endpoint Discovery)
## How It Works

When you invoke a Lambda function, Simfra:

- Pulls the runtime image (e.g., `public.ecr.aws/lambda/python:3.12`)
- Mounts your function code into the container
- Injects `AWS_ENDPOINT_URL`, region, and credentials as environment variables
- Starts the container and forwards the invocation payload to the Lambda Runtime Interface Emulator
- Returns the function's response
The container stays warm for subsequent invocations (default: 300 seconds, configurable via SIMFRA_LAMBDA_KEEP_ALIVE).
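The warm-container behavior can be modeled roughly as follows. This is an illustrative sketch, not Simfra's actual implementation; it only assumes the keep-alive default and the `SIMFRA_LAMBDA_KEEP_ALIVE` variable described above:

```python
import os
import time


# Default mirrors the documented 300-second keep-alive.
KEEP_ALIVE = int(os.environ.get("SIMFRA_LAMBDA_KEEP_ALIVE", "300"))


class ContainerPool:
    """Illustrative model of warm-container reuse keyed by function name."""

    def __init__(self, keep_alive=KEEP_ALIVE):
        self.keep_alive = keep_alive
        self.last_used = {}  # function name -> timestamp of last invocation

    def is_warm(self, function_name, now=None):
        now = time.time() if now is None else now
        last = self.last_used.get(function_name)
        return last is not None and (now - last) <= self.keep_alive

    def invoke(self, function_name, now=None):
        """Record an invocation; return True if it was a cold start."""
        now = time.time() if now is None else now
        cold_start = not self.is_warm(function_name, now)
        self.last_used[function_name] = now
        return cold_start
```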
## Zero Code Changes
Simfra injects these environment variables into every Lambda container:
| Variable | Source |
|---|---|
| AWS_ENDPOINT_URL | Simfra gateway endpoint |
| AWS_REGION | Function's configured region |
| AWS_DEFAULT_REGION | Function's configured region |
| AWS_ACCESS_KEY_ID | Execution role credentials (via STS) |
| AWS_SECRET_ACCESS_KEY | Execution role credentials (via STS) |
| AWS_SESSION_TOKEN | Execution role session token |
| AWS_LAMBDA_FUNCTION_NAME | Function name |
| AWS_LAMBDA_FUNCTION_VERSION | Invoked version |
| AWS_LAMBDA_FUNCTION_MEMORY_SIZE | Configured memory |
| AWS_LAMBDA_LOG_GROUP_NAME | /aws/lambda/<function-name> |
| AWS_LAMBDA_LOG_STREAM_NAME | Date-based log stream |
Since current-generation AWS SDKs read AWS_ENDPOINT_URL, any SDK call your function makes - S3, DynamoDB, SQS, SNS, STS - routes to Simfra automatically.
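The effect can be sketched as env-driven endpoint resolution. This is a simplification, not the SDKs' actual resolution chain (which also consults config files and per-service variables); it only assumes the `AWS_ENDPOINT_URL` injection described above:

```python
import os


def resolve_endpoint(explicit=None, env=os.environ):
    """Illustrative endpoint resolution: an explicit endpoint wins,
    then AWS_ENDPOINT_URL, then None (the SDK's default AWS endpoints)."""
    if explicit:
        return explicit
    # Inside a Simfra-run container this is always set, so a plain
    # boto3.client("s3") with no endpoint argument targets Simfra.
    return env.get("AWS_ENDPOINT_URL")
```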
## Supported Runtimes
Simfra supports all Lambda runtimes via their official container images:
| Runtime | Image |
|---|---|
| Python 3.9–3.13 | public.ecr.aws/lambda/python:<version> |
| Node.js 18–22 | public.ecr.aws/lambda/nodejs:<version> |
| Go (provided.al2023) | public.ecr.aws/lambda/provided:al2023 |
| Java 11, 17, 21 | public.ecr.aws/lambda/java:<version> |
| .NET 6, 8 | public.ecr.aws/lambda/dotnet:<version> |
| Ruby 3.2, 3.3 | public.ecr.aws/lambda/ruby:<version> |
| Custom (provided.al2023) | public.ecr.aws/lambda/provided:al2023 |
Override the registry with SIMFRA_LAMBDA_IMAGE_REGISTRY for air-gapped environments.
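The runtime-to-image mapping above, including the registry override, can be sketched like this. It is an illustrative partial implementation (Python and custom runtimes only), assuming the naming pattern shown in the table:

```python
import os

DEFAULT_REGISTRY = "public.ecr.aws/lambda"


def runtime_image(runtime, env=os.environ):
    """Map a Lambda runtime identifier to a container image URI,
    honoring SIMFRA_LAMBDA_IMAGE_REGISTRY for air-gapped registries."""
    registry = env.get("SIMFRA_LAMBDA_IMAGE_REGISTRY", DEFAULT_REGISTRY)
    if runtime.startswith("python"):
        # e.g. python3.12 -> <registry>/python:3.12
        return f"{registry}/python:{runtime.removeprefix('python')}"
    if runtime.startswith("provided."):
        # e.g. provided.al2023 -> <registry>/provided:al2023
        return f"{registry}/provided:{runtime.removeprefix('provided.')}"
    raise ValueError(f"unsupported runtime: {runtime}")
```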
## Creating a Function

### Zip deployment
```bash
# Create a simple Python function
cat > lambda_function.py << 'PYEOF'
import json

import boto3


def handler(event, context):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table(event['table_name'])
    table.put_item(Item={
        'id': event['id'],
        'message': event['message']
    })

    s3 = boto3.client('s3')
    s3.put_object(
        Bucket=event['bucket'],
        Key=f"records/{event['id']}.json",
        Body=json.dumps(event)
    )

    return {'statusCode': 200, 'body': json.dumps({'id': event['id']})}
PYEOF

zip function.zip lambda_function.py

# Create the execution role
aws iam create-role \
  --role-name lambda-exec \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach permissions
aws iam attach-role-policy \
  --role-name lambda-exec \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
aws iam attach-role-policy \
  --role-name lambda-exec \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Create the function
aws lambda create-function \
  --function-name my-processor \
  --runtime python3.12 \
  --handler lambda_function.handler \
  --role arn:aws:iam::000000000000:role/lambda-exec \
  --zip-file fileb://function.zip
```
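The trust-policy requirement above (and the fallback behavior described under Credential Lifecycle) amounts to a check like the following. This is a simplified sketch, not real IAM policy evaluation, which also handles conditions, NotPrincipal, and wildcards:

```python
import json


def allows_lambda_assume(trust_policy_json):
    """Return True if the role trust policy lets lambda.amazonaws.com
    call sts:AssumeRole (simplified illustrative check)."""
    doc = json.loads(trust_policy_json)
    for stmt in doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        services = stmt.get("Principal", {}).get("Service", [])
        if isinstance(services, str):
            services = [services]
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "lambda.amazonaws.com" in services and "sts:AssumeRole" in actions:
            return True
    return False
```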
### Container image deployment

Build a container image and push it to Simfra's ECR, then reference it in CreateFunction:

```bash
# Create ECR repository
aws ecr create-repository --repository-name my-function

# Build and push
docker build -t my-function .
docker tag my-function localhost:4599/000000000000/my-function:latest
docker push localhost:4599/000000000000/my-function:latest

# Create function from image
aws lambda create-function \
  --function-name my-function \
  --package-type Image \
  --code ImageUri=000000000000.dkr.ecr.us-east-1.amazonaws.com/my-function:latest \
  --role arn:aws:iam::000000000000:role/lambda-exec
```
## Invoking a Function

```bash
# --cli-binary-format is required on AWS CLI v2 to pass raw JSON payloads
aws lambda invoke \
  --function-name my-processor \
  --cli-binary-format raw-in-base64-out \
  --payload '{"table_name":"orders","bucket":"data","id":"123","message":"hello"}' \
  output.json

cat output.json
```
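The sample handler above returns `{'statusCode': 200, 'body': <JSON string>}`, so `output.json` can be post-processed like this (this assumes that handler's response shape, not a general Lambda contract):

```python
import json


def parse_invoke_output(raw):
    """Split the sample handler's response into status code and
    decoded body (the body itself is a JSON-encoded string)."""
    response = json.loads(raw)
    return response["statusCode"], json.loads(response["body"])
```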
## Layers

Lambda layers work normally. Create a layer with PublishLayerVersion, then reference it in your function configuration:

```bash
# Create a layer
zip -r layer.zip python/

aws lambda publish-layer-version \
  --layer-name my-utils \
  --compatible-runtimes python3.12 \
  --zip-file fileb://layer.zip

# Attach to function
aws lambda update-function-configuration \
  --function-name my-processor \
  --layers arn:aws:lambda:us-east-1:000000000000:layer:my-utils:1
```

Layers are extracted and mounted into the container at the standard paths under /opt/.
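The standard paths can be sketched as follows, following AWS's documented layer path conventions (`/opt/bin`, `/opt/lib`, plus runtime-specific library directories); this is illustrative, not Simfra's mount logic:

```python
import os


def layer_search_paths(runtime, opt_dir="/opt"):
    """Illustrative list of the /opt locations a runtime consults
    for extracted layer content."""
    paths = [os.path.join(opt_dir, "bin"), os.path.join(opt_dir, "lib")]
    if runtime.startswith("python"):
        paths.append(os.path.join(opt_dir, "python"))  # added to PYTHONPATH
    if runtime.startswith("nodejs"):
        paths.append(os.path.join(opt_dir, "nodejs", "node_modules"))  # on NODE_PATH
    return paths
```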
## Event Source Mappings

Simfra supports event source mappings that poll from:

- SQS queues - messages are received and forwarded to your function as batches
- Kinesis streams - records are read from shards and delivered to your function
- DynamoDB Streams - stream records trigger your function on table changes

```bash
# Create an SQS event source mapping
aws lambda create-event-source-mapping \
  --function-name my-processor \
  --event-source-arn arn:aws:sqs:us-east-1:000000000000:incoming-queue \
  --batch-size 10
```
The Lambda service polls the source using the function's execution role. IAM policies on the role must allow the relevant read actions (e.g., sqs:ReceiveMessage, sqs:DeleteMessage).
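The batching behavior can be modeled in a few lines. This is an illustrative sketch of how a poller groups pending messages by batch size, not Simfra's actual polling loop:

```python
def drain_in_batches(messages, batch_size=10):
    """Group pending messages into batches of at most batch_size,
    as the poller would forward them to the function."""
    return [messages[i:i + batch_size]
            for i in range(0, len(messages), batch_size)]
```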
## Credential Lifecycle
Lambda containers receive temporary credentials from the function's execution role via STS AssumeRole. These credentials have a limited lifetime. When credentials are within 5 minutes of expiration, Simfra recycles the container - stopping the old one and creating a new one with fresh credentials on the next invocation.
This matches the real AWS Lambda behavior where the execution environment is refreshed before credentials expire.
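The recycling decision reduces to a time comparison; a minimal model of the 5-minute margin described above (illustrative, not Simfra's code):

```python
from datetime import datetime, timedelta, timezone

RECYCLE_MARGIN = timedelta(minutes=5)


def should_recycle(credential_expiration, now=None):
    """Return True once credentials are within 5 minutes of expiring,
    so the next invocation gets a fresh container."""
    if now is None:
        now = datetime.now(timezone.utc)
    return credential_expiration - now <= RECYCLE_MARGIN
```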
If the execution role does not exist or its trust policy does not allow lambda.amazonaws.com, Simfra falls back to root credentials and logs a warning.
## Environment Variables

User-defined environment variables set on the function configuration are merged into the container environment. They are applied last, so they can override any Simfra-injected variable except the reserved Lambda runtime variables:

```bash
aws lambda update-function-configuration \
  --function-name my-processor \
  --environment 'Variables={TABLE_NAME=orders,BUCKET=data}'
```
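The merge order can be sketched as follows. The reserved set here is a representative subset chosen for illustration (the full reserved list on AWS is larger), and the function is a model, not Simfra's implementation:

```python
# Representative subset of reserved runtime variables that user
# configuration cannot override (assumption for illustration).
RESERVED = {
    "AWS_LAMBDA_FUNCTION_NAME",
    "AWS_LAMBDA_FUNCTION_VERSION",
    "AWS_LAMBDA_FUNCTION_MEMORY_SIZE",
}


def merged_environment(injected, user_defined):
    """User variables are applied last and so win over injected ones,
    except for the reserved runtime variables."""
    env = dict(injected)
    for key, value in user_defined.items():
        if key in RESERVED:
            continue  # reserved variables keep their injected values
        env[key] = value
    return env
```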
## Next Steps
- Lambda Execution - how Simfra runs Lambda containers, cold starts, and concurrency
- Endpoint Discovery - how Lambda functions discover the Simfra endpoint
- SDK Configuration - per-language AWS SDK setup