How to deploy a uv project to AWS Lambda
AWS Lambda packages a Python function as either a zip archive (up to 250 MB unzipped) or a container image (up to 10 GB). Both formats need every dependency the function imports, compiled for Lambda’s Linux runtime, with no virtual environment and no pip install step at runtime. uv is fast at producing exactly that shape, and its lockfile makes the resulting deployment reproducible.
Pick a packaging format
| Format | Pick when |
|---|---|
| Container image | Production services, dependencies near or above the 250 MB zip limit, native libraries that don’t ship manylinux wheels, image-based local testing. |
| Zip archive | Small functions, simple dependencies, fastest cold starts on Python managed runtimes. |
| Lambda layer + zip | Dependencies change rarely, function code changes often. |
The zip and layer paths use the same `uv export` + `uv pip install --target` recipe; only the zip layout differs. The container path uses Lambda’s official Python base image and `uv pip install` at build time.
Build a container image with uv
The container image path is the upstream-recommended approach for production. The astral-sh/uv-aws-lambda-example repo is the canonical reference; this section walks through what each piece does.
Project layout:

```
my-function/
├── app/
│   ├── __init__.py
│   └── main.py
├── Dockerfile
├── pyproject.toml
└── uv.lock
```

Define your handler in `app/main.py`:

```python
def handler(event, context):
    return {"statusCode": 200, "body": "hello from uv"}
```

Add a Dockerfile that builds dependencies in one stage and copies them into Lambda’s task root in a second:
```dockerfile
FROM ghcr.io/astral-sh/uv:0.11.8 AS uv

FROM public.ecr.aws/lambda/python:3.13 AS builder

ENV UV_COMPILE_BYTECODE=1
ENV UV_NO_INSTALLER_METADATA=1
ENV UV_LINK_MODE=copy

RUN --mount=from=uv,source=/uv,target=/bin/uv \
    --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv export --frozen --no-emit-workspace --no-dev --no-editable -o requirements.txt && \
    uv pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

FROM public.ecr.aws/lambda/python:3.13

COPY --from=builder ${LAMBDA_TASK_ROOT} ${LAMBDA_TASK_ROOT}
COPY ./app ${LAMBDA_TASK_ROOT}/app

CMD ["app.main.handler"]
```

A few details that matter:
- `public.ecr.aws/lambda/python:3.13` is the Lambda runtime image. Switch to `:3.13-arm64` to target Graviton.
- The `uv export` + `uv pip install --target "${LAMBDA_TASK_ROOT}"` pair flattens dependencies into Lambda’s working directory. Lambda imports modules directly from there; no virtual environment, no `.venv` activation.
- `--no-emit-workspace` excludes local workspace members from `requirements.txt`. Bundle the workspace separately if you need it (see the upstream example for a workspace recipe).
- `--no-dev --no-editable` strips dev-only dependencies and editable installs, both of which would bloat the image and break inside Lambda.
- `UV_NO_INSTALLER_METADATA=1` skips installer metadata so the layer is byte-for-byte deterministic on rebuilds.
- The bind mounts let uv read `uv.lock` and `pyproject.toml` without copying them into a layer, which means a change to your application code does not invalidate the dependency-install layer.
Build, push to ECR, and create the function:

```shell
uv lock
docker build -t my-function .
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin <account>.dkr.ecr.us-east-1.amazonaws.com
docker tag my-function:latest <account>.dkr.ecr.us-east-1.amazonaws.com/my-function:latest
docker push <account>.dkr.ecr.us-east-1.amazonaws.com/my-function:latest
aws lambda create-function \
  --function-name my-function \
  --package-type Image \
  --code ImageUri=<account>.dkr.ecr.us-east-1.amazonaws.com/my-function:latest \
  --role arn:aws:iam::<account>:role/lambda-execution-role
```

For Docker fundamentals beyond Lambda, see How to use uv in a Dockerfile.
Build a zip archive with uv
Zip is the right choice when dependencies fit under 250 MB unzipped and the function uses one of Lambda’s managed Python runtimes. This avoids ECR entirely and gives the fastest cold start the platform offers.
Export, install for Lambda’s platform, and zip:
```shell
uv export --frozen --no-dev --no-editable -o requirements.txt
uv pip install \
  --no-installer-metadata \
  --no-compile-bytecode \
  --python-platform x86_64-manylinux2014 \
  --python 3.13 \
  --target packages \
  -r requirements.txt
cd packages && zip -r ../package.zip . && cd ..
zip -r package.zip app
```

For Graviton (ARM64), swap `x86_64-manylinux2014` for `aarch64-manylinux2014` and pair it with an arm64 Lambda architecture.
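Before uploading, it is worth checking the archive against Lambda’s 250 MB unzipped limit, since the zip’s compressed size on disk can be misleading. A minimal sketch, assuming the `package.zip` produced by the recipe above:

```python
import os
import zipfile

LIMIT = 250 * 1024 * 1024  # Lambda's unzipped size limit for zip deployments

def unzipped_size(path="package.zip"):
    """Sum the uncompressed sizes of every entry in the archive."""
    with zipfile.ZipFile(path) as zf:
        return sum(info.file_size for info in zf.infolist())

if os.path.exists("package.zip"):
    size = unzipped_size()
    print(f"{size / 2**20:.1f} MiB unzipped; limit is {LIMIT / 2**20:.0f} MiB")
```

If the total is creeping toward the limit, that is the signal to move to the container path rather than fight it with exclusion hacks.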
Deploy the resulting archive:
```shell
aws lambda create-function \
  --function-name my-function \
  --runtime python3.13 \
  --zip-file fileb://package.zip \
  --handler app.main.handler \
  --role arn:aws:iam::<account>:role/lambda-execution-role
```

`--no-compile-bytecode` is the opposite default from the container path. Pre-compiling bytecode at install time bakes timestamps into `.pyc` files, which kills reproducibility for the zip without saving meaningful cold-start time on Lambda’s managed runtime. The container path keeps `UV_COMPILE_BYTECODE=1` because the runtime image preserves the `.pyc` files for every invocation in the same container; the managed runtime does not.
Split dependencies into a Lambda layer
If your dependencies stabilize but your function code changes daily, a layer cuts upload size on every deploy. Build the layer once, attach it to the function, and ship only the application zip on subsequent deploys.
Build the layer:
```shell
uv export --frozen --no-dev --no-editable -o requirements.txt
uv pip install \
  --no-installer-metadata \
  --no-compile-bytecode \
  --python-platform x86_64-manylinux2014 \
  --python 3.13 \
  --prefix packages \
  -r requirements.txt
mkdir python && cp -r packages/lib python/
zip -r layer.zip python
```

Lambda layers expect a directory called `python/` at the archive root, which is why this recipe copies `packages/lib/` into `python/lib/` before zipping. The `--prefix` flag (instead of `--target`) gives uv the layout that maps cleanly into that structure.
Publish and attach:
```shell
aws lambda publish-layer-version \
  --layer-name my-deps \
  --zip-file fileb://layer.zip \
  --compatible-runtimes python3.13 \
  --compatible-architectures x86_64
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:us-east-1:<account>:layer:my-deps:1
```

The application zip now needs to contain only `app/` (no dependencies). Re-run only the layer build when `uv.lock` changes.
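A mis-built layer fails silently: Lambda extracts the archive to `/opt` and only puts `/opt/python` on `sys.path`, so entries outside `python/` are simply invisible. A quick pre-publish check, assuming the `layer.zip` name from the recipe above:

```python
import zipfile

def stray_layer_entries(path="layer.zip"):
    """Return archive entries Lambda will ignore (anything not under python/)."""
    with zipfile.ZipFile(path) as zf:
        return [name for name in zf.namelist() if not name.startswith("python/")]
```

An empty list means the layout is correct and the dependencies will be importable from `/opt/python` inside the function.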
Handle native dependencies on macOS and Windows
`uv pip install --target` runs on the developer’s machine, but Lambda runs Linux x86_64 (or ARM64). Without an explicit `--python-platform`, uv resolves wheels for the local platform: `pydantic-core`, `psycopg2-binary`, `cryptography`, `numpy`, and other packages with C or Rust extensions will install macOS or Windows wheels that crash on Lambda with `Runtime.ImportModuleError` or `invalid ELF header`.
Two flags fix this:
- `--python-platform x86_64-manylinux2014` (or `aarch64-manylinux2014`) tells uv which target platform to resolve wheels for. `manylinux2014` is the widest-compatible Linux wheel tag and matches Lambda’s Amazon Linux 2 base.
- `--python 3.13` pins the target Python version independently of the developer’s local Python.
The container path side-steps this entirely because the install runs inside the Lambda base image, where the local platform already matches Lambda. That is one reason container images become the default once a project pulls in native dependencies.
If a dependency does not ship a Linux wheel at all, neither path works without compiling from source. For those cases, build inside the Lambda base image (the container path) so the compiler toolchain matches.
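A forgotten `--python-platform` usually surfaces only at invoke time. One way to catch it before zipping is to scan the install target for extension modules that are not Linux ELF binaries. A rough heuristic sketch, not an exhaustive check, assuming the `packages/` directory from the zip recipe:

```python
from pathlib import Path

def suspect_binaries(target="packages"):
    """Flag native extension modules that are clearly not Linux ELF objects."""
    flagged = []
    for p in Path(target).rglob("*"):
        if p.suffix == ".pyd":  # Windows extension module, never valid on Lambda
            flagged.append(p)
        elif p.suffix in {".so", ".dylib"}:
            with open(p, "rb") as f:
                if f.read(4) != b"\x7fELF":  # every Linux shared object starts with the ELF magic
                    flagged.append(p)
    return flagged
```

macOS wheels ship Mach-O `.so`/`.dylib` files, which fail the ELF-magic check, so anything this returns would have crashed the function on import.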
Cut cold-start time
Cold starts on Python Lambdas spend most of their init time on imports, not on installing dependencies (pip never runs at runtime). Trim what gets imported and pre-compile what does:
- Pre-compile bytecode in container images. `UV_COMPILE_BYTECODE=1` writes `.pyc` files at image-build time. The Lambda container runtime keeps them, unlike the managed runtime. Cold starts skip the first-import bytecode compile, which can shave hundreds of milliseconds for FastAPI apps with many routes or anything pulling in numpy or pandas.
- Drop dev dependencies with `--no-dev`. Pytest, Ruff, IPython, and other tools have no business shipping to production.
- Strip editable installs with `--no-editable`. The project becomes a regular install rather than a `.pth` file pointing at source. Editable installs do not work inside Lambda’s frozen filesystem and bloat the deployment with `*.dist-info/RECORD` references.
- Use `--no-installer-metadata` in the container path for deterministic builds. It removes timestamps from installer metadata, which means a clean rebuild produces a byte-identical install layer and improves Docker layer caching.
For functions that import 100+ MB of dependencies, the container path with bytecode pre-compilation typically beats the zip path on cold-start time. Measure with the Lambda console’s Init duration metric on a cold invocation rather than guessing.
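Import cost can also be profiled locally before it shows up as Init duration; Python’s built-in `python -X importtime` flag prints a per-module timing tree, which is a reasonable proxy for what a cold start will pay. A small sketch that times one import programmatically (the module name is a placeholder; evicting from `sys.modules` only approximates a truly cold import, since submodules stay cached):

```python
import importlib
import sys
import time

def time_import(module_name):
    """Rough cold-import timing for module_name in this process."""
    sys.modules.pop(module_name, None)  # force a re-import rather than a cache hit
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start

print(f"json: {time_import('json') * 1000:.2f} ms")
```

For the real numbers, `python -X importtime -c "import app.main"` against the packaged dependencies is closer to what Lambda will actually measure.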
Reach for AWS SAM if you already use it
AWS SAM CLI added a `BuildMethod: python-uv` for `sam build` (still in preview as of May 2026). It runs the same `uv export` + `uv pip install --target` recipe under the hood, with SAM’s template metadata as the configuration surface. Useful if a project is already on SAM; not a reason to migrate to SAM if it is not. The Astral docs section on Using uv with AWS Lambda covers SAM and AWS CDK integration patterns.
Learn more
- astral-sh/uv-aws-lambda-example is the canonical container-image reference.
- Using uv with AWS Lambda covers all three formats plus SAM and CDK.
- Building Python Lambda functions with uv in AWS SAM documents the `python-uv` SAM build method.
- Deploy Python Lambda functions with container images covers Lambda container image semantics independent of uv.
- How to use uv in a Dockerfile covers Docker patterns that apply outside Lambda.
- What is a lock file? explains why `--frozen` in these recipes matters for reproducibility.