Streamlining Local Development of Python Lambda Functions


Software Development
May 23, 2024

This blog originally appeared on sachasmart.com. Sacha is a Software Engineering team lead at Weavik and excels in managing product development and client interactions.

Overview

AWS Lambda is a serverless, event-driven compute service commonly used in microservice architectures, offering developers greater flexibility in application development. For instance, using AWS Lambda, a Node.js developer can integrate image processing into their application by leveraging a Python Lambda function and receive the results seamlessly within the platform.

Unlike Amazon Elastic Compute Cloud (EC2), which is priced by the hour but metered by the second, AWS Lambda is billed by rounding up to the nearest millisecond with no minimum execution time. This means developers only pay for the processing they use, and there are no charges outside the period of invocation.
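To make the billing model concrete, here is a small sketch of the per-invocation arithmetic. The default prices below are illustrative placeholders I have assumed, not authoritative figures; check the AWS pricing page for your region.

```python
import math

def estimate_invocation_cost(duration_ms: float, memory_mb: int,
                             price_per_gb_second: float = 0.0000166667,
                             price_per_request: float = 0.0000002) -> float:
    """Estimate the cost of one invocation. The default prices are
    placeholder figures for illustration only."""
    billed_ms = math.ceil(duration_ms)  # rounded up to the nearest millisecond
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * price_per_gb_second + price_per_request

# A 128 MB function that runs for 3.2 ms is billed for 4 ms.
cost = estimate_invocation_cost(3.2, 128)
```

Note the rounding up: there is no minimum execution time, but a 3.2 ms run is billed as 4 ms.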

AWS Lambda scales with load, rapidly creating more functions as demand increases and scaling down to zero when demand decreases. This flexibility allows developers to handle varying workloads efficiently and cost-effectively, without the need to provision or manage servers.


Containers with Lambda

During Andy Jassy's AWS re:Invent 2020 keynote, it was announced that AWS Lambda would support containers. Developers can now package and deploy Lambda functions as container images using a process similar to the following:

  1. Create a Dockerfile using an AWS-provided base image.
  2. Define the container environment, including installing dependencies and copying the code to be executed.
  3. Build and push the container image to a container registry (such as AWS Elastic Container Registry).
  4. Use the registry image as the source for the Lambda function.

An additional benefit of using containers is the increase in ephemeral storage, which raises the limit from 512 MB to 10 GB.


Understanding the Lambda Environment and Lifecycle

AWS Lambda offers an HTTP API that allows custom runtimes to receive invocation events from Lambda and send response data back while operating within the Lambda execution environment.
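That request/response cycle can be sketched as a poll-and-post loop. The endpoint paths below come from the Runtime API; the `http_get` and `http_post` callables are placeholders I have injected so the sketch stays self-contained (a real runtime would use an HTTP client and read the request id from the `Lambda-Runtime-Aws-Request-Id` response header):

```python
def next_invocation_url(api_addr: str) -> str:
    # Long-polled by the runtime to receive the next invocation event.
    return f"http://{api_addr}/2018-06-01/runtime/invocation/next"

def invocation_response_url(api_addr: str, request_id: str) -> str:
    # Where the runtime posts the handler's result for a given request.
    return f"http://{api_addr}/2018-06-01/runtime/invocation/{request_id}/response"

def run_once(api_addr, handler, http_get, http_post):
    """One turn of the runtime loop: fetch an event, run the handler,
    post the result back. http_get is assumed to return (request_id, event)."""
    request_id, event = http_get(next_invocation_url(api_addr))
    result = handler(event, None)
    http_post(invocation_response_url(api_addr, request_id), result)
    return result
```

In the real environment this loop runs forever, blocking on the `next` endpoint until an event arrives.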

  • bootstrap (/var/runtime/): a script that runs at startup and mediates between the Lambda service and the function code.
  • [anything] (/var/task/): the function code to be executed.

The Runtime API is the orchestration layer that invokes the Lambda function's code, passing in the context and receiving the response of the invocation request. A custom runtime can be defined by updating the bootstrap executable. In a traditional Python Lambda function, this bootstrap executable looks like this:

#!/bin/bash
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.

export AWS_EXECUTION_ENV=AWS_Lambda_python3.8

if [ -z "$AWS_LAMBDA_EXEC_WRAPPER" ]; then
  exec /var/lang/bin/python3.8 /var/runtime/bootstrap.py
else
  wrapper="$AWS_LAMBDA_EXEC_WRAPPER"
  if [ ! -f "$wrapper" ]; then
    echo "$wrapper: does not exist"
    exit 127
  fi
  if [ ! -x "$wrapper" ]; then
    echo "$wrapper: is not an executable"
    exit 126
  fi
  exec -- "$wrapper" /var/lang/bin/python3.8 /var/runtime/bootstrap.py
fi

This is effectively just executing bootstrap.py, a Python script that instantiates a LambdaRuntimeClient with the HTTP address of the Lambda Runtime API (AWS_LAMBDA_RUNTIME_API, which defaults to 127.0.0.1:9001):

lambda_runtime_api_addr = os.environ["AWS_LAMBDA_RUNTIME_API"]
lambda_runtime_client = LambdaRuntimeClient(lambda_runtime_api_addr)


Once the Lambda function is invoked, the rest is quite straightforward. The handler script is found, loaded as a module, and executed. Upon successful execution, the Runtime API receives a POST request with the invoke_id and result_data from the RAPID client (which, interestingly, is written in C++ for AWS Python 3.8 container images).
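The handler lookup step can be approximated with importlib: the configured `module.function` string is split, the module imported, and the attribute fetched. A minimal sketch of the idea (the real bootstrap.py adds error handling around this):

```python
import importlib

def load_handler(handler_spec: str):
    """Resolve a 'module.function' string such as 'lambda.handler'
    into a callable, roughly the way bootstrap.py locates user code."""
    module_name, _, function_name = handler_spec.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, function_name)
```

For example, load_handler("json.dumps") resolves to the standard library's json.dumps.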

More information about the OpenAPI specification can be found here.


Streamlining Local Development of Lambda Functions

Developing and integrating Lambda functions into existing projects can be a challenging process. One common pain point is the need to repeatedly go through the deployment steps (see above) for each minor revision of the microservice. In addition, most companies have a separate team for managing and deploying cloud infrastructure.

As a developer, I wanted the ability to run my Lambda function code locally and invoke it from an existing service. Sure, there are alternatives to this approach, like LocalStack, that could facilitate rapid prototyping. However, in this context it is overkill and, in my opinion, should be reserved for AWS services that cannot easily be dockerized (SNS, Firehose, STS, etc.). Note that S3 can easily be swapped in with MinIO; stay tuned for its write-up.

Prototyping locally allows for quick iterations and refactoring without the need for frequent deployments to AWS Lambda. Once the microservice has been tested and is performing well, I could then go through a more formal deployment process and make it available to the platform in a production or staging environment.

Some of my requirements therefore included:

  • Hot reloading of the code running on the Lambda function.
  • An endpoint that could:
    • Invoke the Lambda function.
    • Accept a payload.
  • Accurate measurement of the Lambda runtime duration to assess costs.

Tutorial

  1. Update the bootstrap.py script executed by the Lambda runtime to enable hot reloads. The modification reloads the handler after each invocation, ensuring that the latest function code is always executed, and then requests the next invocation event from the Lambda Runtime API so that the function remains responsive.
    • You can find the updated bootstrap.py script here.
    • Link to diff here.
  2. Create a Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt /opt/requirements.txt
RUN pip install -r /opt/requirements.txt -t ${LAMBDA_TASK_ROOT}/
# COPY ./bootstrap.sh /var/runtime/bootstrap # <-- optional if creating a custom runtime. 
COPY ./bootstrap.py /var/runtime/bootstrap.py
RUN chmod +x /var/runtime/bootstrap*


  3. Create a Docker Compose file:
services:
  lambda:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./lambda.py:/var/task/lambda.py # <-- mount function code here
    command: "lambda.handler"
    ports:
      - "3002:8080" # <-- expose port 3002 and map to container port 8080
    networks:
      - local_lambda_network

networks:
  local_lambda_network:
    name: local_lambda_network
    driver: bridge


  4. Create an example microservice that downloads an image of a cat and converts it to grayscale:
import base64
import datetime
import io

import requests
from PIL import Image


def handler(event, context):
    image_data = download_cat_image()
    encoded_image = base64.b64encode(image_data).decode("utf-8")

    body = {
        "message": f"Image Processed at {datetime.datetime.now()}",
        "input": event,
        "image": encoded_image,
    }
    return {
        "statusCode": 200,
        "body": body,
        "headers": {"Content-Type": "application/json"},
    }


def download_cat_image():
    try:
        response = requests.get("https://cataas.com/cat") # Underrated service
        response.raise_for_status()

        image = Image.open(io.BytesIO(response.content))
        grayscale_image = image.convert("L")

        output_buffer = io.BytesIO()
        grayscale_image.save(output_buffer, format="PNG")

        with open("/tmp/output_image.png", "wb") as f:
            f.write(output_buffer.getvalue())

        return output_buffer.getvalue()
    except requests.exceptions.HTTPError as err:
        raise Exception(f"HTTP error occurred: {err}")
    except requests.exceptions.RequestException as err:
        raise Exception(f"Request error occurred: {err}")
    except Exception as err:
        raise Exception(f"An unexpected error occurred: {err}")


  5. Run docker compose up -d (include --build when needed).
  6. cURL the endpoint with:
curl -XPOST "http://localhost:3002/2015-03-31/functions/function/invocations" -d '{"payload":"Give me a snug"}' | jq

# For convenience, tee the response and copy body.image to the clipboard:
curl -XPOST "http://localhost:3002/2015-03-31/functions/function/invocations" -d '{"payload":"Give me a snug"}' | tee /dev/tty | jq -r '.body.image' | pbcopy



  7. The response will be a base64-encoded image. For convenience, paste the body.image here to see the grayscale image.
  8. Logs from the Docker stdout can be viewed:
lambda-1  | START RequestId: edbf340b-bae2-4607-aea5-c7b399469afc Version: $LATEST
lambda-1  | 28 Apr 2024 18:44:37,442 [INFO] (rapid) INVOKE START(requestId: 9df1db0f-8074-453e-b70c-d5c992ca8ce0)
lambda-1  | 28 Apr 2024 18:44:38,602 [INFO] (rapid) INVOKE RTDONE(status: success, produced bytes: 0, duration: 1159.407000ms)
lambda-1  | END RequestId: 9df1db0f-8074-453e-b70c-d5c992ca8ce0
lambda-1  | REPORT RequestId: 9df1db0f-8074-453e-b70c-d5c992ca8ce0	Duration: 1160.77 ms	Billed Duration: 1161 ms	Memory Size: 3008 MB	Max Memory Used: 3008 MB
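That REPORT line is also what satisfies the duration-measurement requirement from earlier. A small sketch that pulls the numeric fields out of such a line (the regex assumes the key/value layout shown above):

```python
import re

def parse_report_line(line: str) -> dict:
    """Extract numeric fields (in ms / MB) from a Lambda REPORT log line."""
    pattern = r"(Billed Duration|Duration|Max Memory Used|Memory Size):\s*([\d.]+)"
    return {key: float(value) for key, value in re.findall(pattern, line)}
```

Feeding it the REPORT line above yields the duration (1160.77 ms) and billed duration (1161 ms) needed to estimate costs.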


  9. Great, requests can now be made to a local Lambda function. The next step is to have the local Node application invoke the Lambda function with the AWS SDK.
import aws from "aws-sdk";

async function invokeCat(): Promise<CatResponse> {
  
  const lambda = new aws.Lambda({
    apiVersion: "2015-03-31",
    endpoint: "http://127.0.0.1:3002", // the port exposed for the container
    sslEnabled: false,
    region: "ca-central-1", // arbitrary for local development
    accessKeyId: "any", // arbitrary
    secretAccessKey: "any", // arbitrary
  });
  
  const params = {
    FunctionName: "cat",
    InvocationType: "RequestResponse",
    Payload: JSON.stringify({
      payload: "Give me a snug",
    }),
  };
  
  return lambda.invoke(params).promise();
}


Conclusion

Developing Python Lambda functions locally offers developers a streamlined workflow, enabling rapid iteration and efficient testing. By leveraging containers and understanding the Lambda environment, developers can enhance their serverless development process. Embracing these practices not only improves productivity but also ensures smoother deployments and a more robust application architecture.


Resources:

  • GitHub Repository

