Doesn't work with Lambda in Docker locally #309

Open

j-adamczyk opened this issue Nov 3, 2023 · 2 comments

@j-adamczyk
I can't manage to get Mangum to work with AWS Lambda in Docker. Everything works if I just run it as a Python process, but not in a Docker container.

Error:

{"errorMessage": "The adapter was unable to infer a handler to use for the event. This is likely related to how the Lambda function was invoked. (Are you testing locally? Make sure the request payload is valid for a supported handler.)", "errorType": "RuntimeError", "requestId": "8a98acd4-aab9-47bd-a21c-180f2c3fb483", "stackTrace": ["  File \"/var/lang/lib/python3.10/site-packages/mangum/adapter.py\", line 76, in __call__\n    handler = self.infer(event, context)\n", "  File \"/var/lang/lib/python3.10/site-packages/mangum/adapter.py\", line 68, in infer\n    raise RuntimeError(  # pragma: no cover\n"]}

I'm also getting the same error when I deploy on AWS Lambda.

My server:

import joblib
import uvicorn
from api_models import Properties
from fastapi import FastAPI
from mangum import Mangum
from starlette.middleware.cors import CORSMiddleware


app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

model = joblib.load(filename="./saved_models/current_model.pkl")


@app.get("/")
def read_root():
    return {"Hello": "World"}


@app.post("/make_prediction")
def make_prediction(properties: Properties):
    # some calculations
    return result


handler = Mangum(app)


if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)

Dockerfile:

FROM public.ecr.aws/lambda/python:3.10

RUN yum install libgomp git -y && \
    yum clean all -y && \
    rm -rf /var/cache/yum

COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip install -r requirements.txt

COPY serving/backend ${LAMBDA_TASK_ROOT}
COPY saved_models ${LAMBDA_TASK_ROOT}/saved_models

CMD ["server.handler"]

I run it with docker run -p 9000:8080 prediction_lambda. I want to deploy this as a Lambda Function URL, which uses API Gateway 2.0 style requests. My request:

import requests

r = requests.get(
    url="http://localhost:9000/2015-03-31/functions/function/invocations",
    json={
        "version": "2.0",
        "routeKey": "GET /",
        "rawPath": "/",
        "pathParameters": {}
    }
)
print(r.text)

So I'm trying to reach at least the / endpoint with GET to debug this, but eventually, in production, I want to use the POST method.

How can I fix this?

@DolevAlgam

I get the same error.

@bencwallace

You're missing the request context. I got this to work:

import requests

json = {
  "version": "2.0",
  "routeKey": "GET /",
  "rawPath": "/",
  "pathParameters": {},
  "requestContext": {
    "http": {
      "sourceIp": "192.0.0.1",
      "path": "/",
      "method": "GET"
    }
  }
}
r = requests.get(url="http://localhost:9000/2015-03-31/functions/function/invocations", json=json)
print(r.text)
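
For the /make_prediction POST route mentioned above, a similar payload should work. This is just a sketch: the field names in the body are placeholders, since I don't know what the Properties model actually expects, and note that API Gateway 2.0 events carry the body as a JSON string.

import json
import requests

# Sketch of a POST invocation for /make_prediction (the body field names are
# placeholders for whatever the Properties model expects).
event = {
    "version": "2.0",
    "routeKey": "POST /make_prediction",
    "rawPath": "/make_prediction",
    "headers": {"content-type": "application/json"},
    "requestContext": {
        "http": {
            "sourceIp": "192.0.0.1",
            "path": "/make_prediction",
            "method": "POST"
        }
    },
    # API Gateway 2.0 events pass the body as a string, not a nested object.
    "body": json.dumps({"example_field": 123}),
    "isBase64Encoded": False
}

r = requests.post(
    url="http://localhost:9000/2015-03-31/functions/function/invocations",
    json=event
)
print(r.text)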
