
httpx.ConnectError with --openai_server=True --ssl-verify=False #1589

Open
wypiki opened this issue Apr 29, 2024 · 9 comments

wypiki commented Apr 29, 2024

The Gradio UI works flawlessly, but when I try to connect to the OpenAI endpoint via HTTP, the server logs
httpx.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)
which is strange, because it shouldn't be handled as HTTPS in the first place. When I try to reach it via HTTPS instead, I get
Invalid HTTP request received.

Contacting an OpenAI-compatible tabbyAPI server directly on the same host (the one powering h2oGPT) just works, but I need RAG.

Did I misconfigure something?

Here are the relevant parameters:

export GRADIO_SERVER_PORT=1000
python generate.py --do_sample=True --temperature=0.1 --top_p=0.3 --base_model=turboderp/Mixtral-8x22B-Instruct-v0.1-exl2 --revision=6.0bpw --inference_server=vllm:127.0.0.1:5000 --max_seq_len=65000 --max_new_tokens=16384 --hf_embedding_model=intfloat/multilingual-e5-large --openai_server=True --openai_port=1001 --allow_upload_to_user_data=False --allow_upload_to_my_data=False --enable_text_upload=False --enable_sources_list=False --allow_api=True --ssl_verify=False --ssl_certfile="cert.crt" --ssl_keyfile="cert.key" --ssl_keyfile_password="password"

(...)

Here are the relevant parts of the log:

OpenAI API URL: http://0.0.0.0:1001
INFO:__name__:OpenAI API URL: http://0.0.0.0:1001
OpenAI API key: EMPTY
INFO:__name__:OpenAI API key: EMPTY
OpenAI user: {}
Getting gradio client at https://localhost:1000
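
For reference, a minimal client-side check (a sketch; it assumes the proxy exposes the standard OpenAI-compatible /v1/models route on port 1001 and that the API key is EMPTY, as in the log) to confirm whether TLS is involved at all:

import httpx

# Hypothetical host/port matching --openai_port=1001; adjust as needed.
# verify=False only matters for https:// URLs; it is ignored for plain HTTP.
resp = httpx.get("http://127.0.0.1:1001/v1/models",
                 headers={"Authorization": "Bearer EMPTY"},
                 verify=False)
print(resp.status_code, resp.text)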

wypiki commented Apr 30, 2024

Edit: I left out some relevant parameters and had formatting errors; corrected now.


pseudotensor commented May 1, 2024

Hi,

  1. I suspect the SSL arguments you pass have no effect here. Can you remove them and check whether you still get the same issue? I don't pass them through, so they shouldn't change anything.

i.e. these:

--ssl_verify=False
--ssl_certfile="cert.crt"
--ssl_keyfile="cert.key"
--ssl_keyfile_password="password"

shouldn't matter for the OpenAI proxy.

  2. As for why uvicorn is asking for an SSL certificate and failing, I'm unsure.

You can try editing this line and removing the SSL arguments, just to be sure:

uvicorn.run(app, host=host, port=port, ssl_certfile=ssl_certfile, ssl_keyfile=ssl_keyfile)
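
i.e. a sketch of that edit: drop the TLS keyword arguments so the proxy serves plain HTTP (app, host, and port are the variables already defined around that call):

# Original call, passing TLS options:
#   uvicorn.run(app, host=host, port=port, ssl_certfile=ssl_certfile, ssl_keyfile=ssl_keyfile)
# Edited call, plain HTTP only:
uvicorn.run(app, host=host, port=port)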

  3. In my case the same messages appear, but then I get a permission denied (see the bind-check sketch after this list):
OpenAI API URL: http://0.0.0.0:1001
INFO:__name__:OpenAI API URL: http://0.0.0.0:1001
OpenAI API key: EMPTY
INFO:__name__:OpenAI API key: EMPTY
[Errno 13] error while attempting to bind on address ('0.0.0.0', 1001): permission denied

Maybe you have something else on that port that uses TLS? Or maybe the port isn't allowed to be used?

  4. If I change the port to 5001, then I don't get permission denied.
OpenAI API URL: http://0.0.0.0:5001
INFO:__name__:OpenAI API URL: http://0.0.0.0:5001
OpenAI API key: EMPTY
INFO:__name__:OpenAI API key: EMPTY
Invalid HTTP request received.
INFO:     127.0.0.1:46322 - "GET / HTTP/1.1" 405 Method Not Allowed
INFO:     127.0.0.1:46322 - "GET /favicon.ico HTTP/1.1" 404 Not Found
  5. Perhaps 1000 and 1001 are too close.
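
On the permission denied in item 3: on Linux, ports below 1024 (such as 1000 and 1001) normally require root or CAP_NET_BIND_SERVICE to bind. A quick standalone check, independent of h2oGPT (hypothetical port number):

import socket

# Try to bind the low port as the current user; ports below 1024 need
# elevated privileges on Linux, which would surface here as Errno 13.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 1001))
    print("bind ok")
except OSError as e:
    print("bind failed:", e)
finally:
    s.close()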


wypiki commented May 1, 2024

Hi! Thanks for your detailed answer.

  1. I need the SSL options for the main UI. Because of limited VRAM I'd rather not run more h2oGPT instances than necessary... Is it possible to serve the main UI via HTTPS and the OpenAI API via HTTP?
  2. You mean I could just remove those two parameters, like:
    uvicorn.run(app, host=host, port=port)
    ?
  3. That port is allowed. But I also tried 5001 (with another service on port 5000).
  4. I'll try another port, too.


wypiki commented May 2, 2024

I changed
uvicorn.run(app, host=host, port=port, ssl_certfile=ssl_certfile, ssl_keyfile=ssl_keyfile)
to
uvicorn.run(app, host=host, port=port)

and the port from 1001 to 10001, but I still get the same error...

@pseudotensor

Maybe there is some environment variable that is affecting uvicorn and causing it to trigger SSL. Not sure.
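
A sketch of how to check that: print the proxy- and SSL-related environment variables that httpx (which the Gradio client uses) commonly picks up; the variable list here is a guess, not exhaustive:

import os

# Proxy settings and CA-bundle overrides that can silently change how
# HTTPS connections are made or verified.
for var in ("HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY", "NO_PROXY",
            "SSL_CERT_FILE", "SSL_CERT_DIR"):
    print(var, "=", os.environ.get(var))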

@pseudotensor

Also, on macOS I've seen issues with SSL:

#530
#1530
#1531

It needs to be set up right. Are you on macOS or behind a proxy?


wypiki commented May 6, 2024

It's on Ubuntu 22.04, and there is no proxy explicitly set up, nor is the internet connection filtered. But there is a proxy server available, which browsers pick up via http://gateway/wpad.dat when they're configured to determine the proxy automatically. I don't know if that plays a role.
Also, I added the intermediate and root certificates so that verification would work, but it still doesn't.
And only IPv4 is available, no IPv6 (in case that plays a role).


wypiki commented May 15, 2024

I've spent a few hours now trying to get this to work:

  • added the FQDN to /etc/hosts
  • set GRADIO_SERVER_HOST=...
  • changed the parameters for uvicorn.run, changed host to the FQDN
  • made changes to HTTPX
  • and some more things,

but I still get:
httpx.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'localhost'. (_ssl.c:1007)

Please help, @pseudotensor!

The Gradio UI does work with the certificates; only the OpenAI API doesn't.

@pseudotensor

"localhost" sounds wrong, it should be maybe a real IP address.

In the latest h2oGPT, what I see at the end is:

Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Started Gradio Server and/or GUI: server_name: localhost port: None
Use local URL: http://localhost:7860/
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_info" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_names" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
OpenAI API URL: http://0.0.0.0:5000
INFO:__name__:OpenAI API URL: http://0.0.0.0:5000
OpenAI API key: EMPTY
INFO:__name__:OpenAI API key: EMPTY

i.e. 0.0.0.0, which on Linux means listening on all interfaces (the public version of the local IP) AFAIK. Maybe you have 'localhost' because you're either using an older h2oGPT or are on Windows/Mac. But you say you are on an Ubuntu PC?

When I go in Chrome to that IP/port, it works (the message is valid):

[screenshot: browser response from the OpenAI proxy endpoint]

Unfortunately I'm no expert in SSL.

I would recommend trying to set up a FastAPI server yourself, since that is what h2oGPT uses for the OpenAI proxy server. Do it in a very basic form and see what you can do.

E.g. from chatgpt:

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

This runs for me; do you have any problems with it? If this basic server works but h2oGPT still fails, we should be able to bisect the issue.
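
And to complete the check, one could hit that minimal server over plain HTTP with httpx (a sketch, mirroring how the clients talk to the proxy):

import httpx

# Plain-HTTP requests against the minimal FastAPI server started above.
print(httpx.get("http://127.0.0.1:8000/").json())                      # {"Hello": "World"}
print(httpx.get("http://127.0.0.1:8000/items/5", params={"q": "x"}).json())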
