Ollama support doesn't work as shown in docs #207

Closed
lzytitan494 opened this issue Aug 21, 2024 · 4 comments

@lzytitan494

The example in "docs/lmms.md" shows the following code:

import vision_agent as va

model = va.lmm.get_lmm("llava")
model.generate("Describe this image", "image.png")

which raises the following error:

Traceback (most recent call last):
  File "/mnt/d/coding/Testing/vision-agents/agent.py", line 9, in <module>
    model = va.lmm.get_lmm("llava")
            ^^^^^^^^^^^^^^
AttributeError: module 'vision_agent.lmm' has no attribute 'get_lmm'

I'm using version 0.2.109.

@lzytitan494 (Author)

I changed the code to:

import vision_agent as va

model = va.lmm.OllamaLMM("llava:7b")
response = model.generate(prompt = "Describe this image", media=["jar.png"])
print(response)

and changed lmm.py as follows:

# stream = stream.json()
# return stream["message"]["content"]

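# Ollama's streaming responses are newline-delimited JSON (one object
# per line), so each line must be parsed separately instead of calling
# stream.json() once (assumes `import json` at the top of lmm.py).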
response = stream.content.decode('utf-8')
json_objects = response.strip().split('\n')
parsed_data = []
for obj in json_objects:
    parsed_data.append(json.loads(obj))

res = ''.join(obj['response'] for obj in parsed_data)
return res

I did this because stream.json() was previously raising the following error:

Traceback (most recent call last):
  File "/home/lzytitan/miniconda3/envs/test/lib/python3.11/site-packages/requests/models.py", line 962, in json
    return complexjson.loads(self.content.decode(encoding), **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lzytitan/miniconda3/envs/test/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lzytitan/miniconda3/envs/test/lib/python3.11/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 99)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/d/coding/Testing/vision-agents/agent.py", line 4, in <module>
    response = model.generate(prompt = "Describe this image", media=["jar.png"])
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lzytitan/miniconda3/envs/test/lib/python3.11/site-packages/vision_agent/lmm/lmm.py", line 448, in generate
    stream = stream.json()
             ^^^^^^^^^^^^^
  File "/home/lzytitan/miniconda3/envs/test/lib/python3.11/site-packages/requests/models.py", line 970, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Extra data: line 2 column 1 (char 99)

This now works fine for me; I tested it 10 times with different images and prompts. But is this really the correct way to solve it?
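For context (an illustrative sketch, not from the original thread): the "Extra data" error happens because a streaming Ollama response body contains one JSON object per line, and json.loads only accepts a single object. A minimal reproduction, with field names mirroring Ollama's /api/generate stream format:

import json

# Two newline-delimited JSON objects, mimicking a streamed Ollama response.
body = '{"response": "A jar ", "done": false}\n{"response": "of honey.", "done": true}'

try:
    json.loads(body)  # parses the first object, then hits the second
except json.JSONDecodeError as e:
    print(e)  # Extra data: line 2 column 1 (char 38)

# Parsing line by line, as the patch above does, recovers the full text:
text = ''.join(json.loads(line)['response'] for line in body.splitlines() if line)
print(text)  # A jar of honey.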

dillonalaird self-assigned this Aug 25, 2024
@dillonalaird (Member)

Hey @lzytitan494, thanks for bringing these issues up. On the first post: that is some old documentation I forgot to remove; your second post is how you should query the Ollama model.

There's a bug in generate where I don't explicitly pass stream=False when not streaming. I'll have a PR up today that fixes this and also adds better Ollama support for VisionAgent.
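For reference, a minimal sketch of the kind of fix described, assuming OllamaLMM posts to Ollama's /api/generate endpoint via requests (the function name and endpoint URL here are illustrative; the actual fix is in the PR below):

import requests

def ollama_generate(prompt: str, model: str = "llava:7b") -> str:
    # With "stream": False, Ollama returns a single JSON object instead
    # of newline-delimited chunks, so response.json() parses cleanly.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
    )
    response.raise_for_status()
    return response.json()["response"]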

@dillonalaird (Member)

This PR should fix the issue and also adds an OllamaVisionAgentCoder class: #208

@lzytitan494 (Author) commented Aug 26, 2024

Yes, I have checked the updated lmm.py file and it works fine. Thank you.
