openai implementation possible you think? #14
Replies: 1 comment 3 replies
-
Hi @Drlordbasil, thank you for the suggestion. Analyzing screenshots from a video for extra context can indeed be useful, but it might not always be effective, since individual frames vary widely in how much useful information they contain, and it would also slow the bot down. An alternative would be to feed the video's title, description, or subtitles from the transcript to an LLM, which would then generate new comments from that context; however, that could lead to repetitive comments. In my view, the best solution would be for the LLM to pick the most relevant comments from our existing pool, and then reuse the existing code logic to sort them by recency. Given the bot's focus on fast commenting, low-latency models such as GPT-3.5-turbo would likely be more advantageous. Let me know what you think.
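A minimal sketch of the selection idea: ask a low-latency model to rank the canned comment pool against the video's title, then hand the winners to the existing recency-sorting logic. The function and parameter names here are illustrative, not from this repo:

```python
def build_selection_prompt(video_title, comments):
    """Build a prompt asking the model to rank canned comments by relevance.

    Illustrative helper; the real bot would plug in its own comment pool.
    """
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return (
        f"Video title: {video_title}\n"
        f"Candidate comments:\n{numbered}\n"
        "Reply with only the numbers of the three most relevant comments."
    )

# Calling the model (requires OPENAI_API_KEY; sketch, not tested end to end):
# from openai import OpenAI
# client = OpenAI()
# prompt = build_selection_prompt(title, comment_pool)
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": prompt}],
#     max_tokens=20,
# )
```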
-
I was going over it and thinking that adding an API-based OpenAI GPT-4 model would be helpful, especially now that they support image input (docs):
```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    },
                },
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0])
```
Maybe we could take a screenshot of the video for the AI to understand?
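Feeding a local screenshot to the vision API means base64-encoding it into a data URL. A sketch, assuming OpenCV (`cv2`, not a project dependency) for the frame grab; `frame_to_data_url` is an illustrative helper name:

```python
import base64


def frame_to_data_url(jpeg_bytes):
    """Encode raw JPEG bytes as a data URL the vision API accepts as an image_url."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return f"data:image/jpeg;base64,{b64}"


# Grabbing one frame might look like this (assumes opencv-python is installed):
# import cv2
# cap = cv2.VideoCapture("video.mp4")
# ok, frame = cap.read()
# if ok:
#     ok, buf = cv2.imencode(".jpg", frame)
#     url = frame_to_data_url(buf.tobytes())
#     # pass {"type": "image_url", "image_url": {"url": url}} in the message
```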
Hell, we could probably use a free model from Hugging Face pipelines too, for free generation.
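For reference, a free-generation sketch with a Hugging Face `text-generation` pipeline (assumes the `transformers` package; the model name is just an example, and `strip_prompt` is an illustrative helper, since these pipelines echo the prompt back in `generated_text`):

```python
def strip_prompt(generated_text, prompt):
    """Drop the echoed prompt so only the newly generated comment remains."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].strip()
    return generated_text.strip()


# Sketch of the pipeline call (requires `pip install transformers`):
# from transformers import pipeline
# generator = pipeline("text-generation", model="distilgpt2")
# prompt = "Write a short YouTube comment about a nature walk video:"
# out = generator(prompt, max_new_tokens=40, num_return_sequences=1)
# comment = strip_prompt(out[0]["generated_text"], prompt)
```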