[Feature]: Add a new flag to k8sgpt analyze to directly output prompt #994

Open

gdoctor opened this issue Mar 1, 2024 · 0 comments
gdoctor commented Mar 1, 2024

Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've discussed this feature request in the K8sGPT Slack and got positive feedback

Is this feature request related to a problem?

No

Problem Description

IMO the main value of the k8sgpt analyze command is extracting Kubernetes resource data and intelligently packaging it into a prompt template that enables users to get insightful responses from an LLM (for a majority of "popular" LLMs). Well-crafted prompts have tremendous value by themselves.

While having a built-in integration with LLMs is nice, I would argue that is a secondary benefit of this command. It allows people to run this tool locally and get quick feedback. However, a lot of teams have complex frameworks in place to facilitate interactions with LLMs. As a result, they may prefer that the actual LLM interactions go through that framework rather than be bypassed by a tool such as this one.

What would be immensely valuable is getting access to the prompts. I could then take the prompt and feed it into my LLM framework. This fits naturally with the "function calling"/tool integrations.

Edit: I would need access to the system prompt + k8s resource prompt in the response.

Solution Description

Add a new flag, --prompt, that outputs the constructed prompt. This flag would be mutually exclusive with the --explain flag, which does the same work to create and anonymize the prompt, but then additionally passes it to an LLM and returns the response.

It may also be desirable to achieve the same result by modifying the existing -e/--explain flag. It could take a value, e.g. -e llm or -e prompt. These are probably bad names, but hopefully you get the idea. I am not sure which option fits best with the mindset of the developers who have worked on this.
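To illustrate the mutual-exclusion behavior described above, here is a minimal sketch in Go using only the standard library's flag package. This is an assumption about how the validation could look, not k8sgpt's actual implementation (k8sgpt builds its CLI with cobra); the function name parseAnalyzeFlags and the flag descriptions are hypothetical.

```go
package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

// parseAnalyzeFlags is a hypothetical helper sketching how the proposed
// --prompt flag could coexist with --explain. The mutual-exclusion rule
// is an assumption drawn from this issue's Solution Description.
func parseAnalyzeFlags(args []string) (explain, prompt bool, err error) {
	fs := flag.NewFlagSet("analyze", flag.ContinueOnError)
	fs.BoolVar(&explain, "explain", false,
		"build the prompt, send it to the configured LLM, and print the response")
	fs.BoolVar(&prompt, "prompt", false,
		"build the prompt and print it instead of querying an LLM")
	if err = fs.Parse(args); err != nil {
		return
	}
	// Either emit the prompt or send it to the backend -- never both.
	if explain && prompt {
		err = errors.New("--explain and --prompt are mutually exclusive")
	}
	return
}

func main() {
	_, prompt, err := parseAnalyzeFlags(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if prompt {
		fmt.Println("(would print the assembled system + resource prompt here)")
	}
}
```

With this shape, `k8sgpt analyze --prompt` would print the prompt for piping into an external LLM framework, while `k8sgpt analyze --explain --prompt` would fail fast with an error.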

Benefits

We get access to well-constructed prompts that we can use however we want. I could technically tie into the code and do this today, but this removes those extra steps.

Potential Drawbacks

N/A

Additional Information

No response

@gdoctor gdoctor changed the title [Feature]: Add new flag to k8sgpt analyze directly output prompt [Feature]: Add a new flag to k8sgpt analyze to directly output prompt Mar 1, 2024