What if I have a skill where it's easy to generate synthetic data in an algorithmic way (so I can generate 100 questions easily myself), and I'm not satisfied with the questions generated by the teacher model?
Can I turn it off?
Or is there a way for me to influence the teacher model in some way?
Example (Semantic version skill):
In the qna.yaml, I have questions like:
Sort the following version strings from lowest to highest: 1.2.3, 2.2.1, 1.0.1
Which of these versions belong to major version 1: 2.1.1, 1.2.0, 11.1.0
The generated questions are sometimes way off, like:
Determine the highest peak in the world for each continent.
Africa: Kilimanjaro (5,895 m)
Antarctica: Vinson Massif (4,892 m)
Other generated questions are about basic arithmetic.
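For context, the kind of algorithmic generation described above can be sketched in a few lines of Python. This is a hypothetical generator, not part of any InstructLab tooling; the `question`/`answer` field names merely mirror the qna.yaml examples above:

```python
import random

def random_version(rng):
    """Produce a random semantic version string like '11.2.0'."""
    return f"{rng.randint(0, 12)}.{rng.randint(0, 9)}.{rng.randint(0, 9)}"

def version_key(version):
    """Numeric sort key, so that 11.1.0 sorts above 2.1.1."""
    return tuple(int(part) for part in version.split("."))

def make_sort_pair(rng, n=3):
    """Generate one sorting question with its correct answer."""
    versions = [random_version(rng) for _ in range(n)]
    question = ("Sort the following version strings from lowest to highest: "
                + ", ".join(versions))
    answer = ", ".join(sorted(versions, key=version_key))
    return {"question": question, "answer": answer}

rng = random.Random(42)  # fixed seed for reproducibility
pairs = [make_sort_pair(rng) for _ in range(100)]
```

Dumping `pairs` through a YAML serializer would then yield as many seed examples as desired, each with a guaranteed-correct answer.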
I would think this sort of task is hard for LLMs, since they are text-based rather than math-oriented. So even if you could generate a lot of Q/A pairs with correct information, I doubt the model would ever learn to sort by number reliably.
The "which of these versions belong to major version X" skill is more feasible, since it is essentially text extraction rather than math.