How to authenticate a webhook with a provider. #23
RobertoGongora started this conversation in Show and tell
This will cover the specific use case of authorizing a Toggl Track webhook against LaraLlama, but it can easily be replicated for any other vendor.
Context
I added a new Webhook Source in LaraLlama and used the generated token and URL for a custom webhook integration in Toggl.
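For reference, the Toggl side of this setup can also be done through Toggl Track's Webhooks API rather than the web UI. The sketch below is a minimal Python example; the endpoint, field names, and auth scheme reflect my reading of Toggl's public webhooks docs, and the workspace ID, API token, and LaraLlama URL are placeholders, so verify everything against the current documentation before relying on it.

```python
# Minimal sketch (not the official setup flow): registering the LaraLlama
# webhook URL with Toggl via Toggl Track's Webhooks API. Endpoint, field
# names, and auth scheme are taken from my reading of Toggl's docs; verify
# before use. WORKSPACE_ID, API_TOKEN, and LARALLAMA_URL are placeholders.
import requests

WORKSPACE_ID = 123456                 # your Toggl workspace ID
API_TOKEN = "your-toggl-api-token"    # from your Toggl profile page
LARALLAMA_URL = "https://your-larallama.example.com/api/webhooks/<token>"

resp = requests.post(
    f"https://track.toggl.com/webhooks/api/v1/subscriptions/{WORKSPACE_ID}",
    auth=(API_TOKEN, "api_token"),    # Toggl uses the token as basic-auth user
    json={
        "url_callback": LARALLAMA_URL,
        "event_filters": [{"entity": "time_entry", "action": "*"}],
        "enabled": True,
        "description": "LaraLlama time-entry webhook",
        "secret": "shared-secret-for-signature-checks",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())                    # the new subscription, initially unvalidated
```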
This was my initial prompt:
After I created the Webhook in LaraLlama, I received my first event immediately: Toggl sent a PING event to the webhook URL to validate that it works, which proves the Source is working 🎉
But this is the result I got:
Since Toggl sent only a ping event, it didn't include any time_entries, and the prompt wasn't specific about what to do in that case, so the result makes sense. It's a non-issue as long as we can receive valid time entries from Toggl.
The Problem
Some providers require webhook validation before they enable event pushing, even if their PING returns a 200. In Toggl's case this is documented, and an UNVALIDATED tag is shown next to your webhook. Their docs explain that the PING event includes a validation URL you can simply follow to mark your webhook as enabled, so we need the output of the PING event.
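For context, the validation ping looks roughly like the sketch below. The field names are taken from my reading of Toggl's webhooks documentation, so treat this as illustrative rather than authoritative; the key field is validation_code_url, and issuing a GET request to it marks the subscription as validated.

```python
# Roughly what Toggl's validation ping carries (field names per my reading
# of Toggl's webhooks docs; illustrative, not authoritative). Following
# validation_code_url flips the subscription from UNVALIDATED to validated.
ping_event = {
    "payload": "ping",
    "subscription_id": 42,
    "validation_code": "abc123",
    "validation_code_url": "https://track.toggl.com/webhooks/api/v1/validate/...",
}
```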
Solution
In a situation like this, you would normally need to implement a listener for the ping event, parse the payload, extract the URL, and open it. That can be simple if you have access to the raw webhook logs, or complicated if the payload is protected and only the app has access to it. A sketch of that traditional approach follows below.
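Here is a minimal sketch of that traditional approach, assuming a small Python/Flask endpoint (the framework and route path are arbitrary choices of mine, not anything LaraLlama ships): it watches for Toggl's ping and follows the validation URL automatically. This is exactly the hand-written code the prompt change below avoids.

```python
# Hypothetical standalone listener: detect Toggl's validation ping, extract
# validation_code_url, and follow it. Framework (Flask) and route path are
# assumptions for illustration only.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhooks/toggl", methods=["POST"])
def toggl_webhook():
    event = request.get_json(silent=True) or {}

    # The validation ping carries payload == "ping" plus a URL that, when
    # fetched, marks the subscription as validated.
    if event.get("payload") == "ping" and "validation_code_url" in event:
        requests.get(event["validation_code_url"], timeout=10)
        return jsonify({"status": "validated"}), 200

    # Real events (e.g. time entries) would be handled here.
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8000)
```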
With LaraLlama, I added this to the prompt:
And when the ping event was re-delivered, this was the result:
A simple click on the URL, and that's it.
After that, the webhook started receiving events in real time as expected, with correct outputs. Here's an example:
Conclusion
This shows how powerful an LLM-based tool like this can be. With no changes to source code, no deployments, and no downtime, I was able to implement a new feature and make my Source interact with a different kind of event just by asking nicely.
It also showcases LaraLlama's resistance to hallucinations: it clearly stated that it couldn't complete the task instead of "trying its best" and making up incorrect assumptions about the expected output.
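One closing note on the "authenticate" part of the title: once deliveries are flowing, each one can also be verified as genuinely coming from Toggl. To my understanding of Toggl's webhooks docs, deliveries are signed with an HMAC-SHA256 of the raw body using the subscription's secret and sent in an x-webhook-signature-256 header; a minimal check, assuming that header name and format, could look like this:

```python
# Sketch of delivery authentication, assuming Toggl signs the raw body with
# HMAC-SHA256 using the subscription secret and sends "sha256=<hex>" in the
# x-webhook-signature-256 header (per my reading of Toggl's docs; verify the
# header name and format against the current documentation).
import hashlib
import hmac

def is_authentic(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Return True if the delivery's signature matches our own HMAC."""
    expected = "sha256=" + hmac.new(
        secret.encode(), raw_body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(expected, signature_header or "")
```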
Replies

"Can you add this to the docs (https://github.com/LlmLaraHub/docs-vitepress/blob/main/use-cases.md)? That would be a big help!"