It would surely be useful for the CLI to support the new Collections feature directly.
A particularly strong use-case would be to upload mapping data into a collection. I have 1000 mappings in my GH repo and want to upload them to Lightning for use in my workflow. I don't really want to have to create a workflow step and copy-paste the values in for this; I want to call the Collections API directly.
- `get` would fetch values from a collection and return a JSON object of key/value pairs. Uses `--output` to write the result to disk (if unset, writes to stdout). Pass `--json` to parse values as JSON. (A rough sketch of this follows the list.)
- `count` would return the number of values which match a particular pattern. Writes to stdout only.
- `keys` would return a list of matching keys without downloading any data (we don't even support this!)
- `set` would accept data through a file, either CSV or JSON. If JSON, it should expect an object of key/value pairs, and it should probably stringify values.
- `delete` would delete keys matching the pattern and return the count of deleted keys.
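To make this concrete, here is a minimal sketch of what a `get` implementation might look like (TypeScript, Node 18+). The endpoint path, response shape, option names and the `get` function itself are all assumptions for illustration, not the real Lightning Collections API:

```ts
import { writeFile } from 'node:fs/promises';

// Everything below is illustrative: the endpoint path, response shape and
// option names are assumptions, not the real Lightning Collections API.
type GetOptions = {
  lightningUrl: string; // base URL of the Lightning instance (assumed)
  collection: string;   // collection name
  key: string;          // key or key pattern, e.g. "2024*"
  token: string;        // personal access token
  output?: string;      // --output: write result to disk instead of stdout
  json?: boolean;       // --json: parse stored values as JSON
};

export const get = async (opts: GetOptions) => {
  // Hypothetical REST shape: GET /collections/:name?key=<pattern>
  const url = `${opts.lightningUrl}/collections/${opts.collection}?key=${encodeURIComponent(opts.key)}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${opts.token}` },
  });
  if (!res.ok) throw new Error(`Collections request failed: ${res.status}`);

  // Assume the server returns { items: [{ key, value }, ...] } with string values
  const { items } = (await res.json()) as { items: { key: string; value: string }[] };

  // Build a plain object of key/value pairs, optionally parsing values as JSON
  const result: Record<string, unknown> = {};
  for (const { key, value } of items) {
    result[key] = opts.json ? JSON.parse(value) : value;
  }

  const serialized = JSON.stringify(result, null, 2);
  if (opts.output) {
    await writeFile(opts.output, serialized);
  } else {
    process.stdout.write(serialized + '\n');
  }
  return result;
};
```

`count`, `keys` and `delete` could presumably follow the same shape, just hitting different endpoints or query params.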
Auth
We need to think carefully about auth.
The request needs to include a personal access token.
But we probably don't want to take this as a flat env var, because access tokens are scoped per project, so users will have several, which makes it a bit fiddly to submit to the CLI.
Options:
- Accept an env var name in the command, i.e. `--token MY_OPENFN_TOKEN` (but surely `--token $MY_OPENFN_TOKEN` also works?)
- If a token isn't provided, prompt the user to paste it in (annoying)
- Allow tokens to be saved by the CLI (which must encrypt and write them to the repo somewhere). Then we only need to take a token once and the CLI can remember it for that collection name (sketched below)
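For the third option, a hedged sketch of "encrypt and write to the repo" using only `node:crypto`. The file location (`.openfn/collections-tokens.json`), the passphrase source and the AES-256-GCM scheme are all assumptions, not an agreed design:

```ts
import { randomBytes, scryptSync, createCipheriv, createDecipheriv } from 'node:crypto';
import { readFile, writeFile, mkdir } from 'node:fs/promises';
import path from 'node:path';

// Assumed repo-local storage location (would need to be gitignored)
const CONFIG_DIR = path.resolve('.openfn');
const TOKEN_FILE = path.join(CONFIG_DIR, 'collections-tokens.json');

// Derive an AES key from a passphrase (which could itself be prompted for once)
const deriveKey = (passphrase: string, salt: Buffer) => scryptSync(passphrase, salt, 32);

export const saveToken = async (collection: string, token: string, passphrase: string) => {
  const salt = randomBytes(16);
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', deriveKey(passphrase, salt), iv);
  const encrypted = Buffer.concat([cipher.update(token, 'utf8'), cipher.final()]);
  const record = {
    salt: salt.toString('hex'),
    iv: iv.toString('hex'),
    tag: cipher.getAuthTag().toString('hex'),
    data: encrypted.toString('hex'),
  };

  // Merge into a simple JSON store keyed by collection name
  await mkdir(CONFIG_DIR, { recursive: true });
  let store: Record<string, typeof record> = {};
  try {
    store = JSON.parse(await readFile(TOKEN_FILE, 'utf8'));
  } catch {
    // first run: no store yet
  }
  store[collection] = record;
  await writeFile(TOKEN_FILE, JSON.stringify(store, null, 2));
};

export const loadToken = async (collection: string, passphrase: string) => {
  const store = JSON.parse(await readFile(TOKEN_FILE, 'utf8'));
  const record = store[collection];
  if (!record) return undefined;
  const decipher = createDecipheriv(
    'aes-256-gcm',
    deriveKey(passphrase, Buffer.from(record.salt, 'hex')),
    Buffer.from(record.iv, 'hex')
  );
  decipher.setAuthTag(Buffer.from(record.tag, 'hex'));
  const decrypted = Buffer.concat([decipher.update(Buffer.from(record.data, 'hex')), decipher.final()]);
  return decrypted.toString('utf8');
};
```

The passphrase still has to come from somewhere (prompted once, or a machine-local secret), so this reduces how often the user handles the raw token rather than eliminating the problem.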
Implementation details
Should this use the collections adaptor under the hood? Probably not, because it behaves very differently. But we might copy some code across to handle streaming efficiently.
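For example, a rough sketch of how `set` could stream a large CSV of key,value rows and upload in batches rather than buffering the whole file. The endpoint shape, batch size and helper names are assumptions:

```ts
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

type SetOptions = {
  lightningUrl: string; // base URL of the Lightning instance (assumed)
  collection: string;
  token: string;
  file: string;         // path to a CSV of key,value rows
  batchSize?: number;
};

// Hypothetical upsert call: POST /collections/:name with { items: [...] }
const upsert = async (opts: SetOptions, items: { key: string; value: string }[]) => {
  const res = await fetch(`${opts.lightningUrl}/collections/${opts.collection}`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${opts.token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ items }),
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
};

export const set = async (opts: SetOptions) => {
  const batchSize = opts.batchSize ?? 500;
  const rl = createInterface({ input: createReadStream(opts.file) });

  let batch: { key: string; value: string }[] = [];
  let total = 0;

  // Read the file line by line so 1000s of mappings never sit in memory at once
  for await (const line of rl) {
    if (!line.trim()) continue;
    // Naive CSV split; a real implementation would use a proper CSV parser
    const [key, ...rest] = line.split(',');
    batch.push({ key, value: rest.join(',') });
    if (batch.length >= batchSize) {
      await upsert(opts, batch);
      total += batch.length;
      batch = [];
    }
  }
  if (batch.length) {
    await upsert(opts, batch);
    total += batch.length;
  }
  return total; // number of keys written
};
```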