Support Apple's "Vision" framework for on-device OCR text recognition #10480
Comments
Completely agree. We have used VisionKit in apps before and it's quite nice. Margelo's React Native Vision Camera could likely be used with NativeScript as well; as Marc mentioned, its libraries were recently decoupled from React. We want to try that sometime early this year.
Isn't this already possible in NativeScript using the native iOS APIs through JS? I am not sure I understand what more needs to be added to NativeScript.
I believe @jasongitmail is just suggesting a condensed API to make it a bit more palatable. We have some examples of VisionKit usage in NativeScript apps that I'll have to share sometime.
@NathanWalker Oh, OK. I will switch my app to using it too (it uses Tesseract right now), so there will be another real-world example at that point.
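As the comments note, Vision should already be reachable through NativeScript's Objective-C interop. Below is a rough sketch of what that could look like. The class names (`VNRecognizeTextRequest`, `VNImageRequestHandler`) come from Apple's Vision framework, but the exact NativeScript marshalling (the `alloc()/init…` pattern, the completion-handler shape, and the out-parameter handling) is an assumption here, not a verified implementation. The `topCandidateLines` helper at the bottom is pure TypeScript and shows how top candidates could be pulled out of the observations:

```typescript
// In a NativeScript app these Vision classes are exposed as globals;
// declared here so the sketch type-checks outside that runtime.
declare const VNRecognizeTextRequest: any;
declare const VNImageRequestHandler: any;

interface RecognizedLine {
  text: string;
  confidence: number; // 0..1, as reported by Vision
}

// Pure helper: take the top candidate from each observation-like object.
// Written against a minimal shape so it can be exercised without iOS.
// Note: in a real app, Vision's results come back as an NSArray and
// would need converting to a JS array first.
function topCandidateLines(
  observations: Array<{
    topCandidates(n: number): Array<{ string: string; confidence: number }>;
  }>
): RecognizedLine[] {
  return observations.map((o) => {
    const best = o.topCandidates(1)[0];
    return { text: best.string, confidence: best.confidence };
  });
}

// Hypothetical entry point (not runnable off-device): build a
// VNRecognizeTextRequest, hand it a CGImage, and resolve the lines.
function recognizeText(cgImage: unknown): Promise<RecognizedLine[]> {
  return new Promise((resolve, reject) => {
    const request = VNRecognizeTextRequest.alloc().initWithCompletionHandler(
      (req: any, error: any) => {
        if (error) {
          reject(error);
        } else {
          resolve(topCandidateLines(req.results));
        }
      }
    );
    // Assumed marshalled names for initWithCGImage:options: and
    // performRequests:error: — the real bridged names may differ.
    const handler = VNImageRequestHandler.alloc().initWithCGImageOptions(cgImage, {});
    handler.performRequestsError([request], null);
  });
}
```

The point of the issue, as stated above, is that nobody should have to write this interop boilerplate by hand; a first-party wrapper would hide it behind one call.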
Is your feature request related to a problem? Please describe.
I'd like to be able to use Apple's excellent on-device "Vision" framework for OCR text recognition, available since iOS 13. (For clarity, this is unrelated to visionOS and the Vision Pro headset.)
It offers high accuracy and speed for OCR: it runs on-device, uses machine learning for recognition, and has been optimized by Apple in recent years.
Describe the solution you'd like
A TypeScript API that accepts an image and returns both the recognized text and the positions of that text within the image, backed by Apple's Vision OCR APIs.
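To make the "text plus positions" shape concrete, here is a hedged sketch of what such a surface might return. The type and function names are hypothetical; the one Vision-specific fact baked in is that Vision reports bounding boxes in a normalized [0, 1] coordinate space with the origin at the bottom-left of the image, which a wrapper would likely convert to top-left pixel coordinates for JS/TS consumers:

```typescript
// Hypothetical result shape for an OCR call: each recognized string plus
// its bounding box in pixel coordinates (top-left origin).
interface OcrBox {
  x: number;
  y: number;
  width: number;
  height: number;
}

interface OcrResult {
  text: string;
  confidence: number; // 0..1
  box: OcrBox;
}

// Vision's boundingBox values are normalized to [0, 1] with the origin
// at the bottom-left of the image. Convert to pixel coordinates with a
// top-left origin, which is what most JS/TS imaging code expects.
function normalizedToPixelBox(
  normalized: OcrBox,
  imageWidth: number,
  imageHeight: number
): OcrBox {
  return {
    x: normalized.x * imageWidth,
    // Flip the y-axis: Vision's y measures up from the bottom edge.
    y: (1 - normalized.y - normalized.height) * imageHeight,
    width: normalized.width * imageWidth,
    height: normalized.height * imageHeight,
  };
}
```

For example, a box Vision reports as `{ x: 0.25, y: 0.5, width: 0.25, height: 0.25 }` in a 1000×800 image maps to a 250×200-pixel box whose top-left corner sits at (250, 200).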
Describe alternatives you've considered
Anything else?