```
rr: Company
rr: Contact
rr: Country
rr: Maria Anders
rr: Germany
rr: Berglunds snabbkOp
rr: Alfreds Futterkiste
rr: Christina Berglund
rr: Sweden
rr: Centro comercial Moctezuma
rr: Francisco Chang
rr: Mexico
rr: Roland Mendel
rr: Austria
rr: Island Trading
rr: Ernst Handel.
rr: Helen Bennett
rr: UK
rr: Philip Cramer
rr: Germany
rr: Yoshi Tannamuri
rr: Koniglich Essen
rr: Laughing Bacchus Winecellars
rr: Canada
rr: Magazzini Alimentari Riuniti
rr: Giovanni Rovelli
rr: ttaly
```
I've integrated the deploy/cpp_infer example into my project.
I use the precompiled avx_mkl_cuda10.1_cudnn7.6.5_avx_mkl_no_trt library (https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html) on Windows 11.
I've set all the needed flags and call the OCR like this:
I use this image as input: https://i.imgur.com/FlTDRoq.png
When I run it on CPU, I get this:
Full output with log: http://sprunge.us/WJmhCR
But when I run it on GPU (I just change `DEFINE_bool(use_gpu, false, "Infering with GPU or CPU.");` to `true` in args.cpp), I get this:
Full output with log: http://sprunge.us/HwI2NU
Please help me understand what I'm missing to get PaddleOCR working on GPU.