Error when running Paddle Serving #1535
Comments
Where does this SERVING_BIN come from?
Regarding `fail to open file ./serving_server/batch_norm_41.b_0`: judging from the error message, is your model saved as scattered per-parameter files?
It was obtained from [link].
Change the `serving_version` in it to 0.7.0.
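For context, a minimal sketch of how a downloaded serving binary is usually wired up; the URL and the unpacked directory name below are only placeholders for the link mentioned above (with `serving_version` set to 0.7.0):

```bash
# Placeholder URL and directory name: substitute the actual download link
# from the comment above, with serving_version changed to 0.7.0.
wget https://example.com/serving-gpu-0.7.0.tar.gz
tar xf serving-gpu-0.7.0.tar.gz
# Point paddle_serving_server at the unpacked serving executable.
export SERVING_BIN=$(pwd)/serving-gpu-0.7.0/serving
```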
Hello. I then obtained serving_server and serving_client with the following command:

```bash
python -m paddle_serving_client.convert --dirname . \
    --model_filename model.pdmodel \
    --params_filename model.pdiparams \
    --serving_server ./serving_server/ \
    --serving_client ./serving_client/
```

The contents of serving_server are:

```
.
├── fluid_time_file
├── model.pdiparams
├── model.pdmodel
├── serving_server_conf.prototxt
└── serving_server_conf.stream.prototxt
```

After that, the following error occurred:

```
Error Message Summary:
----------------------
NotFoundError: Cannot open file ./serving_server/__model__, please confirm whether the file is normal.
  [Hint: Expected static_cast<bool>(fin.is_open()) == true, but received static_cast<bool>(fin.is_open()):0 != true:1.] (at /paddle/paddle/fluid/inference/api/analysis_predictor.cc:1119)
```

After renaming the model (model.pdmodel ==> __model__), the error becomes:

```
Error Message Summary:
----------------------
UnavailableError: Load operator fail to open file ./serving_server/batch_norm_41.b_0, please check whether the model file is complete or damaged.
  [Hint: Expected static_cast<bool>(fin) == true, but received static_cast<bool>(fin):0 != true:1.] (at /paddle/paddle/fluid/operators/load_op.h:41)
  [operator < load > error]
```
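A hedged suggestion that is not spelled out in the thread: since the advice above is to pin `serving_version` to 0.7.0, it may help to confirm that the installed wheels and the serving binary come from the same release:

```bash
# List the installed Paddle / Paddle Serving packages; their versions should
# match the release of the binary that SERVING_BIN points to (0.7.0 per the
# advice above).
pip list 2>/dev/null | grep -i -E "paddle|serving"
```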
OK, understood. I'll try to reproduce this on my side and should have a conclusion in about two hours.
Sorry, I was not able to reproduce this on my side.
@zouxiaoshi @bjjwwang Sorry to bother you. I am running into the same problem; could you share how it was resolved at the time? My steps: running
`python3 -m paddle_serving_server.serve --model ppocr_det_v3_serving --port 8181`
fails with the error below.
How can this be resolved? Relevant environment: using the Docker image. Other:
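Not from the original thread, just a quick sanity check one could run first: the directory passed to `--model` should be a converted serving export, which can be verified by listing it:

```bash
# The export produced by paddle_serving_client.convert should contain the
# model files plus serving_server_conf.prototxt; if that prototxt is missing,
# the conversion step was probably skipped.
ls ppocr_det_v3_serving/
```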
Problem:
Q1:
An error is raised when executing the following commands:
```bash
export SERVING_BIN=/usr/local/serving_bin/serving
python -m paddle_serving_server.serve \
    --model ./serving_server \
    --thread 8 --port 10010 \
    --gpu_ids 0
```
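As an aside (a sketch, not part of the original report): assuming SERVING_BIN is meant to point at the serving executable itself, a quick sanity check before starting the server could be:

```bash
# Check that the path exported above actually exists and is executable;
# a missing or non-executable binary makes the serve command fail early.
test -x "$SERVING_BIN" && echo "serving binary found: $SERVING_BIN" \
  || echo "SERVING_BIN does not point to an executable"
```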
Error message:
The model had previously been converted with the following command:
```bash
python -m paddle_serving_client.convert --dirname . \
    --model_filename model.pdmodel \
    --params_filename model.pdiparams \
    --serving_server ./serving_server/ \
    --serving_client ./serving_client/
```
which produced the following files:
```
.
├── model.pdiparams
├── model.pdmodel
├── serving_server_conf.prototxt
└── serving_server_conf.stream.prototxt
```
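To double-check that the conversion itself succeeded, the generated config can be inspected; it should list the model's feed and fetch variables (a sketch using the file names shown above):

```bash
# serving_server_conf.prototxt is generated by paddle_serving_client.convert
# and describes the feed/fetch variables the server will expose.
cat ./serving_server/serving_server_conf.prototxt
```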
Q2:
Forcibly renamed model.pdmodel:

```bash
mv model.pdmodel __model__
```

then started the Paddle Serving service and got the following errors for two models (see the note after them):
SOLOv2 model
YOLOv3 model
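A note on why the rename alone is unlikely to work (my reading of the errors, not confirmed in the thread): a server that looks for `__model__` expects the old layout in which every parameter is stored as its own file (hence the attempt to open `batch_norm_41.b_0`), while the converted export keeps all parameters combined in `model.pdiparams`. Undoing the rename keeps the export consistent with the combined format that the 0.7.0 release suggested earlier is expected to load; the `./serving_server/` path below is assumed:

```bash
# Restore the original file name of the combined inference model;
# renaming it to __model__ does not create the per-parameter files
# (e.g. batch_norm_41.b_0) that the old layout requires.
mv ./serving_server/__model__ ./serving_server/model.pdmodel
```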
The YOLOv3 model does run correctly in the following environment:
Environment