How do you send the request if you have two input nodes? #9

Open
ackbar03 opened this issue Aug 17, 2018 · 2 comments

@ackbar03
Hi,

In my normal model I need to feed two placeholders into the model via feed_dict to run it. I've defined the signature accordingly for TensorFlow Serving:

prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'image': tensor_info_x1, 'im_info': tensor_info_x2},
        outputs={'cls_score': tensor_info_y1, 'cls_prob': tensor_info_y2,
                 'bbox_pred': tensor_info_y3, 'rois_l3': tensor_info_y4},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
    )
)

How do I put that in the req_data list? I'm currently doing

[{'in_tensor_name': 'Placeholder:0', 'in_tensor_dtype': 'DT_FLOAT', 'data': blobs['data']},
 {'in_tensor_name': 'Placeholder_1:0', 'in_tensor_dtype': 'DT_FLOAT', 'data': blobs['im_info']}]

but I keep getting the error

<_Rendezvous of RPC that terminated with (StatusCode.FAILED_PRECONDITION, Serving signature key "serving_default" not found.)>
Prediction failed!

I suspect it's because the request data isn't formatted correctly.

Help!

Thanks

@ackbar03 (Author)

Never mind, I figured it out: when building the signature_def_map I had used the key prediction_signature. I rebuilt the model using serving_default as the key and it matched up.
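
For reference, a minimal sketch of the export that worked for me, keyed by the default serving signature so the client's default lookup resolves (prediction_signature, sess, and export_path are from my own setup, so adapt as needed):

# Register the signature under 'serving_default' so clients that don't
# specify a signature name (like this one) can find it.
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            prediction_signature
    })
builder.save()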

That being said, I think it would be a good option to be able to define the signature name in the client-side app. I took a quick look through the code and am not sure where I could change it or make it an option, but I think it would be a good feature to have. I don't mind forking and adding it myself, but I'm not sure I have the time or know-how :P
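
In the meantime, the raw TensorFlow Serving gRPC stub lets you set the signature name directly on the request, so something like this should work as a workaround (an untested sketch, assuming tensorflow-serving-api is installed; the model name and input keys here are just illustrative):

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:9000')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model'  # hypothetical model name
request.model_spec.signature_name = 'prediction_signature'  # any key you exported

# Keys must match the signature's input names, not the placeholder names.
request.inputs['image'].CopyFrom(tf.make_tensor_proto(blobs['data'], dtype=tf.float32))
request.inputs['im_info'].CopyFrom(tf.make_tensor_proto(blobs['im_info'], dtype=tf.float32))

result = stub.Predict(request, 10.0)  # 10 second timeout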

@chagmgang commented Sep 29, 2018

I'd like to know in detail how you dealt with that problem. I ran into the same one.

import tensorflow as tf
import os

SAVE_PATH = './save'
MODEL_NAME = 'test'
VERSION = 1
SERVE_PATH = './serve/{}/{}'.format(MODEL_NAME, VERSION)

checkpoint = tf.train.latest_checkpoint(SAVE_PATH)

tf.reset_default_graph()

with tf.Session() as sess:
    # import the saved graph
    saver = tf.train.import_meta_graph(checkpoint + '.meta')
    # get the graph for this session
    graph = tf.get_default_graph()
    saver.restore(sess, checkpoint)
    # get the tensors that we need
    inputs = graph.get_tensor_by_name('inputs:0')
    outputs = graph.get_tensor_by_name('outputs:0')
    targets = graph.get_tensor_by_name('targets:0')
    predictions = graph.get_tensor_by_name('prediction:0')

    model_input = tf.saved_model.utils.build_tensor_info(inputs)
    model_outputs = tf.saved_model.utils.build_tensor_info(outputs)
    model_targets = tf.saved_model.utils.build_tensor_info(targets)

    model_predictions = tf.saved_model.utils.build_tensor_info(predictions)

    signature_definition = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'inputs': model_input, 'outputs': model_outputs, 'targets': model_targets},
        outputs={'prediction': model_predictions},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

    builder = tf.saved_model.builder.SavedModelBuilder(SERVE_PATH)

    # add_meta_graph_and_variables may only be called once per builder,
    # so both signature keys go into a single signature_def_map.
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            'prediction': signature_definition,
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature_definition
        })
    # Save the model so we can serve it with a model server :)
    builder.save()
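
To double-check what actually got exported, you can load the SavedModel back and print its signature keys; with the export above you should see both 'prediction' and 'serving_default' (a small sketch reusing the paths from the script):

import tensorflow as tf

# Load the exported model and list its signature keys.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], './serve/test/1')
    print(list(meta_graph_def.signature_def.keys()))

The client side then looks like this: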
import numpy as np
from predict_client.prod_client import ProdClient

HOST = '0.0.0.0:9000'
# a good idea is to place this global variables in a shared file
MODEL_NAME = 'test'
MODEL_VERSION = 1

client = ProdClient(HOST, MODEL_NAME, MODEL_VERSION)

# x, y, z are numpy arrays shaped to match the signature's inputs
req_data = [{'in_tensor_name': 'inputs', 'in_tensor_dtype': 'DT_FLOAT', 'data': x},
            {'in_tensor_name': 'outputs', 'in_tensor_dtype': 'DT_FLOAT', 'data': y},
            {'in_tensor_name': 'targets', 'in_tensor_dtype': 'DT_FLOAT', 'data': z}]

prediction = client.predict(req_data, request_timeout=10)

print(prediction)
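
Note that ProdClient looks up the default serving_default signature, so with the merged signature_def_map above this request should resolve; to hit the 'prediction' key instead you'd need a client that sets the signature name, as in the gRPC sketch earlier in the thread.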
