tensorflow_cpp 1.0.6

Model Class Reference
Wrapper class for running TensorFlow SavedModels or FrozenGraphs.
#include <model.h>
Public Member Functions | |
| Model () | |
| Creates an uninitialized model. | |
| Model (const std::string &model_path, const bool warmup=false, const bool allow_growth=true, const double per_process_gpu_memory_fraction=0, const std::string &visible_device_list="") | |
| Creates a model by loading it from disk. | |
| void | loadModel (const std::string &model_path, const bool warmup=false, const bool allow_growth=true, const double per_process_gpu_memory_fraction=0, const std::string &visible_device_list="") |
| Loads a SavedModel or FrozenGraph model from disk. | |
| bool | isLoaded () const |
| Checks whether the model is loaded already. | |
| std::unordered_map< std::string, tf::Tensor > | operator() (const std::vector< std::pair< std::string, tf::Tensor > > &inputs, const std::vector< std::string > &output_names) const |
| Runs the model. | |
| tf::Tensor | operator() (const tf::Tensor &input_tensor) const |
| Runs the model. | |
| std::vector< tf::Tensor > | operator() (const std::vector< tf::Tensor > &input_tensors) const |
| Runs the model. | |
| std::vector< int > | getNodeShape (const std::string &name) |
| Determines the shape of a model node. | |
| std::vector< int > | getInputShape () |
| Determines the shape of the model input. | |
| std::vector< int > | getOutputShape () |
| Determines the shape of the model output. | |
| std::vector< std::vector< int > > | getInputShapes () |
| Determines the shape of the model inputs. | |
| std::vector< std::vector< int > > | getOutputShapes () |
| Determines the shape of the model outputs. | |
| tf::DataType | getNodeType (const std::string &name) |
| Determines the datatype of a model node. | |
| tf::DataType | getInputType () |
| Determines the datatype of the model input. | |
| tf::DataType | getOutputType () |
| Determines the datatype of the model output. | |
| std::vector< tf::DataType > | getInputTypes () |
| Determines the datatype of the model inputs. | |
| std::vector< tf::DataType > | getOutputTypes () |
| Determines the datatype of the model outputs. | |
| std::string | getInfoString () |
| Returns information about the model. | |
| tf::Session * | session () const |
| Returns the underlying TensorFlow session. | |
| const tf::SavedModelBundleLite & | savedModel () const |
| Returns the underlying SavedModel. | |
| const tf::GraphDef & | frozenGraph () const |
| Returns the underlying FrozenGraph GraphDef. | |
| bool | isSavedModel () const |
| Returns whether the model was loaded from a SavedModel. | |
| bool | isFrozenGraph () const |
| Returns whether the model was loaded from a FrozenGraph. | |
| int | nInputs () const |
| Returns number of model inputs. | |
| int | nOutputs () const |
| Returns number of model outputs. | |
| std::vector< std::string > | inputNames () const |
| Returns names of model inputs. | |
| std::vector< std::string > | outputNames () const |
| Returns names of model outputs. | |
Protected Member Functions | |
| void | dummyCall () |
| Runs the model once with dummy input to speed up the first inference. | |
Protected Attributes | |
| tf::Session * | session_ = nullptr |
| underlying TensorFlow session | |
| tf::SavedModelBundleLite | saved_model_ |
| underlying SavedModel | |
| tf::GraphDef | graph_def_ |
| underlying FrozenGraph GraphDef | |
| bool | is_saved_model_ = false |
| whether loaded model is from SavedModel | |
| bool | is_frozen_graph_ = false |
| whether loaded model is from FrozenGraph | |
| int | n_inputs_ |
| number of model inputs | |
| int | n_outputs_ |
| number of model outputs | |
| std::vector< std::string > | input_names_ |
| (layer) names of model inputs | |
| std::vector< std::string > | output_names_ |
| (layer) names of model outputs | |
| std::unordered_map< std::string, std::string > | saved_model_node2layer_ |
| mapping between SavedModel node and layer input/output names | |
| std::unordered_map< std::string, std::string > | saved_model_layer2node_ |
| mapping between SavedModel layer and node input/output names | |
Wrapper class for running TensorFlow SavedModels or FrozenGraphs.
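A minimal usage sketch is shown below. The model path and input tensor shape are placeholders, and the tensorflow_cpp namespace qualification is an assumption based on the project name; adjust the include directive to your build setup.

#include <iostream>

#include <model.h>
#include <tensorflow/core/framework/tensor.h>

int main() {
  // load a SavedModel or FrozenGraph from disk (placeholder path)
  tensorflow_cpp::Model model("/path/to/saved_model");

  // build a dummy input tensor (placeholder shape, see getInputShape())
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 224, 224, 3}));
  input.flat<float>().setZero();

  // single-input/single-output convenience overload
  tensorflow::Tensor output = model(input);

  // print shapes and datatypes of all model inputs/outputs
  std::cout << model.getInfoString() << std::endl;
  return 0;
}

The wrapper distinguishes SavedModels and FrozenGraphs internally, as reflected by isSavedModel() and isFrozenGraph().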
Model::Model() [inline]

Creates an uninitialized model.

Model::Model(const std::string &model_path, const bool warmup=false, const bool allow_growth=true, const double per_process_gpu_memory_fraction=0, const std::string &visible_device_list="") [inline]
Creates a model by loading it from disk.
| [in] | model_path | SavedModel or FrozenGraph path |
| [in] | warmup | run a dummy inference to warm up the model |
| [in] | allow_growth | dynamically grow GPU memory usage |
| [in] | per_process_gpu_memory_fraction | maximum GPU memory fraction |
| [in] | visible_device_list | list of GPUs to use, e.g. "0,1" |
Definition at line 69 of file model.h.
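For illustration, the GPU-related options of this constructor might be used as follows; all values are placeholders, not recommendations.

// load on GPU 0 only, cap GPU memory at 50 %, and run a warm-up inference
tensorflow_cpp::Model model("/path/to/saved_model",
                            /*warmup=*/true,
                            /*allow_growth=*/false,
                            /*per_process_gpu_memory_fraction=*/0.5,
                            /*visible_device_list=*/"0");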
Model::dummyCall() [inline, protected]
Runs the model once with dummy input to speed up the first inference.
Definition at line 537 of file model.h.
Model::frozenGraph() [inline]
Returns the underlying FrozenGraph GraphDef.
Model::getInfoString() [inline]
Returns information about the model.
Returns a formatted message containing information about the shape and type of all inputs/outputs of the model.
Definition at line 439 of file model.h.
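Assuming model is an already loaded Model instance, the summary can simply be printed to discover the layer/node names and shapes needed for the named operator() overload.

std::cout << model.getInfoString() << std::endl;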
Model::getInputShape() [inline]
Determines the shape of the model input.
This function works without having to specify input/output names of the model, but is limited to single-input/single-output models.
Definition at line 289 of file model.h.
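A sketch for a single-input model, assuming model is already loaded:

// query shape and datatype of the single model input
std::vector<int> input_shape = model.getInputShape();
tensorflow::DataType input_type = model.getInputType();
for (int dim : input_shape) std::cout << dim << " ";
std::cout << std::endl;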
Model::getInputShapes() [inline]

Determines the shape of the model inputs.
Model::getInputType() [inline]
Determines the datatype of the model input.
This function works without having to specify input/output names of the model, but is limited to single-input/single-output models.
Definition at line 373 of file model.h.
Model::getInputTypes() [inline]

Determines the datatype of the model inputs.
Model::getNodeShape() [inline]
Determines the shape of a model node.
| [in] | name | node name |
Definition at line 269 of file model.h.
Model::getNodeType() [inline]
Determines the datatype of a model node.
| [in] | name | node name |
Definition at line 354 of file model.h.
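A sketch, assuming model is already loaded; the node name below is a placeholder, actual names can be listed via inputNames(), outputNames(), or getInfoString().

// shape and datatype of an arbitrary named node/layer (placeholder name)
std::vector<int> node_shape = model.getNodeShape("input_image");
tensorflow::DataType node_type = model.getNodeType("input_image");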
Model::getOutputShape() [inline]
Determines the shape of the model output.
This function works without having to specify input/output names of the model, but is limited to single-input/single-output models.
Definition at line 309 of file model.h.
Model::getOutputShapes() [inline]

Determines the shape of the model outputs.
Model::getOutputType() [inline]
Determines the datatype of the model output.
This function works without having to specify input/output names of the model, but is limited to single-input/single-output models.
Definition at line 393 of file model.h.
Model::getOutputTypes() [inline]

Determines the datatype of the model outputs.
Model::inputNames() [inline]
Returns names of model inputs.
Model::isFrozenGraph() [inline]
Returns whether the model was loaded from a FrozenGraph.
Model::isLoaded() [inline]
Checks whether the model is loaded already.
Definition at line 143 of file model.h.
Model::isSavedModel() [inline]
Returns whether the model was loaded from a SavedModel.
Model::loadModel() [inline]
Loads a SavedModel or FrozenGraph model from disk.
If warmup is enabled, the model is also run once with dummy inputs after loading in order to speed up the first actual inference call.
| [in] | model_path | SavedModel or FrozenGraph path |
| [in] | warmup | run a dummy inference to warm up the model |
| [in] | allow_growth | dynamically grow GPU memory usage |
| [in] | per_process_gpu_memory_fraction | maximum GPU memory fraction |
| [in] | visible_device_list | list of GPUs to use, e.g. "0,1" |
Definition at line 91 of file model.h.
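A two-step construction sketch with a placeholder path:

tensorflow_cpp::Model model;                 // uninitialized at this point
model.loadModel("/path/to/frozen_graph.pb",  // placeholder path
                /*warmup=*/true);            // run one dummy inference
if (model.isLoaded()) {
  // model is ready for inference
}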
Model::nInputs() [inline]

Returns number of model inputs.
Model::nOutputs() [inline]
Returns number of model outputs.
Model::operator()(const std::vector<std::pair<std::string, tf::Tensor>> &inputs, const std::vector<std::string> &output_names) [inline]
Runs the model.
The input/output names are expected to be the model's layer names, as assigned during model construction. Information about the model can be printed using getInfoString(). For FrozenGraphs, layer names are unknown, so node names are expected instead.
| [in] | inputs | inputs by name |
| [in] | output_names | output names |
Definition at line 163 of file model.h.
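A sketch for a hypothetical two-input/two-output SavedModel, assuming model is already loaded; the layer names and shapes are placeholders that have to match the actual model (see getInfoString()).

// build the named inputs (placeholder layer names and shapes)
tensorflow::Tensor image(tensorflow::DT_FLOAT,
                         tensorflow::TensorShape({1, 128, 128, 3}));
tensorflow::Tensor mask(tensorflow::DT_FLOAT,
                        tensorflow::TensorShape({1, 128, 128, 1}));
image.flat<float>().setZero();
mask.flat<float>().setZero();

std::vector<std::pair<std::string, tensorflow::Tensor>> inputs = {
    {"input_image", image}, {"input_mask", mask}};
std::vector<std::string> output_names = {"detections", "scores"};

// outputs are returned as a map from output name to tensor
std::unordered_map<std::string, tensorflow::Tensor> outputs =
    model(inputs, output_names);
tensorflow::Tensor detections = outputs.at("detections");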
Model::operator()(const std::vector<tf::Tensor> &input_tensors) [inline]
Runs the model.
This version of operator() works without having to specify input/output names of the model, but is limited to FrozenGraph models.
| [in] | input_tensors | input tensors |
Definition at line 237 of file model.h.
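A sketch for a multi-input FrozenGraph, assuming model is already loaded and that the tensors are matched to the graph inputs by their order (shapes are placeholders):

tensorflow::Tensor lidar(tensorflow::DT_FLOAT,
                         tensorflow::TensorShape({1, 64, 512, 4}));
tensorflow::Tensor camera(tensorflow::DT_FLOAT,
                          tensorflow::TensorShape({1, 256, 256, 3}));
lidar.flat<float>().setZero();
camera.flat<float>().setZero();

std::vector<tensorflow::Tensor> input_tensors = {lidar, camera};
std::vector<tensorflow::Tensor> output_tensors = model(input_tensors);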
Model::operator()(const tf::Tensor &input_tensor) [inline]
Runs the model.
This version of operator() works without having to specify input/output names of the model, but is limited to single-input/single-output models.
| [in] | input_tensor | input tensor |
Definition at line 210 of file model.h.
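A sketch for a single-input/single-output model, assuming model is already loaded (placeholder input shape):

tensorflow::Tensor input(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 10}));
input.flat<float>().setConstant(1.0f);

tensorflow::Tensor output = model(input);
float first_value = output.flat<float>()(0);  // access output values via the Eigen map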
Model::outputNames() [inline]
Returns names of model outputs.
Model::savedModel() [inline]
Returns the underlying SavedModel.
Model::session() [inline]

Returns the underlying TensorFlow session.