tensorflow_cpp 1.0.6
tensorflow_cpp::Model Class Reference

Wrapper class for running TensorFlow SavedModels or FrozenGraphs.

#include <model.h>

Public Member Functions

 Model ()
 Creates an uninitialized model.
 
 Model (const std::string &model_path, const bool warmup=false, const bool allow_growth=true, const double per_process_gpu_memory_fraction=0, const std::string &visible_device_list="")
 Creates a model by loading it from disk.
 
void loadModel (const std::string &model_path, const bool warmup=false, const bool allow_growth=true, const double per_process_gpu_memory_fraction=0, const std::string &visible_device_list="")
 Loads a SavedModel or FrozenGraph model from disk.
 
bool isLoaded () const
 Checks whether the model is loaded already.
 
std::unordered_map< std::string, tf::Tensor > operator() (const std::vector< std::pair< std::string, tf::Tensor > > &inputs, const std::vector< std::string > &output_names) const
 Runs the model.
 
tf::Tensor operator() (const tf::Tensor &input_tensor) const
 Runs the model.
 
std::vector< tf::Tensor > operator() (const std::vector< tf::Tensor > &input_tensors) const
 Runs the model.
 
std::vector< int > getNodeShape (const std::string &name)
 Determines the shape of a model node.
 
std::vector< int > getInputShape ()
 Determines the shape of the model input.
 
std::vector< int > getOutputShape ()
 Determines the shape of the model output.
 
std::vector< std::vector< int > > getInputShapes ()
 Determines the shape of the model inputs.
 
std::vector< std::vector< int > > getOutputShapes ()
 Determines the shape of the model outputs.
 
tf::DataType getNodeType (const std::string &name)
 Determines the datatype of a model node.
 
tf::DataType getInputType ()
 Determines the datatype of the model input.
 
tf::DataType getOutputType ()
 Determines the datatype of the model output.
 
std::vector< tf::DataType > getInputTypes ()
 Determines the datatype of the model inputs.
 
std::vector< tf::DataType > getOutputTypes ()
 Determines the datatype of the model outputs.
 
std::string getInfoString ()
 Returns information about the model.
 
tf::Session * session () const
 Returns the underlying TensorFlow session.
 
const tf::SavedModelBundleLite & savedModel () const
 Returns the underlying SavedModel.
 
const tf::GraphDef & frozenGraph () const
 Returns the underlying FrozenGraph GraphDef.
 
bool isSavedModel () const
 Returns whether loaded model is from SavedModel.
 
bool isFrozenGraph () const
 Returns whether loaded model is from FrozenGraph.
 
int nInputs () const
 Returns number of model inputs.
 
int nOutputs () const
 Returns number of model outputs.
 
std::vector< std::string > inputNames () const
 Returns names of model inputs.
 
std::vector< std::string > outputNames () const
 Returns names of model outputs.
 

Protected Member Functions

void dummyCall ()
 Runs the model once with dummy input to speed up the first inference.
 

Protected Attributes

tf::Session * session_ = nullptr
 underlying TensorFlow session
 
tf::SavedModelBundleLite saved_model_
 underlying SavedModel
 
tf::GraphDef graph_def_
 underlying FrozenGraph GraphDef
 
bool is_saved_model_ = false
 whether loaded model is from SavedModel
 
bool is_frozen_graph_ = false
 whether loaded model is from FrozenGraph
 
int n_inputs_
 number of model inputs
 
int n_outputs_
 number of model outputs
 
std::vector< std::string > input_names_
 (layer) names of model inputs
 
std::vector< std::string > output_names_
 (layer) names of model outputs
 
std::unordered_map< std::string, std::string > saved_model_node2layer_
 mapping between SavedModel node and layer input/output names
 
std::unordered_map< std::string, std::string > saved_model_layer2node_
 mapping between SavedModel layer and node input/output names
 

Detailed Description

Wrapper class for running TensorFlow SavedModels or FrozenGraphs.

Definition at line 51 of file model.h.
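A minimal usage sketch of the class (hedged: the model path `my_model` and the 4-dimensional input shape are hypothetical, the example assumes a single-input/single-output DT_FLOAT model, and the include path may differ depending on how the library is installed):

```cpp
#include <iostream>

#include <model.h>  // tensorflow_cpp

namespace tf = tensorflow;

int main() {
  // load a SavedModel directory (or a *.pb FrozenGraph) with warmup enabled
  tensorflow_cpp::Model model("my_model", /*warmup=*/true);
  if (!model.isLoaded()) return 1;

  // print shapes/types of all inputs and outputs
  std::cout << model.getInfoString() << std::endl;

  // single-input/single-output shortcut: run on an all-zero dummy tensor
  // (hypothetical image-like shape; query getInputShape() for the real one)
  tf::Tensor input(model.getInputType(), tf::TensorShape({1, 224, 224, 3}));
  input.flat<float>().setZero();  // assumes a DT_FLOAT input
  tf::Tensor output = model(input);
  std::cout << output.DebugString() << std::endl;

  return 0;
}
```

For multi-input/multi-output models, use the named `operator()` overload together with `inputNames()` and `outputNames()` instead of the single-tensor shortcut.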

Constructor & Destructor Documentation

◆ Model() [1/2]

tensorflow_cpp::Model::Model ( )
inline

Creates an uninitialized model.

Definition at line 57 of file model.h.

{}

◆ Model() [2/2]

tensorflow_cpp::Model::Model ( const std::string & model_path,
const bool warmup = false,
const bool allow_growth = true,
const double per_process_gpu_memory_fraction = 0,
const std::string & visible_device_list = "" )
inline

Creates a model by loading it from disk.

Parameters
    [in]  model_path                       SavedModel or FrozenGraph path
    [in]  warmup                           run dummy inference to warm up
    [in]  allow_growth                     dynamically grow GPU usage
    [in]  per_process_gpu_memory_fraction  maximum GPU memory fraction
    [in]  visible_device_list              list of GPUs to use, e.g. "0,1"

Definition at line 69 of file model.h.

{
  loadModel(model_path, warmup, allow_growth, per_process_gpu_memory_fraction,
            visible_device_list);
}

Member Function Documentation

◆ dummyCall()

void tensorflow_cpp::Model::dummyCall ( )
inlineprotected

Runs the model once with dummy input to speed up the first inference.

Definition at line 537 of file model.h.

{
  // infer input shapes/types to create dummy input tensors
  auto input_shapes = getInputShapes();
  auto input_types = getInputTypes();
  std::vector<tf::Tensor> input_dummies;
  for (int k = 0; k < n_inputs_; k++) {
    std::vector<long int> dummy_shape(input_shapes[k].begin(),
                                      input_shapes[k].end());
    // replace -1 (batch size dimension, None in Python) with 1
    std::replace(dummy_shape.begin(), dummy_shape.end(), -1l, 1l);
    auto dummy_tensor_shape =
        tf::TensorShape(tf::gtl::ArraySlice<long int>(dummy_shape));
    tf::Tensor dummy(input_types[k], dummy_tensor_shape);
    // initialize to zero, based on type
    switch (input_types[k]) {
      case tf::DT_FLOAT:
        dummy.flat<float>().setZero();
        break;
      case tf::DT_DOUBLE:
        dummy.flat<double>().setZero();
        break;
      case tf::DT_INT32:
        dummy.flat<tf::int32>().setZero();
        break;
      case tf::DT_UINT32:
        dummy.flat<tf::uint32>().setZero();
        break;
      case tf::DT_UINT8:
        dummy.flat<tf::uint8>().setZero();
        break;
      case tf::DT_UINT16:
        dummy.flat<tf::uint16>().setZero();
        break;
      case tf::DT_INT16:
        dummy.flat<tf::int16>().setZero();
        break;
      case tf::DT_INT8:
        dummy.flat<tf::int8>().setZero();
        break;
      case tf::DT_STRING:
        dummy.flat<tf::tstring>().setZero();
        break;
      case tf::DT_COMPLEX64:
        dummy.flat<tf::complex64>().setZero();
        break;
      case tf::DT_COMPLEX128:
        dummy.flat<tf::complex128>().setZero();
        break;
      case tf::DT_INT64:
        dummy.flat<tf::int64>().setZero();
        break;
      case tf::DT_UINT64:
        dummy.flat<tf::uint64>().setZero();
        break;
      case tf::DT_BOOL:
        dummy.flat<bool>().setZero();
        break;
      case tf::DT_QINT8:
        dummy.flat<tf::qint8>().setZero();
        break;
      case tf::DT_QUINT8:
        dummy.flat<tf::quint8>().setZero();
        break;
      case tf::DT_QUINT16:
        dummy.flat<tf::quint16>().setZero();
        break;
      case tf::DT_QINT16:
        dummy.flat<tf::qint16>().setZero();
        break;
      case tf::DT_QINT32:
        dummy.flat<tf::qint32>().setZero();
        break;
      case tf::DT_BFLOAT16:
        dummy.flat<tf::bfloat16>().setZero();
        break;
      case tf::DT_HALF:
        dummy.flat<Eigen::half>().setZero();
        break;
    }
    input_dummies.push_back(dummy);
  }

  // run dummy inference
  volatile auto output_dummies = (*this)(input_dummies);
}

◆ frozenGraph()

const tf::GraphDef & tensorflow_cpp::Model::frozenGraph ( ) const
inline

Returns the underlying FrozenGraph GraphDef.

Returns
const tf::GraphDef& FrozenGraph GraphDef

Definition at line 473 of file model.h.

{
  return graph_def_;
}

◆ getInfoString()

std::string tensorflow_cpp::Model::getInfoString ( )
inline

Returns information about the model.

Returns a formatted message containing information about the shape and type of all inputs/outputs of the model.

Returns
std::string formatted info message

Definition at line 439 of file model.h.

{
  if (is_saved_model_) {
    return getSavedModelInfoString(saved_model_);
  } else if (is_frozen_graph_) {
    return getGraphInfoString(graph_def_);
  } else {
    return "";
  }
}

◆ getInputShape()

std::vector< int > tensorflow_cpp::Model::getInputShape ( )
inline

Determines the shape of the model input.

This function works without having to specify input/output names of the model, but is limited to single-input models.

Returns
std::vector<int> node shape

Definition at line 289 of file model.h.

{
  if (n_inputs_ != 1) {
    throw std::runtime_error(
        "'std::vector<int> tensorflow_cpp::Model::getInputShape()' is only "
        "available for single-input models. Found " +
        std::to_string(n_inputs_) + " inputs.");
  }

  return getNodeShape(input_names_[0]);
}

◆ getInputShapes()

std::vector< std::vector< int > > tensorflow_cpp::Model::getInputShapes ( )
inline

Determines the shape of the model inputs.

Returns
std::vector<std::vector<int>> node shapes

Definition at line 326 of file model.h.

{
  std::vector<std::vector<int>> shapes;
  for (const auto& name : input_names_) shapes.push_back(getNodeShape(name));

  return shapes;
}

◆ getInputType()

tf::DataType tensorflow_cpp::Model::getInputType ( )
inline

Determines the datatype of the model input.

This function works without having to specify input/output names of the model, but is limited to single-input models.

Returns
tf::DataType node datatype

Definition at line 373 of file model.h.

{
  if (n_inputs_ != 1) {
    throw std::runtime_error(
        "'tf::DataType tensorflow_cpp::Model::getInputType()' is only "
        "available for single-input models. Found " +
        std::to_string(n_inputs_) + " inputs.");
  }

  return getNodeType(input_names_[0]);
}

◆ getInputTypes()

std::vector< tf::DataType > tensorflow_cpp::Model::getInputTypes ( )
inline

Determines the datatype of the model inputs.

Returns
std::vector<tf::DataType> node datatypes

Definition at line 410 of file model.h.

{
  std::vector<tf::DataType> types;
  for (const auto& name : input_names_) types.push_back(getNodeType(name));

  return types;
}

◆ getNodeShape()

std::vector< int > tensorflow_cpp::Model::getNodeShape ( const std::string & name)
inline

Determines the shape of a model node.

Parameters
    [in]  name  node name
Returns
std::vector<int> node shape

Definition at line 269 of file model.h.

{
  if (is_saved_model_) {
    return getSavedModelNodeShape(saved_model_,
                                  saved_model_layer2node_.at(name));
  } else if (is_frozen_graph_) {
    return getGraphNodeShape(graph_def_, name);
  } else {
    return {};
  }
}

◆ getNodeType()

tf::DataType tensorflow_cpp::Model::getNodeType ( const std::string & name)
inline

Determines the datatype of a model node.

Parameters
    [in]  name  node name
Returns
tf::DataType node datatype

Definition at line 354 of file model.h.

{
  if (is_saved_model_) {
    return getSavedModelNodeType(saved_model_,
                                 saved_model_layer2node_.at(name));
  } else if (is_frozen_graph_) {
    return getGraphNodeType(graph_def_, name);
  } else {
    return tf::DataType();
  }
}

◆ getOutputShape()

std::vector< int > tensorflow_cpp::Model::getOutputShape ( )
inline

Determines the shape of the model output.

This function works without having to specify input/output names of the model, but is limited to single-output models.

Returns
std::vector<int> node shape

Definition at line 309 of file model.h.

{
  if (n_outputs_ != 1) {
    throw std::runtime_error(
        "'std::vector<int> tensorflow_cpp::Model::getOutputShape()' is only "
        "available for single-output models. Found " +
        std::to_string(n_outputs_) + " outputs.");
  }

  return getNodeShape(output_names_[0]);
}

◆ getOutputShapes()

std::vector< std::vector< int > > tensorflow_cpp::Model::getOutputShapes ( )
inline

Determines the shape of the model outputs.

Returns
std::vector<std::vector<int>> node shapes

Definition at line 339 of file model.h.

{
  std::vector<std::vector<int>> shapes;
  for (const auto& name : output_names_) shapes.push_back(getNodeShape(name));

  return shapes;
}

◆ getOutputType()

tf::DataType tensorflow_cpp::Model::getOutputType ( )
inline

Determines the datatype of the model output.

This function works without having to specify input/output names of the model, but is limited to single-output models.

Returns
tf::DataType node datatype

Definition at line 393 of file model.h.

{
  if (n_outputs_ != 1) {
    throw std::runtime_error(
        "'tf::DataType tensorflow_cpp::Model::getOutputType()' is only "
        "available for single-output models. Found " +
        std::to_string(n_outputs_) + " outputs.");
  }

  return getNodeType(output_names_[0]);
}

◆ getOutputTypes()

std::vector< tf::DataType > tensorflow_cpp::Model::getOutputTypes ( )
inline

Determines the datatype of the model outputs.

Returns
std::vector<tf::DataType> node datatypes

Definition at line 423 of file model.h.

{
  std::vector<tf::DataType> types;
  for (const auto& name : output_names_) types.push_back(getNodeType(name));

  return types;
}

◆ inputNames()

std::vector< std::string > tensorflow_cpp::Model::inputNames ( ) const
inline

Returns names of model inputs.

Returns
std::vector<std::string> model input names

Definition at line 520 of file model.h.

{
  return input_names_;
}

◆ isFrozenGraph()

bool tensorflow_cpp::Model::isFrozenGraph ( ) const
inline

Returns whether loaded model is from FrozenGraph.

Returns
true if loaded from FrozenGraph
false if not loaded from FrozenGraph

Definition at line 493 of file model.h.

{
  return is_frozen_graph_;
}

◆ isLoaded()

bool tensorflow_cpp::Model::isLoaded ( ) const
inline

Checks whether the model is loaded already.

Returns
true if model is loaded
false if model is not loaded

Definition at line 143 of file model.h.

{
  bool is_loaded = bool(session_);

  return is_loaded;
}

◆ isSavedModel()

bool tensorflow_cpp::Model::isSavedModel ( ) const
inline

Returns whether loaded model is from SavedModel.

Returns
true if loaded from SavedModel
false if not loaded from SavedModel

Definition at line 483 of file model.h.

{
  return is_saved_model_;
}

◆ loadModel()

void tensorflow_cpp::Model::loadModel ( const std::string & model_path,
const bool warmup = false,
const bool allow_growth = true,
const double per_process_gpu_memory_fraction = 0,
const std::string & visible_device_list = "" )
inline

Loads a SavedModel or FrozenGraph model from disk.

After the model has been loaded, and if warmup is enabled, it is also run once with dummy inputs in order to speed up the first actual inference call.

Parameters
    [in]  model_path                       SavedModel or FrozenGraph path
    [in]  warmup                           run dummy inference to warm up
    [in]  allow_growth                     dynamically grow GPU usage
    [in]  per_process_gpu_memory_fraction  maximum GPU memory fraction
    [in]  visible_device_list              list of GPUs to use, e.g. "0,1"

Definition at line 91 of file model.h.

{
  is_frozen_graph_ = (model_path.substr(model_path.size() - 3) == ".pb");
  is_saved_model_ = !is_frozen_graph_;

  // load model
  if (is_frozen_graph_) {
    graph_def_ = loadFrozenGraph(model_path);
    session_ = createSession(allow_growth, per_process_gpu_memory_fraction,
                             visible_device_list);
    loadGraphIntoSession(session_, graph_def_);
  } else {
    saved_model_ = loadSavedModel(model_path, allow_growth,
                                  per_process_gpu_memory_fraction,
                                  visible_device_list);
    session_ = saved_model_.GetSession();
  }

  // automatically find inputs and outputs
  if (is_frozen_graph_) {
    input_names_ = getGraphInputNames(graph_def_);
    output_names_ = getGraphOutputNames(graph_def_);
  } else {
    input_names_ = getSavedModelInputNames(saved_model_, true);
    output_names_ = getSavedModelOutputNames(saved_model_, true);
    const auto input_nodes_ = getSavedModelInputNames(saved_model_, false);
    const auto output_nodes_ = getSavedModelOutputNames(saved_model_, false);
    for (int k = 0; k < input_names_.size(); k++) {
      saved_model_node2layer_[input_nodes_[k]] = input_names_[k];
      saved_model_layer2node_[input_names_[k]] = input_nodes_[k];
    }
    for (int k = 0; k < output_names_.size(); k++) {
      saved_model_node2layer_[output_nodes_[k]] = output_names_[k];
      saved_model_layer2node_[output_names_[k]] = output_nodes_[k];
    }
  }
  n_inputs_ = input_names_.size();
  n_outputs_ = output_names_.size();

  // run dummy inference to warm up
  if (warmup) dummyCall();
}

◆ nInputs()

int tensorflow_cpp::Model::nInputs ( ) const
inline

Returns number of model inputs.

Returns
int number of inputs

Definition at line 502 of file model.h.

{
  return n_inputs_;
}

◆ nOutputs()

int tensorflow_cpp::Model::nOutputs ( ) const
inline

Returns number of model outputs.

Returns
int number of outputs

Definition at line 511 of file model.h.

{
  return n_outputs_;
}

◆ operator()() [1/3]

std::unordered_map< std::string, tf::Tensor > tensorflow_cpp::Model::operator() ( const std::vector< std::pair< std::string, tf::Tensor > > & inputs,
const std::vector< std::string > & output_names ) const
inline

Runs the model.

The input/output names are expected to be set to the model layer names given during model construction. Information about the model can be printed using getInfoString. For FrozenGraphs, layer names are unknown and node names are expected.

Parameters
    [in]  inputs        inputs by name
    [in]  output_names  output names
Returns
std::unordered_map<std::string, tf::Tensor> outputs by name

Definition at line 163 of file model.h.

{
  // properly set input/output names for session->Run()
  std::vector<std::pair<std::string, tf::Tensor>> input_nodes;
  std::vector<std::string> output_node_names;
  if (is_saved_model_) {
    for (const auto& input : inputs)
      input_nodes.push_back(
          {saved_model_layer2node_.find(input.first)->second, input.second});
    for (const auto& name : output_names)
      output_node_names.push_back(saved_model_layer2node_.find(name)->second);
  } else if (is_frozen_graph_) {
    input_nodes = inputs;
    output_node_names = output_names;
  } else {
    return {};
  }

  // run model
  tf::Status status;
  std::vector<tf::Tensor> output_tensors;
  status = session_->Run(input_nodes, output_node_names, {}, &output_tensors);

  // build outputs
  std::unordered_map<std::string, tf::Tensor> outputs;
  if (status.ok()) {
    for (int k = 0; k < output_tensors.size(); k++)
      outputs[output_names[k]] = output_tensors[k];
  } else {
    throw std::runtime_error("Failed to run model: " + status.ToString());
  }

  return outputs;
}

◆ operator()() [2/3]

std::vector< tf::Tensor > tensorflow_cpp::Model::operator() ( const std::vector< tf::Tensor > & input_tensors) const
inline

Runs the model.

This version of operator() works without having to specify input/output names of the model, but is limited to FrozenGraph models.

Parameters
    [in]  input_tensors  input tensors
Returns
std::vector<tf::Tensor> output tensors

Definition at line 237 of file model.h.

{
  if (input_tensors.size() != n_inputs_) {
    throw std::runtime_error(
        "Model has " + std::to_string(n_inputs_) + " inputs, but " +
        std::to_string(input_tensors.size()) + " input tensors were given");
  }

  // assign inputs in default order
  std::vector<std::pair<std::string, tf::Tensor>> inputs;
  for (int k = 0; k < n_inputs_; k++)
    inputs.push_back({input_names_[k], input_tensors[k]});

  // run model
  auto outputs = (*this)(inputs, output_names_);

  // return output tensors in default order
  std::vector<tf::Tensor> output_tensors;
  for (const auto& name : output_names_)
    output_tensors.push_back(outputs[name]);

  return output_tensors;
}

◆ operator()() [3/3]

tf::Tensor tensorflow_cpp::Model::operator() ( const tf::Tensor & input_tensor) const
inline

Runs the model.

This version of operator() works without having to specify input/output names of the model, but is limited to single-input/single-output models.

Parameters
    [in]  input_tensor  input tensor
Returns
tf::Tensor output tensor

Definition at line 210 of file model.h.

{
  if (n_inputs_ != 1 || n_outputs_ != 1) {
    throw std::runtime_error(
        "'tf::Tensor tensorflow_cpp::Model::operator()(const tf::Tensor&)' is "
        "only available for single-input/single-output models. Found " +
        std::to_string(n_inputs_) + " inputs and " +
        std::to_string(n_outputs_) + " outputs.");
  }

  // run model
  auto outputs =
      (*this)({{input_names_[0], input_tensor}}, {output_names_[0]});

  return outputs[output_names_[0]];
}

◆ outputNames()

std::vector< std::string > tensorflow_cpp::Model::outputNames ( ) const
inline

Returns names of model outputs.

Returns
std::vector<std::string> model output names

Definition at line 529 of file model.h.

{
  return output_names_;
}

◆ savedModel()

const tf::SavedModelBundleLite & tensorflow_cpp::Model::savedModel ( ) const
inline

Returns the underlying SavedModel.

Returns
const tf::SavedModelBundleLite& SavedModel

Definition at line 464 of file model.h.

{
  return saved_model_;
}

◆ session()

tf::Session * tensorflow_cpp::Model::session ( ) const
inline

Returns the underlying TensorFlow session.

Returns
tf::Session* session

Definition at line 455 of file model.h.

{
  return session_;
}

Member Data Documentation

◆ graph_def_

tf::GraphDef tensorflow_cpp::Model::graph_def_
protected

underlying FrozenGraph GraphDef

Definition at line 637 of file model.h.

◆ input_names_

std::vector<std::string> tensorflow_cpp::Model::input_names_
protected

(layer) names of model inputs

Definition at line 662 of file model.h.

◆ is_frozen_graph_

bool tensorflow_cpp::Model::is_frozen_graph_ = false
protected

whether loaded model is from FrozenGraph

Definition at line 647 of file model.h.

◆ is_saved_model_

bool tensorflow_cpp::Model::is_saved_model_ = false
protected

whether loaded model is from SavedModel

Definition at line 642 of file model.h.

◆ n_inputs_

int tensorflow_cpp::Model::n_inputs_
protected

number of model inputs

Definition at line 652 of file model.h.

◆ n_outputs_

int tensorflow_cpp::Model::n_outputs_
protected

number of model outputs

Definition at line 657 of file model.h.

◆ output_names_

std::vector<std::string> tensorflow_cpp::Model::output_names_
protected

(layer) names of model outputs

Definition at line 667 of file model.h.

◆ saved_model_

tf::SavedModelBundleLite tensorflow_cpp::Model::saved_model_
protected

underlying SavedModel

Definition at line 632 of file model.h.

◆ saved_model_layer2node_

std::unordered_map<std::string, std::string> tensorflow_cpp::Model::saved_model_layer2node_
protected

mapping between SavedModel layer and node input/output names

Definition at line 677 of file model.h.

◆ saved_model_node2layer_

std::unordered_map<std::string, std::string> tensorflow_cpp::Model::saved_model_node2layer_
protected

mapping between SavedModel node and layer input/output names

Definition at line 672 of file model.h.

◆ session_

tf::Session* tensorflow_cpp::Model::session_ = nullptr
protected

underlying TensorFlow session

Definition at line 627 of file model.h.


The documentation for this class was generated from the following file:

model.h