Get Output Tensor
/**
* @brief get output tensor for given name.
* @param session given session.
* @param name given name. if NULL, return first output.
* @return tensor if found, NULL otherwise.
*/
Tensor* getSessionOutput(const Session* session, const char* name);
/**
* @brief get all output tensors.
* @param session given session.
* @return all output tensors mapped with name.
*/
const std::map<std::string, Tensor*>& getSessionOutputAll(const Session* session) const;
Interpreter provides two ways to get an output tensor: getSessionOutput, which returns a single output tensor by name, and getSessionOutputAll, which returns the map of all output tensors keyed by name. When there is only one output tensor, you can pass NULL as the name when calling getSessionOutput.
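For models with several named outputs, getSessionOutputAll lets you iterate all of them. The sketch below assumes an interpreter and session have already been created and run; the actual output names depend on your model.

```cpp
#include <MNN/Interpreter.hpp>
#include <cstdio>

// Sketch only: assumes `interpreter` and `session` already exist
// and the session has already been run.
void dumpOutputs(MNN::Interpreter* interpreter, MNN::Session* session) {
    // Single output: look it up by name, or pass nullptr for the first one.
    MNN::Tensor* out = interpreter->getSessionOutput(session, nullptr);
    if (out == nullptr) {
        return; // no output found
    }
    // Multiple outputs: iterate the name -> tensor map.
    const auto& outputs = interpreter->getSessionOutputAll(session);
    for (const auto& pair : outputs) {
        printf("output %s: %d elements\n",
               pair.first.c_str(), pair.second->elementSize());
    }
}
```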
Read Data
auto outputTensor = interpreter->getSessionOutput(session, NULL);
auto score = outputTensor->host<float>()[0];
auto index = outputTensor->host<float>()[1];
// ...
The simplest way to read data is to access the tensor's host pointer directly, but this only works on the CPU backend; on other backends the data lives on the device and must be accessed through deviceId. In addition, the caller has to handle the difference between the NC4HW4 and NHWC data layouts.
For non-CPU backends, or if you are not familiar with the data layouts, use the copy data interfaces instead.
Copy Data
NCHW example:
auto outputTensor = interpreter->getSessionOutput(session, NULL);
auto nchwTensor = new Tensor(outputTensor, Tensor::CAFFE);
outputTensor->copyToHostTensor(nchwTensor);
auto score = nchwTensor->host<float>()[0];
auto index = nchwTensor->host<float>()[1];
// ...
delete nchwTensor;
NC4HW4 example:
auto outputTensor = interpreter->getSessionOutput(session, NULL);
auto nc4hw4Tensor = new Tensor(outputTensor, Tensor::CAFFE_C4);
outputTensor->copyToHostTensor(nc4hw4Tensor);
auto score = nc4hw4Tensor->host<float>()[0];
auto index = nc4hw4Tensor->host<float>()[1];
// ...
delete nc4hw4Tensor;
NHWC example:
auto outputTensor = interpreter->getSessionOutput(session, NULL);
auto nhwcTensor = new Tensor(outputTensor, Tensor::TENSORFLOW);
outputTensor->copyToHostTensor(nhwcTensor);
auto score = nhwcTensor->host<float>()[0];
auto index = nhwcTensor->host<float>()[1];
// ...
delete nhwcTensor;
When copying data this way, the only thing you need to pay attention to is the data layout of the tensor you create with new: copyToHostTensor handles the layout conversion (if needed) and the copy between backends (if needed).
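As a practical note, copyToHostTensor returns a bool indicating whether the copy succeeded, so it is worth checking, and the host tensor can be managed with a smart pointer instead of raw new/delete. A minimal sketch, assuming `outputTensor` comes from getSessionOutput:

```cpp
#include <MNN/Tensor.hpp>
#include <cstdio>
#include <memory>

// Sketch only: `outputTensor` is assumed to come from getSessionOutput().
void readOutput(MNN::Tensor* outputTensor) {
    std::unique_ptr<MNN::Tensor> nchwTensor(
        new MNN::Tensor(outputTensor, MNN::Tensor::CAFFE));
    if (!outputTensor->copyToHostTensor(nchwTensor.get())) {
        printf("copyToHostTensor failed\n");
        return;
    }
    auto score = nchwTensor->host<float>()[0];
    auto index = nchwTensor->host<float>()[1];
    printf("score=%f index=%f\n", score, index);
} // nchwTensor is freed automatically
```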