Get Input Tensor

```cpp
/**
 * @brief get input tensor for given name.
 * @param session given session.
 * @param name given name. if NULL, return first input.
 * @return tensor if found, NULL otherwise.
 */
Tensor* getSessionInput(const Session* session, const char* name);
/**
 * @brief get all input tensors.
 * @param session given session.
 * @return all input tensors mapped with name.
 */
const std::map<std::string, Tensor*>& getSessionInputAll(const Session* session) const;
```

Interpreter provides two methods for getting input tensors: getSessionInput returns a single input tensor, and getSessionInputAll returns the map of all input tensors keyed by name.

When there is only one input tensor, you can pass NULL as the name to getSessionInput to get it.

Fill Data

```cpp
auto inputTensor = interpreter->getSessionInput(session, NULL);
inputTensor->host<float>()[0] = 1.f;
```

The simplest way to fill a tensor is to assign values through its host pointer directly, but this only works on the CPU backend; other backends keep their data behind deviceId. In addition, the caller has to handle the differences between the NC4HW4 and NHWC data layouts.

For non-CPU backends, or for users unfamiliar with the data layouts, the copy interfaces should be used instead.

Copy Data

NCHW example:

```cpp
auto inputTensor = interpreter->getSessionInput(session, NULL);
auto nchwTensor = new Tensor(inputTensor, Tensor::CAFFE);
// nchwTensor->host<float>()[x] = ...
inputTensor->copyFromHostTensor(nchwTensor);
delete nchwTensor;
```

NC4HW4 example:

```cpp
auto inputTensor = interpreter->getSessionInput(session, NULL);
auto nc4hw4Tensor = new Tensor(inputTensor, Tensor::CAFFE_C4);
// nc4hw4Tensor->host<float>()[x] = ...
inputTensor->copyFromHostTensor(nc4hw4Tensor);
delete nc4hw4Tensor;
```

NHWC example:

```cpp
auto inputTensor = interpreter->getSessionInput(session, NULL);
auto nhwcTensor = new Tensor(inputTensor, Tensor::TENSORFLOW);
// nhwcTensor->host<float>()[x] = ...
inputTensor->copyFromHostTensor(nhwcTensor);
delete nhwcTensor;
```

When copying data this way, the only thing you need to pay attention to is the data layout of the tensor you create with new. copyFromHostTensor takes care of the layout conversion (if needed) and the copy between backends (if needed).

Image Processing

MNN provides a CV module to simplify image processing for users and to avoid pulling in image processing libraries such as OpenCV or libyuv.

Currently, the CV module only supports the CPU backend.

Image Process Config

```cpp
struct Config
{
    Filter filterType = NEAREST;
    ImageFormat sourceFormat = RGBA;
    ImageFormat destFormat = RGBA;
    // Only valid if the dest type is float
    float mean[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    float normal[4] = {1.0f, 1.0f, 1.0f, 1.0f};
};
```

In CV::ImageProcess::Config:

  • Specify the input and output formats with sourceFormat and destFormat; RGBA, RGB, BGR, GRAY, BGRA and YUV_NV21 are currently supported
  • Specify the interpolation type with filterType; NEAREST, BILINEAR and BICUBIC are currently supported
  • Specify mean normalization with mean and normal; these settings are ignored when the destination data type is not a floating point type

Image Transform Matrix

CV::Matrix is ported from the Skia library used by Android. For usage, please refer to Skia's Matrix documentation: https://skia.org/user/api/SkMatrix_Reference.

Note that the Matrix set on ImageProcess is the transformation matrix from the target image to the source image. In practice, you can build the transformation from the source image to the target image and then take its inverse. For example:

```cpp
// source image: 1280x720
// target image: rotate 90 degrees counterclockwise,
// then shrink to 1/10 of the original size,
// giving 72x128
Matrix matrix;
// reset to the identity matrix
matrix.setIdentity();
// scale down into the [0,1] interval:
matrix.postScale(1.0f / 1280, 1.0f / 720);
// rotate 90 degrees around the center point (0.5, 0.5)
matrix.postRotate(90, 0.5f, 0.5f);
// scale back up to 72x128
matrix.postScale(72.0f, 128.0f);
// invert: now a target image -> source image transformation matrix
matrix.invert(&matrix);
```

Image Process Instance

MNN uses CV::ImageProcess for image processing. ImageProcess holds a number of internal caches; to avoid repeatedly allocating and releasing memory, it is recommended to create one instance and cache it. The convert method of ImageProcess is used to fill the tensor data.

```cpp
/*
 * source: source image
 * iw: source image width
 * ih: source image height
 * stride: the number of bytes per row of the source image after alignment
 *         (if no alignment is needed, set to 0, equivalent to iw*bpp)
 * dest: target tensor, may be of uint8 or float type
 */
ErrorCode convert(const uint8_t* source, int iw, int ih, int stride, Tensor* dest);
```

Complete Example

```cpp
auto input  = net->getSessionInput(session, NULL);
auto output = net->getSessionOutput(session, NULL);
auto dims   = input->shape();
int bpp     = dims[1];
int size_h  = dims[2];
int size_w  = dims[3];
auto inputPatch = argv[2];
FREE_IMAGE_FORMAT f = FreeImage_GetFileType(inputPatch);
FIBITMAP* bitmap = FreeImage_Load(f, inputPatch);
auto newBitmap = FreeImage_ConvertTo32Bits(bitmap);
auto width  = FreeImage_GetWidth(newBitmap);
auto height = FreeImage_GetHeight(newBitmap);
FreeImage_Unload(bitmap);
Matrix trans;
// Dst -> [0, 1]
trans.postScale(1.0 / size_w, 1.0 / size_h);
// Flip Y (images decoded by FreeImage are stored bottom-up)
trans.postScale(1.0, -1.0, 0.0, 0.5);
// [0, 1] -> Src
trans.postScale(width, height);
ImageProcess::Config config;
config.filterType = NEAREST;
float mean[3]    = {103.94f, 116.78f, 123.68f};
float normals[3] = {0.017f, 0.017f, 0.017f};
::memcpy(config.mean, mean, sizeof(mean));
::memcpy(config.normal, normals, sizeof(normals));
config.sourceFormat = RGBA;
config.destFormat   = BGR;
std::shared_ptr<ImageProcess> pretreat(ImageProcess::create(config));
pretreat->setMatrix(trans);
pretreat->convert((uint8_t*)FreeImage_GetScanLine(newBitmap, 0), width, height, 0, input);
net->runSession(session);
```

Variable Dimension

```cpp
/**
 * @brief resize given tensor.
 * @param tensor given tensor.
 * @param dims new dims. at most 6 dims.
 */
void resizeTensor(Tensor* tensor, const std::vector<int>& dims);
/**
 * @brief resize given tensor by nchw.
 * @param batch / N.
 * @param channel / C.
 * @param height / H.
 * @param width / W.
 */
void resizeTensor(Tensor* tensor, int batch, int channel, int height, int width);
/**
 * @brief call this function to get tensors ready. output tensor buffer (host or deviceId) should be retrieved
 *        after resize of any input tensor.
 * @param session given session.
 */
void resizeSession(Session* session);
```

When the input tensor's dimensions are unknown or need to change, call resizeTensor to update its dimension information. This typically happens when the model does not fix the input dimensions or they vary between inferences. After updating the dimension information of all affected tensors, call resizeSession to perform pre-inference, which allocates or reuses memory. An example is as follows:

```cpp
auto inputTensor = interpreter->getSessionInput(session, NULL);
interpreter->resizeTensor(inputTensor, {newBatch, newChannel, newHeight, newWidth});
interpreter->resizeSession(session);
inputTensor->copyFromHostTensor(imageTensor);
interpreter->runSession(session);
```