Using GluonCV models in TensorFlow (Lite)

Problem description:


I'm working on deploying models on embedded devices, making performance comparisons and the like. This is an internship, so I'm really constrained on time and can't go about re-implementing / re-training models; I have to use what is available (I actually asked my supervisor about this explicitly). Since TorchScript is not really as mature as TF Lite, at least from what I've gathered, I'm going with the latter. It's going well with pre-2018 models, but many SotA models like ResNeSt only have code in PyTorch. GluonCV, however, seems to provide a nice selection of models in its zoo and is based on MXNet, so I thought there'd be a way of exporting those to a SavedModel, a Keras .h5 or whatever, but I found none after a lot of searching. I found MMdnn, but trying it on JSON-exported models fails during conversion to IR (I'm attaching the output at the bottom; it seems that MXNet JSON and Gluon JSON are not the same format).
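For context, the export that produces the symbol JSON looks roughly like the sketch below (assuming gluoncv.utils.export_block with preprocess=True, which would explain the _defaultpreprocess1_* inputs in the trace; the exact arguments may differ):

    # Minimal sketch: exporting a GluonCV model to MXNet symbol JSON + params.
    # Exact arguments are assumptions; preprocess=True is what inserts the
    # DefaultPreprocess block (_defaultpreprocess1_init_mean / _init_scale)
    # that shows up in the mmtoir trace below.
    from gluoncv import model_zoo
    from gluoncv.utils import export_block

    net = model_zoo.get_model('resnest200', pretrained=True)
    # Writes resnest200-symbol.json and resnest200-0000.params
    export_block('resnest200', net, data_shape=(257, 257, 3),
                 preprocess=True, layout='HWC')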


Has anybody else worked on exporting Gluon models into the wild? How did it go?

Thanks!

Output of mmtoir -f mxnet -n resnest200-symbol.json -d resnest200 --inputShape 3,257,257:

/home/kmfrick/Gluon_Tinkering/venv/lib/python3.8/site-packages/mxnet/module/base_module.py:55: UserWarning: You created Module with Module(..., label_names=['softmax_label']) but input with name 'softmax_label' is not found in symbol.list_arguments(). Did you mean one of:
    data
    _defaultpreprocess1_init_mean
    _defaultpreprocess1_init_scale
  warnings.warn(msg)
Warning: MXNet Parser has not supported operator null with name data.
Warning: convert the null operator with name [data] into input layer.
Warning: MXNet Parser has not supported operator null with name _defaultpreprocess1_init_scale.
Warning: convert the null operator with name [_defaultpreprocess1_init_scale] into input layer.
terminate called after throwing an instance of 'dmlc::Error'
  what():  [09:24:49] src/c_api/c_api_symbolic.cc:540: InferShapeKeyword argument name data not found.
Candidate arguments:
    [0]_defaultpreprocess1_init_scale

Stack trace:
  [bt] (0) /home/kmfrick/Gluon_Tinkering/venv/lib/python3.8/site-packages/mxnet/libmxnet.so(+0x307d3b) [0x7f0127eb9d3b]
  [bt] (1) /home/kmfrick/Gluon_Tinkering/venv/lib/python3.8/site-packages/mxnet/libmxnet.so(+0x33a3755) [0x7f012af55755]


GluonCV is an excellent MXNet-based toolkit for computer vision! There are several options for deploying GluonCV models to embedded runtimes:

  1. You can use ONNX to convert models to other runtimes, for example CoreML for iOS or NNAPI for Android (see the sketch after this list).
  2. You can use TVM.
  3. You can use the SageMaker Neo + DLR runtime, which is probably the easiest solution. The Git repository includes examples for Android.
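For option 1, a minimal sketch of the ONNX route using MXNet's built-in exporter (file names and input shape are taken from the question; treat the exact calls as an assumption, since the API location differs across MXNet versions):

    # Minimal sketch: exported Gluon symbol/params -> ONNX via MXNet's exporter.
    # From ONNX you can then target other runtimes (e.g. onnx-tf towards TF Lite).
    import numpy as np
    from mxnet.contrib import onnx as onnx_mxnet

    onnx_path = onnx_mxnet.export_model(
        'resnest200-symbol.json',   # symbol graph exported from Gluon
        'resnest200-0000.params',   # matching parameter file (assumed name)
        [(1, 3, 257, 257)],         # NCHW input shape, matches --inputShape 3,257,257
        np.float32,
        'resnest200.onnx',
    )
    print('Wrote', onnx_path)

Whether this succeeds for ResNeSt depends on whether its operators are covered by the ONNX exporter, which ties into the caveat below.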


Keep in mind that compilation and portability from one framework to another depend on operator coverage; it may not work for exotic or very recent models.