[PYTHON] Google Edge TPU Inference Overview

The Edge TPU is only compatible with TensorFlow Lite models. Therefore, you need to train your TensorFlow model, convert it to TensorFlow Lite, and compile it for the Edge TPU. You can then run the model on the Edge TPU using any of the options described on this page. (For more information on creating models compatible with the Edge TPU, see TensorFlow models on the Edge TPU: https://coral.ai/docs/edgetpu/models-intro/)
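As a rough sketch of that workflow, the conversion step might look like the following. This is not the official tutorial code; the SavedModel path and the random calibration data are placeholders. The Edge TPU requires a fully integer-quantized model, and the resulting `.tflite` file is then compiled with the `edgetpu_compiler` command-line tool.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder calibration data; use real samples from your training set.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# Convert a trained SavedModel (path is hypothetical) to a fully
# integer-quantized TensorFlow Lite model, as the Edge TPU requires.
converter = tf.lite.TFLiteConverter.from_saved_model('my_model/')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model.tflite', 'wb') as f:
    f.write(converter.convert())

# Then compile for the Edge TPU from a shell:
#   edgetpu_compiler model.tflite   -> produces model_edgetpu.tflite
```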


Perform inference in Python

If you are using Python to perform inference, you have two options.

Use the TensorFlow Lite API:

This is the traditional approach to running TensorFlow Lite models. It gives you full control over data input and output, so you can run inference on a wide variety of model architectures. If you've used TensorFlow Lite before, your Interpreter code can run your model on the Edge TPU with very few changes. (For more information, see Run inference with TensorFlow Lite in Python: https://coral.ai/docs/edgetpu/tflite-python/)
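To illustrate, here is a minimal sketch of that approach. The only Edge TPU-specific change is loading the Edge TPU delegate when creating the `Interpreter`; the model path is a placeholder, and the delegate library name varies by platform (`libedgetpu.so.1` on Linux).

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Create an Interpreter that delegates supported ops to the Edge TPU.
interpreter = tflite.Interpreter(
    model_path='model_edgetpu.tflite',  # placeholder: your compiled model
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy uint8 input matching the model's input shape (models compiled
# for the Edge TPU take quantized uint8 tensors).
input_data = np.random.randint(
    0, 256, size=input_details[0]['shape'], dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))
```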

Use the Edge TPU API:

This is a Python library built on top of the TensorFlow Lite C++ API that makes it easier to run inference with image classification and object detection models. It is useful if you have no experience with the TensorFlow Lite API and just want to perform image classification or object detection, because it abstracts away the code needed to prepare the input tensor and parse the results. It also provides a unique API for performing accelerated transfer learning of classification models on the Edge TPU. (For more information, see the Edge TPU Python API overview: https://coral.ai/docs/edgetpu/api-intro/)
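For comparison with the Interpreter approach above, a classification sketch with this library can be much shorter. The file names below are placeholders, and exact method names can differ between versions of the library.

```python
from edgetpu.classification.engine import ClassificationEngine
from PIL import Image

# ClassificationEngine hides tensor preparation and result parsing.
engine = ClassificationEngine('model_edgetpu.tflite')  # placeholder path

# Classify a PIL image and print the top 3 (label_id, score) results.
image = Image.open('image.jpg')  # placeholder image
for label_id, score in engine.classify_with_image(image, top_k=3):
    print(label_id, score)
```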

Perform inference in C++

If you want to write code in C++, you should use the TensorFlow Lite C++ API, just as you would when running TensorFlow Lite on other platforms. However, you need to make some changes to your code using the APIs in the `edgetpu.h` or `edgetpu_c.h` files. Basically, you just need to register the Edge TPU device as the external context of the `Interpreter` object. (For more information, see Run inference with TensorFlow Lite in C++: https://coral.ai/docs/edgetpu/tflite-cpp/)

The Coral Accelerator Module will be available on the Coral website in early 2020. If you are interested in the details of Google Coral products or in volume purchases (volume discounts), you are welcome to contact Coral's overseas distributor Gravitylink: https://store.gravitylink.com/global
