Addendum 3: I received a correction request regarding the notation from @84d010m08 and merged it. Thank you very much. Currently there seem to be two approaches, TensorFlow Mobile and TensorFlow Lite. Please refer to this article.
Addendum 2: For a proper explanation, please refer to the official documentation: https://www.tensorflow.org/mobile/android_build
Addendum: The content of this article is out of date. Very briefly, the current method is to write the following in build.gradle:
build.gradle
allprojects {
    repositories {
        jcenter()
    }
}

dependencies {
    compile 'org.tensorflow:tensorflow-android:+'
}
By doing so, you can use the TensorFlowInferenceInterface. (See the old article below for details on TensorFlowInferenceInterface.) The method names have changed considerably, so it is best to look them up in the official repository. As of this addendum, the flow is: feed input with feed(), run the graph with run(), and fetch output with fetch(). https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/android/java/org/tensorflow/contrib/android/TensorFlowInferenceInterface.java
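For reference, here is a minimal sketch of the newer feed/run/fetch flow. The node names "input" and "output", the shape {1, 784}, and the model filename model.pb are placeholder assumptions carried over from the MNIST example in the old article below; check the linked source for the exact signatures.

import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

// The constructor loads the model directly from the app's assets directory
TensorFlowInferenceInterface tf = new TensorFlowInferenceInterface(getAssets(), "model.pb");

float[] input = new float[784];     // one flattened 28x28 image
tf.feed("input", input, 1, 784);    // feed(): supply input data and its shape
tf.run(new String[] {"output"});    // run(): execute the graph up to the named outputs
float[] output = new float[10];     // 10 class scores
tf.fetch("output", output);         // fetch(): copy the result out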
The original article, before these addenda, follows below.
There were already some articles on running TensorFlow on Android: http://qiita.com/tomoima525/items/99a2df5cb0559c41647a http://qiita.com/ornew/items/cf7e4478936fbf28b271 http://qiita.com/miyosuda/items/e53ad2efeed0ff040606 However, since there was no Japanese documentation on simply running inference using the TensorFlowInferenceInterface class, I wrote this memo. The method in this article lets you build an app with nothing more than a trained model and Android Studio. There are many good articles on deep learning itself and on how to use TensorFlow, so please look for those separately.
I wrote this in a hurry, so I will clean it up bit by bit when I have time.
The relevant official sources are the following three: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/java/ https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/android https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android
We start by preparing the .jar and .so files. This follows the official documentation below. https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/android
If you don't want to install Bazel, it seems you can get pre-built files from https://ci.tensorflow.org/view/Nightly/job/nightly-android/ (unconfirmed).
First, download the TensorFlow source:
git clone --recurse-submodules https://github.com/tensorflow/tensorflow.git
Next, edit tensorflow/WORKSPACE: uncomment the following section and set the paths to match your environment.
# Uncomment and update the paths in these entries to build the Android demo.
android_sdk_repository(
    name = "androidsdk",
    api_level = 23,
    build_tools_version = "25.0.1",
    # Replace with path to Android SDK on your system
    path = "/home/user/Android/Sdk",
)

# Android NDK r12b is recommended (higher may cause issues with Bazel)
android_ndk_repository(
    name = "androidndk",
    path = "/home/user/Android/Sdk/ndk-bundle",
    api_level = 14)  # This needs to be 14 or higher to compile TensorFlow.
Next, build the native library with Bazel:
bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
--crosstool_top=//external:android/crosstool \
--host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
--cpu=armeabi-v7a
Change the --cpu=armeabi-v7a part to whichever target you need: arm64-v8a, x86, or x86_64.
This generates:
bazel-bin/tensorflow/contrib/android/libtensorflow_inference.so
Next, build the Java interface:
bazel build //tensorflow/contrib/android:android_tensorflow_inference_java
This creates the jar file at:
bazel-bin/tensorflow/contrib/android/libandroid_tensorflow_inference_java.jar
Arrange the files in your app module like this:
libs/
├── libandroid_tensorflow_inference_java.jar
└── armeabi-v7a/
    └── libtensorflow_inference.so
Then add the following to app/build.gradle:
app/build.gradle
sourceSets {
    main {
        jniLibs.srcDirs = ['libs']
    }
}
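For reference, here is a minimal sketch of how these pieces might sit in app/build.gradle. The android block contents and the jar dependency line are assumptions based on a typical Android Studio project, not something from the original article; adjust to your setup.

app/build.gradle (sketch)
android {
    // ... compileSdkVersion, defaultConfig, etc.
    sourceSets {
        main {
            // Picks up libs/armeabi-v7a/libtensorflow_inference.so as a JNI library
            jniLibs.srcDirs = ['libs']
        }
    }
}

dependencies {
    // Makes the Java interface classes in the jar available to the app
    compile files('libs/libandroid_tensorflow_inference_java.jar')
}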
The procedure is: import → create an instance → feed input → run inference → fetch output. First, add the import:
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;
and load the native library:
static {
System.loadLibrary("tensorflow_inference");
}
Writing these makes the class usable. Next, create an instance:
public TensorFlowInferenceInterface inferenceInterface;
inferenceInterface = new TensorFlowInferenceInterface();
Then load the model:
inferenceInterface.initializeTensorFlow(getAssets(), "PATH_TO_MODEL");
This loads the model from app/src/main/assets/model.pb. Change model.pb and "PATH_TO_MODEL" to match your environment.
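As an unverified note: the official Android demo appears to pass asset paths in the file:///android_asset/ form, so the call would look something like this (model.pb is a placeholder name):

inferenceInterface.initializeTensorFlow(getAssets(), "file:///android_asset/model.pb");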
int[] shape = {1, 784};
inferenceInterface.fillNodeFloat("NAME_OF_INPUT", shape, input);  // input is a float[]
This feeds data into the trained model. "NAME_OF_INPUT" is the name of the placeholder at the input. In TensorFlow's MNIST tutorial the input is defined as
x = tf.placeholder(tf.float32, [None, 784],name="input")
It is "input". Similarly, int [] shape = {1,784} is part of [None, 784]. Also, float [] input is the vector string you want to input, or in MNIST, the vector string of handwritten characters. Please change it according to your own environment.
Besides Float, there are variants for other input types, such as inferenceInterface.fillNodeInt() and inferenceInterface.fillNodeByte(). Look them up here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/android/java/org/tensorflow/contrib/android/TensorFlowInferenceInterface.java
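For example, a minimal sketch of the Int variant; the node name and shape here are hypothetical:

int[] intInput = new int[784];  // integer input data
inferenceInterface.fillNodeInt("NAME_OF_INT_INPUT", new int[] {1, 784}, intInput);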
inferenceInterface.runInference(new String[] {"NAME_OF_OUTPUT"});
This computes the output from the input. "NAME_OF_OUTPUT" is the name given on the TensorFlow side, as in
y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2,name="output")
where the name is "output". Since there may be multiple output nodes, the argument is a String[].
float[] output = new float[10];  // sized to the output layer
inferenceInterface.readNodeFloat("NAME_OF_OUTPUT", output);
This receives the output of the node specified by "NAME_OF_OUTPUT" into output. As with input, there are also variants for Int and Double output.
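To tie the steps together, here is a rough end-to-end sketch, not polished sample code. The class name MnistClassifier is mine, and the node names "input" and "output", the model path, and the 784/10 sizes are assumptions carried over from the MNIST examples above.

import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class MnistClassifier {
    static {
        // Load libtensorflow_inference.so placed under libs/<ABI>/
        System.loadLibrary("tensorflow_inference");
    }

    private final TensorFlowInferenceInterface inferenceInterface;

    public MnistClassifier(AssetManager assets) {
        inferenceInterface = new TensorFlowInferenceInterface();
        // model.pb is assumed to sit in app/src/main/assets/
        inferenceInterface.initializeTensorFlow(assets, "file:///android_asset/model.pb");
    }

    // pixels: one flattened 28x28 handwritten-digit image (784 floats)
    public float[] classify(float[] pixels) {
        inferenceInterface.fillNodeFloat("input", new int[] {1, 784}, pixels);
        inferenceInterface.runInference(new String[] {"output"});
        float[] scores = new float[10];  // one score per digit class
        inferenceInterface.readNodeFloat("output", scores);
        return scores;
    }
}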
The explanation is quite rough, so it is probably hard to follow; I will revise it bit by bit when I have time. I have prepared sample code of my own, but parts such as the input handling were made by copy-pasting, so I will post it after reworking it to avoid any copyright problems.
The official documentation for the Android side is sparse, so I may have misunderstood some parts; if you find mistakes, please leave a comment.