TensorFlow is a multipurpose machine learning framework. TensorFlow can be used anywhere from training huge models across clusters in the cloud, to running models locally on an embedded system like your phone.

This codelab uses TensorFlow Lite to run an image recognition model on an Android device.

What you will build

A simple camera app that runs a TensorFlow image recognition program to identify flowers.

License: Free to use

This codelab will be using Colaboratory and Android Studio.

Open the Colab which shows how to train a classifier with Keras to recognize flowers using transfer learning, convert the classifier to TFLite and download the converted classifier to be used in the mobile app.

Clone the Git repository

The following command will clone the Git repository containing the files required for this codelab:

git clone https://github.com/tensorflow/examples.git

Next, go to the directory where you just cloned the repository. This is where you will be working for the rest of this codelab:

cd examples

Install Android Studio

If you don't have it installed already, install Android Studio 3.0 or later.

Open the project with Android Studio

Open the project in Android Studio by taking the following steps:

  1. Open Android Studio. After it loads, select "Open an existing Android Studio project" from the popup.

  1. In the file selector, choose examples/lite/codelabs/flower_classification/android/start from your working directory.
  1. The first time you open the project, you will get a "Gradle Sync" popup asking whether to use the Gradle wrapper. Click "OK".

Add the TensorFlow Lite model to the assets folder

Copy the TensorFlow Lite model model.tflite and the label file labels.txt that you downloaded from the Colab earlier into the assets folder at lite/codelabs/flower_classification/android/start/app/src/main/assets/.

Update build.gradle

  1. Go to build.gradle of the app module and find this block.
dependencies {
    // TODO: Add TFLite dependencies
}
  1. Add TensorFlow Lite to the app's dependencies.
implementation('org.tensorflow:tensorflow-lite:0.0.0-nightly') { changing = true }
implementation('org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly') { changing = true }
implementation('org.tensorflow:tensorflow-lite-support:0.0.0-nightly') { changing = true }
  1. Then find this code block.
android {
  ...
  // TODO: Add an option to avoid compressing TF Lite model file
  ...
}
  1. Add the following lines to this code block to prevent Android from compressing TensorFlow Lite model files when generating the app binary. You must add this option for the model to work.
aaptOptions {
  noCompress "tflite"
}
  1. Click Sync Now to apply the changes.
  1. Open ClassifierFloatMobileNet.java and find this code block.
public class ClassifierFloatMobileNet extends Classifier {
  ...
  // TODO: Specify model.tflite as the model file and labels.txt as the label file
  ...
}
  1. Specify model.tflite and labels.txt as the files to be used for inference.
@Override
protected String getModelPath() {
  return "model.tflite";
}

@Override
protected String getLabelPath() {
  return "labels.txt";
}
  1. Open Classifier.java (Classifier is the parent class of ClassifierFloatMobileNet). This is the main source file we will be working with. Find this code block.
public abstract class Classifier {
  ...
  // TODO: Declare a TFLite interpreter
  ...
}
  1. Declare a TFLite interpreter in the Classifier class.
protected Interpreter tflite;
  1. Next, find the Classifier class constructor.
protected Classifier(Activity activity, Device device, int numThreads) throws IOException {
  ...
  // TODO: Create a TFLite interpreter instance
  ...
}
  1. Now, create an instance of the interpreter (a consolidated sketch of the surrounding constructor follows this list):
tflite = new Interpreter(tfliteModel, tfliteOptions);
  1. We will save actually running TensorFlow Lite inference for the next section, but as good practice we want to close the interpreter after use. Find this code block:
public void close() {
    ...
    // TODO: Close the interpreter
    ...
}
  1. Dispose of the interpreter instance by adding:
tflite.close();
tflite = null;
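
For reference, here is a rough consolidated sketch of how the pieces above fit together inside the Classifier constructor. It is only a sketch: the variable names tfliteModel and tfliteOptions and the use of FileUtil (from the TensorFlow Lite Support library added in build.gradle) are assumptions about the surrounding code, and the actual source in Classifier.java may differ.

// Sketch only -- not the codelab's exact source.
// Assumes imports: android.app.Activity, java.io.IOException, java.nio.MappedByteBuffer,
// java.util.List, org.tensorflow.lite.Interpreter, org.tensorflow.lite.support.common.FileUtil.
protected Classifier(Activity activity, Device device, int numThreads) throws IOException {
  // Memory-map the model file from the assets folder, using the path from getModelPath().
  MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(activity, getModelPath());

  // Load the label file that pairs human-readable names with the model's output indices.
  List<String> labels = FileUtil.loadLabels(activity, getLabelPath());

  // Configure the interpreter, e.g. with the requested number of threads.
  Interpreter.Options tfliteOptions = new Interpreter.Options();
  tfliteOptions.setNumThreads(numThreads);

  // Create the interpreter instance (the line you added in the step above).
  tflite = new Interpreter(tfliteModel, tfliteOptions);
}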

TensorFlow Lite supports several hardware accelerators to speed up inference on your mobile device. GPU is one of the accelerators that TensorFlow Lite can leverage through a delegate mechanism and it is fairly easy to use.
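
Conceptually, the delegate mechanism amounts to creating a GpuDelegate, registering it on the Interpreter.Options before the interpreter is constructed, and closing it when you are done. The steps below wire this into the app; the snippet here is only an illustrative sketch (modelBuffer is a placeholder for the memory-mapped model, not a variable from the codelab):

// Illustrative sketch of the delegate mechanism, not the codelab's exact code.
// Requires org.tensorflow.lite.gpu.GpuDelegate from the tensorflow-lite-gpu dependency.
GpuDelegate gpuDelegate = new GpuDelegate();

Interpreter.Options options = new Interpreter.Options();
options.addDelegate(gpuDelegate);  // route supported ops to the GPU

Interpreter interpreter = new Interpreter(modelBuffer, options);

// ... run inference ...

// Delegates hold native resources, so close them along with the interpreter.
interpreter.close();
gpuDelegate.close();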

  1. First, open strings.xml, which lives at the following path (Android Studio menu path: app -> res -> values -> strings.xml):
examples/lite/codelabs/flower_classification/android/app/src/main/res/values/strings.xml
  1. Find the following code block.
<string-array name="tfe_ic_devices" translatable="false">
   <item>CPU</item>
   <!-- TODO: Add GPU  -->
   
</string-array>
  1. Add a "GPU" string to the XML file so that "GPU" shows up in the UI.
<item>GPU</item>
  1. Next, go back to Classifier.java and find this code block.
public abstract class Classifier {
  ...
  /** Optional GPU delegate for acceleration. */
  // TODO: Declare a GPU delegate
  ...
}
  1. Add a field to the Classifier class.
private GpuDelegate gpuDelegate = null;
  1. Now we handle the case where the user chooses to use the GPU. Find this code block in the Classifier constructor.
protected Classifier(Activity activity, Device device, int numThreads) throws IOException {
  ...
  switch (device) {
    case GPU:
      // TODO: Create a GPU delegate instance and add it to the interpreter options
    ...
 }
}
  1. We add the GPU delegate to the TFLite options so that it can be wired up to the interpreter.
gpuDelegate = new GpuDelegate();
tfliteOptions.addDelegate(gpuDelegate);
  1. Don't forget to close the GPU delegate after use. Find the code block:
public void close() {
  ...
  // TODO: Close the GPU delegate
  ...
}
  1. Add the code to close the GPU delegate:
if (gpuDelegate != null) {
  gpuDelegate.close();
  gpuDelegate = null;
}

That's it. In Android Studio, click Run to start the build and install process as before.

Now, if you swipe up the bottom sheet in the UI and choose GPU instead of CPU, you should see much faster inference.

Here are some links for more information: