If you’re already an experienced ML developer, chances are you have your own model that can perform operations such as Text Recognition and Face Detection.
So why might you want to host your model using Firebase ML Kit? Well, here are the advantages of doing so:
- Reduce your app’s binary size
- Choose whether to host your model on-device or on the cloud… or both
- Automatic handling of multiple model sources for graceful fallback
- Automatic downloading of new versions of your model
By no means am I an ML expert, so I don’t know just how much of an advantage each of these is (I’d love to know, though). They do seem pretty neat.
If this is your first time hearing about Firebase ML Kit, you can check out my introduction on it right here.
Info on Model Storage and Security
To make your model available to ML Kit, you can store your model remotely on the Firebase Console, bundle it with your app, or do both. By doing both, you ensure your model is always up to date (thanks to the copy hosted on the Console), while your ML features still work on a poor network connection using the model bundled with the app.
Regardless of where you store it, your model will be stored in the standard serialized protobuf format in local storage. In theory, anyone will be able to copy your model, but in practice, most models are so application-specific and obfuscated by optimisations that the risk is similar to that of competitors disassembling and reusing your code.
(I couldn’t word this very differently from the official docs, so LINK)
For Android API 20 and lower, the model is downloaded to a directory named com.google.firebase.ml.custom.models in app-private internal storage. For API 21 and up, the model is downloaded to a directory that is excluded from automatic backup.
Implementation
Prerequisites
Make sure your app is already connected to Firebase. If you’re not sure how to do that, here’s a really quick way.
Dependencies and Setup
implementation 'com.google.firebase:firebase-ml-model-interpreter:16.0.0'
Add this dependency to your app-level build.gradle file.
<uses-permission android:name="android.permission.INTERNET" />
Add this permission to your AndroidManifest.xml
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" android:maxSdkVersion="18" /> <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" android:maxSdkVersion="18" />
If you’re targeting Android API 18 and lower, add this to your manifest as well.
Make the Model Available
On the Cloud
Go to the Firebase Console > ML Kit > Custom, and add your model there.
On-Device (Asset)
android {
    // ...
    aaptOptions {
        noCompress "tflite"
    }
}
Copy the model file to your app’s assets/ folder, then add this to your app-level build.gradle so the model file isn’t compressed when the app is built.
On-Device (Downloaded into Local Storage)
Just download it at an appropriate point in your app; you’ll reference its location later when you load it. A rough sketch of what that could look like is below.
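How you do the download is up to you — the snippet below is only a minimal sketch of one way to do it, pulling a .tflite file into app-private storage with HttpURLConnection. The URL and filename are placeholders, and you’d want to run this off the main thread.

private File downloadModel(Context context, String urlString) throws IOException {
    // Hypothetical destination file in app-private internal storage
    File modelFile = new File(context.getFilesDir(), "mymodel.tflite");
    HttpURLConnection connection = (HttpURLConnection) new URL(urlString).openConnection();
    try (InputStream in = connection.getInputStream();
         FileOutputStream out = new FileOutputStream(modelFile)) {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    } finally {
        connection.disconnect();
    }
    // Pass modelFile.getPath() to setFilePath() when registering the local source
    return modelFile;
}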
Load the Model
FirebaseCloudModelSource cloudSource = new FirebaseCloudModelSource.Builder("my_cloud_model")
        .enableModelUpdates(true)
        .setInitialDownloadConditions(conditions)
        .setUpdatesDownloadConditions(conditions)
        .build();
FirebaseModelManager.getInstance().registerCloudModelSource(cloudSource);
If you hosted your model on the cloud, build a FirebaseCloudModelSource passing in the name you gave to your model in the console when you uploaded it.
FirebaseModelDownloadConditions conditions = new FirebaseModelDownloadConditions.Builder()
        .requireWifi()
        .requireCharging()
        .requireDeviceIdle()
        .build();
You can also set conditions for when the model should be downloaded initially and whenever a new update is available.
FirebaseLocalModelSource localSource = new FirebaseLocalModelSource.Builder("my_local_model")
        .setAssetFilePath("mymodel.tflite") // Or setFilePath() if you downloaded it from your host
        .build();
FirebaseModelManager.getInstance().registerLocalModelSource(localSource);
If you stored your model on-device, build a FirebaseLocalModelSource, passing in a name for the model and its location: setAssetFilePath() if it’s bundled as an asset, or setFilePath() if it was downloaded into local storage.
FirebaseModelOptions options = new FirebaseModelOptions.Builder()
        .setCloudModelName("my_cloud_model")
        .setLocalModelName("my_local_model")
        .build();
FirebaseModelInterpreter firebaseInterpreter = FirebaseModelInterpreter.getInstance(options);
Then build a FirebaseModelOptions, passing in your cloud and/or local model names (whichever ones you made available), and get a FirebaseModelInterpreter from it, which will use the cloud model or, if that’s not available, the local model.
Specify Model Input and Output
FirebaseModelInputOutputOptions inputOutputOptions = new FirebaseModelInputOutputOptions.Builder()
        .setInputFormat(0, FirebaseModelDataType.BYTE, new int[]{1, 640, 480, 3})
        .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 1000})
        .build();
The model’s input and output use one or more multidimensional arrays containing byte, int, long, or float values. Using a FirebaseModelInputOutputOptions, you define the data type and dimensions of each of those arrays.
For example (from the official docs), an image classification model might take as input a 1x640x480x3 array of bytes, representing a single 640×480 truecolor (24-bit) image, and produce as output a list of 1000 float values, each representing the probability that the image is a member of one of the 1000 categories the model predicts.
Perform Inference on Input Data
byte[][][][] input = new byte[1][640][480][3];
input = getYourInputData();

FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()
        .add(input) // add() as many input arrays as your model requires
        .build();

firebaseInterpreter.run(inputs, inputOutputOptions)
        .addOnSuccessListener(
                new OnSuccessListener<FirebaseModelOutputs>() {
                    @Override
                    public void onSuccess(FirebaseModelOutputs result) {
                        // ...
                    }
                })
        .addOnFailureListener(
                new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        // Task failed with an exception
                        // ...
                    }
                });
Prepare your model inputs (one possible way is sketched below), create a FirebaseModelInputs with them, then call run on your interpreter.
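The getYourInputData() call above is a placeholder. Here’s a hypothetical version of it for the example input format, scaling a Bitmap to 640x480 and copying its RGB channels into the byte array; adjust the dimensions, channel order and data type to whatever your own model expects.

private byte[][][][] bitmapToInput(Bitmap bitmap) {
    // Scale to the example model's expected size (width 640, height 480)
    Bitmap scaled = Bitmap.createScaledBitmap(bitmap, 640, 480, true);
    byte[][][][] input = new byte[1][640][480][3];
    for (int x = 0; x < 640; x++) {
        for (int y = 0; y < 480; y++) {
            int pixel = scaled.getPixel(x, y);
            input[0][x][y][0] = (byte) Color.red(pixel);
            input[0][x][y][1] = (byte) Color.green(pixel);
            input[0][x][y][2] = (byte) Color.blue(pixel);
        }
    }
    return input;
}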
float[][] output = result.<float[][]>getOutput(0);
float[] probabilities = output[0];
In your success callback, call getOutput(), specifying the index and type of the output as well. From here on out, what you do depends on your model and its intended use. For example, if you’re performing classification, you could map the indexes to the labels they represent, as sketched below.
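As a rough sketch (not from the official docs), assuming a labels file bundled in assets with one label per line (a hypothetical labels.txt) and that this runs inside an Activity so getAssets() is available, you could pick out the most probable label like this:

try {
    // Read the labels bundled with the app (hypothetical labels.txt, one label per line)
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("labels.txt")));
    List<String> labels = new ArrayList<>();
    String line;
    while ((line = reader.readLine()) != null) {
        labels.add(line);
    }
    reader.close();

    // Find the index with the highest probability and map it to its label
    int best = 0;
    for (int i = 1; i < probabilities.length; i++) {
        if (probabilities[i] > probabilities[best]) {
            best = i;
        }
    }
    Log.d("MLKit", "Top label: " + labels.get(best) + " (" + probabilities[best] + ")");
} catch (IOException e) {
    Log.e("MLKit", "Couldn't read the labels file", e);
}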
Finding out a model’s input and output formats
Lutz Roeder made a tool called Netron which lets you open a TensorFlow Lite model and find out its inputs and outputs right there. A literal lifesaver when using custom models you didn’t make yourself.
Common mistakes
You might run into a few of the errors I ran into when first trying this out. They’ll frustrate you, but I’m here to save you.
If you see an exception that says Internal Error when you try to use your model, your input or output format is probably wrong for the model; use the tool above to find out your model’s formats. Another possible cause is a missing SHA-1 fingerprint for your app on the Firebase Console, although with those being added automatically now, that’s pretty unlikely.
Get the Source Code
I made an app that demonstrates the use of a custom model: DenseNet, downloaded from TensorFlow’s list of hosted models. The model is an image classification one and is stored locally as an asset.
I use it to try and classify a drawable of a panda and it tells me with full confidence that the image shows a lab coat. Great!
Conclusion
As little as I know about making an ML model, this looks pretty neat. A streamlined way of getting the model into the app, though I have no basis for comparison. This does however make me want to learn more about ML.
This is the final entry in the ML Kit mini-course. If you haven’t already done so, why not check it out?