This is the ninth post in my series on training and running Cloud AutoML models on the edge. It follows up on my earlier post on training a multi-label image classification model, and covers how to run the trained model in a Python environment.
If you haven't read that post yet, I'd suggest doing so first:
Following up on my earlier blogs on training and using TensorFlow models on the edge in Python, in this eighth post in the series I'll be talking about how to train a multi-label image classification model that can be used with TensorFlow.
If you’ve been following my blogs lately, you might have noticed that I’ve been writing a lot on edge machine learning, for both mobile and desktop.
While building models and writing code that runs inference on them is one thing, it's equally important to package your solution in a way that lets your end-users actually use it.
This is easy to do as a mobile developer, since tools like Android Studio and Xcode take that burden off you and handle the packaging of the code themselves.
Following up on my earlier blogs on training and using TensorFlow models on the edge in Python, in this seventh blog in the series I wanted to cover a topic that generally isn't talked about enough: optimizing the performance and latency of your TensorFlow models.
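Before optimizing anything, it helps to have a baseline number to optimize against. As a quick illustration (not from the post itself), here's a minimal sketch of a latency-measurement helper; the `measure_latency` name and its defaults are my own:

```python
import time
import statistics

def measure_latency(fn, *args, warmup=5, runs=50):
    """Median wall-clock latency of fn(*args), in milliseconds."""
    # Warm-up calls let caches, JITs, and lazy initialization settle first.
    for _ in range(warmup):
        fn(*args)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - start) * 1000.0)
    # Median is more robust to one-off scheduler hiccups than the mean.
    return statistics.median(samples)
```

You'd wrap your model's inference call (e.g. a lambda that invokes the interpreter) and compare the median before and after each optimization.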
This is the sixth blog in the series and a follow-up to my previous post on running TensorFlow Lite image classification models in Python. If you haven't read that post, you can read it here:
Following up on my earlier blogs on running edge models in Python, this fifth blog in the series on training and running TensorFlow models explores how to run a TensorFlow Lite image classification model in Python. If you haven't read my earlier post on training a model for this task, you can read it here:
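For context, running a TFLite classifier in Python boils down to feeding an image tensor through `tf.lite.Interpreter` and ranking the output scores. A minimal sketch, assuming a `model.tflite` file and a label list of your own (the `classify` and `top_k` helper names here are hypothetical, not from the post):

```python
import numpy as np

def top_k(scores, labels, k=3):
    """Return the k highest-scoring (label, score) pairs."""
    idx = np.argsort(scores)[::-1][:k]
    return [(labels[i], float(scores[i])) for i in idx]

def classify(model_path, image, labels, k=3):
    """Run one image through a TFLite classifier and rank the results.

    `image` is an HxWxC array already resized to the model's input shape.
    """
    import tensorflow as tf  # imported lazily so top_k stays dependency-free

    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    # Add the batch dimension and match the model's expected dtype.
    batch = np.expand_dims(image, axis=0).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], batch)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return top_k(scores, labels, k)
```

The preprocessing details (input size, normalization) depend on how the model was exported, so check the input details dict before feeding real images.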
This is the fourth blog in my series on training and running TensorFlow models in a Python environment. If you haven't read my earlier blogs on AutoML and machine learning on edge devices, I'd suggest doing so before continuing with this post.
Here's the post in which I outline how to train a custom object detection model of your own in just a few hours:
If you’ve read my earlier blogs centered on AutoML and machine learning on edge devices, you know how easy it is to train and test a custom ML model with little to no prerequisite knowledge.
However, just training an ML model isn't enough. You also need to know how to use it to make predictions. Maybe you need to build a cross-platform app using tools like Qt, or maybe you want to host your model on a server to serve requests via an API. …
This is the follow-up to my previous article on using Google Analytics for Firebase. While GA lets you measure and understand user behavior, Crashlytics lets you track and keep a log of all the crashes that have happened on devices running your app, regardless of whether users choose to report those crashes or not.
If you haven’t read my earlier post on using GA on Android, you can read it here:
If you're working with Firebase as a mobile developer, you might also want to go through a series I wrote earlier on…
As a mobile developer, getting user feedback on which features to implement next and which areas of your app to improve is an essential part of the development process. And this is true not just for improving app features, but also for making sure that users who aren't happy with the app don't abandon it for good!
Traditional ways of acquiring feedback relied on survey forms or in-person interviews. While these are still relevant, many developers can't afford the time or budget for such intensive feedback sessions and are instead turning their attention to the available alternatives. …