Some time ago I developed a Word Search game solver Android application using Firebase ML Kit.
It was an interesting journey, discovering the features of a framework that lets developers use AI capabilities without knowing all the rocket science behind them.
Specifically, I used the document text recognition feature to extract the text from a word search game image.
After the text recognition phase, the output was cleaned up and arranged into a matrix to be processed by the solver algorithm. The algorithm looked for all the words formed by grouping letters according to the rules of the game: contiguous letters in all the straight directions (vertical, horizontal…
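The grid search described above can be sketched in Kotlin. This is a minimal illustration of the idea, not the original app's code: the function name `findWords` and the dictionary-lookup approach are my own assumptions.

```kotlin
// Minimal sketch of the solver idea: walk every cell of the letter
// matrix in each of the 8 straight directions and collect the
// substrings that appear in a given dictionary of words.
fun findWords(grid: List<String>, dictionary: Set<String>): Set<String> {
    val rows = grid.size
    val cols = grid[0].length
    // The 8 straight directions: horizontal, vertical and the diagonals.
    val directions = listOf(
        0 to 1, 0 to -1, 1 to 0, -1 to 0,
        1 to 1, 1 to -1, -1 to 1, -1 to -1
    )
    val found = mutableSetOf<String>()
    for (r in 0 until rows) for (c in 0 until cols) {
        for ((dr, dc) in directions) {
            val word = StringBuilder()
            var (y, x) = r to c
            // Extend the run of contiguous letters cell by cell.
            while (y in 0 until rows && x in 0 until cols) {
                word.append(grid[y][x])
                if (word.toString() in dictionary) found += word.toString()
                y += dr
                x += dc
            }
        }
    }
    return found
}
```

Because every direction is scanned from every starting cell, reversed words (e.g. "GOD" hidden inside "DOG") are found for free.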
Some months ago, during a boring Twitter reading session, I spotted a tweet from Reto Meier:
This looked like a really interesting challenge and a way to have fun with Android development and algorithms. There was also a comment from Hoi Lam:
Adding a little ML Kit to it was a really good idea: it would allow capturing the input for the algorithm in a modern, AI-fashioned way.
So I decided to take on this journey, which I completed in roughly a couple of weeks.
Then I decided to dedicate more time to the project, to learn ML Kit better, and I wrote this article to make it useful for other devs to come. …
A week ago I read Sara Robinson’s article about how to add Computer Vision to an iOS app.
It is a great article describing a cool idea: developing a serverless application that combines the Firebase APIs (Cloud Storage, Cloud Functions and Cloud Firestore) with a cloud service like the Google Vision API, which offers Machine Learning powered image recognition.
I started thinking about a cool application using these services, but I’m an Android developer. So I tried to bring the same functionality to an Android app written in Kotlin.
This app allows you to upload a picture to Firebase Cloud Storage. The upload triggers a Cloud Function that sends the picture to the Vision API and retrieves the info we need. This information is then stored back in a Cloud Firestore database. Our Android application listens for modifications of the Firestore database and updates a View accordingly. …
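That round trip can be simulated with a minimal, SDK-free Kotlin sketch. The class and function names here (`FakeDatabase`, `onImageUploaded`) are illustrative stand-ins of my own, not the Firebase SDK's API:

```kotlin
// SDK-free sketch of the data flow described above:
// upload -> Cloud Function labels the image -> database write -> app listener.

typealias Listener = (Map<String, Any>) -> Unit

// Stand-in for Cloud Firestore: stores a listener list and notifies
// every listener whenever a document is written.
class FakeDatabase {
    private val listeners = mutableListOf<Listener>()
    fun addSnapshotListener(l: Listener) { listeners += l }
    fun write(doc: Map<String, Any>) = listeners.forEach { it(doc) }
}

// Stand-in for the Cloud Function: "labels" the uploaded image
// (as the Vision API would) and writes the result to the database.
fun onImageUploaded(name: String, db: FakeDatabase) {
    val labels = listOf("label-for-$name") // Vision API response stub
    db.write(mapOf("image" to name, "labels" to labels))
}
```

In the real app, `FakeDatabase` would be replaced by a Cloud Firestore snapshot listener, and `onImageUploaded` by the deployed Cloud Function calling the Vision API; the listener callback is where the View gets updated.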
Dev log of the Teamwork 2 project for the Udacity VR Nanodegree
Udacity Teamwork is a super cool opportunity to “learn by doing” what you are studying. So when I received the email about the start of VR Teamwork 2, I immediately answered YES! “Colors” was the theme!
I have some experience in organizing teams, so I decided to be the project leader for Team Sorrento :) (I chose the Italian name to honor my origins).
My team of four was composed entirely of beginners, so I had the big responsibility of guiding them to learn while having fun.
First thing, we created a Trello board to handle the brainstorming phase and later the…
Augmented Reality applications are spreading around us thanks to the evolution of Computer Vision algorithms and the relative ease of development using powerful frameworks such as Vuforia and ARKit.
Even Google has announced their AR framework, at the end of August 2017, offering developers a new software-only solution to create Augmented Reality applications in an easy way.
Google has finally released the 1.0 version of ARCore.
I’ve updated the sample source code with the new SDK; have a look :) to compare the changes in the API.
Fundamentally, ARCore is based on 3 main points to create virtual content on the real…
Daydream is the high-quality VR platform by Google. It was presented at Google I/O 16 by Clay Bavor. Today this platform is compatible with only one headset: Google Daydream View, a sort of Cardboard 3.0 with a 3DOF controller. And only a small subset of high-end smartphones is compatible with this platform.
For sure more compatible smartphones are yet to come, but the most exciting wait is for the Daydream standalone VR headsets:
brand new HMDs, presented at Google I/O 17, that work without a smartphone and are compatible with the Daydream API, and that will be available by the end of the year. …
Published on December 15th, 2015 | by Giovanni Laquidara
Back in August 2011, Marc Andreessen stated in the Wall Street Journal that “Software is eating the world”.
And now? How is it doing? Almost everything around us is based on software. Every business is, or will at some point be, based on software. Nowadays every one of us has software running in our pockets (on a smartphone) and on our wrists (on a smartwatch). We drive software-based cars, and soon they will become software-driven cars. We use software to play, to organize our lives, to look for our mates and to communicate with them (Tinder and WhatsApp, I’m thinking of you!). What more do we need from the digital arena? …