Future of Android in a fireside chat with Senior Android Developer, Kruti Patel

Google launched new technologies, programming languages and tools to support the next generation of Android development at last month's Google IO, one of the most sought-after tech events. This time it was all about Artificial Intelligence and Machine Learning.

Google built Android Studio on IntelliJ IDEA, which is made by JetBrains, so it was no surprise that they adopted JetBrains' new programming language, Kotlin, as an official language for Android.

We recently sat down with Kruti, a Senior Android Mobile Developer here at Isobar and chatted about her thoughts on the Android announcements made at Google IO 2017.

What’s different about the Kotlin programming language?

As an Android Developer, I was very excited about Google officially supporting Kotlin for Android. I started playing around with Kotlin to see what it offers that Java lacks. Kotlin has many amazing features that Java doesn't, but a few of my favourites are lambda expressions, inline functions and coroutines.
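To illustrate two of the features mentioned above, here is a minimal sketch of a lambda expression and an inline higher-order function (the `measure` function and its label are made up for this example, not a standard-library API; coroutines are omitted because they need the separate kotlinx.coroutines library):

```kotlin
// An inline function: the compiler substitutes the lambda body at the
// call site, avoiding the object allocation a regular lambda would incur.
inline fun <T> measure(label: String, block: () -> T): T {
    val start = System.nanoTime()
    val result = block()
    println("$label took ${System.nanoTime() - start} ns")
    return result
}

fun main() {
    // A lambda expression: concise syntax for an anonymous function,
    // here passed to the standard-library `map`.
    val squares = listOf(1, 2, 3).map { it * it }
    println(squares) // [1, 4, 9]

    val sum = measure("sum") { squares.sum() }
    println(sum) // 14
}
```

Because `measure` is inline, the timing wrapper adds no allocation overhead at the call site, which is why inlining matters for small, frequently called higher-order functions.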

How does this new programming language change the development process?

Over the past couple of years, one of the biggest programming shifts has been the move toward functional programming languages over procedural ones. Every time a new programming language is launched, developers have to upskill and adapt to it. Every language has its own syntax and encourages certain programming paradigms — each paradigm has a different approach to structuring code.

Kotlin encourages a functional programming paradigm that avoids changing state and mutable data. This improves code readability, scalability and ultimately, the quality of applications when compared to the widely adopted object-oriented paradigm of Java (the dominant programming language to date for Android applications). The beauty of Kotlin is in the process of using the language — it’s like working with oil paints as opposed to giant crayons.
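The functional style described above can be sketched in a few lines: immutable values declared with `val`, and transformations that return new values instead of mutating shared state (the `Order` class and its prices are invented for illustration):

```kotlin
// Immutable value type: all fields are read-only `val`s.
data class Order(val item: String, val price: Double)

fun main() {
    val orders = listOf(
        Order("coffee", 4.0),
        Order("tea", 3.5),
        Order("cake", 6.0)
    )

    // Instead of looping and mutating an accumulator, chain pure
    // transformations; `orders` itself is never modified.
    val total = orders
        .filter { it.price > 3.5 }
        .map { it.price }
        .sum()

    println(total) // 10.0
}
```

Because nothing in the pipeline mutates `orders`, the code is easier to reason about and safe to share across threads — the property the paragraph above credits to avoiding changing state and mutable data.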

What other exciting technologies were announced at Google IO this year?

Another very exciting technology that I am looking forward to following in the Augmented Reality (AR) space is Tango, an AR platform for mobile devices developed by Google. Google announced the Asus Zenfone AR at Google IO this year, which will have a Tango sensor and Daydream features. Basically, you can use the Asus Zenfone to see full-colour 3D maps of any physical space, and when you pop the phone into a Daydream VR headset you can experience a virtual walkthrough.

Sounds exciting! How does the new Tango technology work?

With Tango, Google has achieved centimetre-level accuracy in positioning and navigation without using GPS. The concept of AR with precise indoor positioning opens up enormous opportunities for retail, education and travel: imagine finding your way around a school campus with Tango, with directions and arrows overlaid on your view of the space. Google takes incredible risks in innovation and technology, and as a result we have seen technologies like Google Glass, Google Home, Waymo and Tango.

Google has published the Tango API for multiple languages — C/C++, Java and Unity3D — and there are many samples and tutorials available on GitHub. However, to run Tango-enabled mobile applications you will need a Tango-enabled device, which is a major drawback at the moment; Tango applications will not run on emulators at this stage.

What’s next?

As a software engineer I often get the privilege of working with multiple technologies and languages. In a recent client project we had the opportunity to develop an Android app using Kotlin, and we have also used React Native on a number of occasions. I would love to play around with these languages more and share our learnings with the internal team, as well as the wider developer community. The Asus Zenfone AR is launching in July this year, and once it is available we will have more access to Tango. It's exciting to see Google stepping up their game and further supporting developers like myself in creating the next generation of consumer applications.
