What came and what’s to come at Geoblink Tech

Happy new year! In these first days of 2018, here at Geoblink we have taken a quick look back at the technologies that excited us most during 2017. This includes technologies that some of us had to learn in order to work with our existing systems, ones we played around with just for fun, and others that were new to us and cool enough to end up in our production systems.

Not only that, but we also compared that list against the technologies that each of us is looking forward to learning or working with in 2018. We hope you find the list interesting, and if you want to comment on it, let us know on Twitter (@geoblinkTech).

Data team

2017: Luigi, workflow manager

2018: Integrating Machine Learning with PySpark

2017: JavaScript (not that common a tool for a GIS engineer!)

2018: Finding patterns and trends in our data

2017: Writing asynchronous code in Python, and pandas

2018: Machine learning
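One of the 2017 picks above was writing asynchronous code in Python. A minimal sketch of the pattern with the standard library's asyncio (the fetch_stats task and the region names are made up for illustration, standing in for real I/O-bound calls such as database or HTTP requests):

```python
import asyncio

async def fetch_stats(region: str) -> dict:
    # Stand-in for an I/O-bound call; asyncio.sleep simulates the wait.
    await asyncio.sleep(0.01)
    return {"region": region, "score": len(region)}

async def main() -> list:
    # Launch several "requests" concurrently; gather preserves input order.
    regions = ["Madrid", "Barcelona", "Valencia"]
    return await asyncio.gather(*(fetch_stats(r) for r in regions))

results = asyncio.run(main())
print(results)
```

The win over sequential code is that the waits overlap: three 10 ms "requests" finish in roughly 10 ms total instead of 30 ms.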

2017: I started using PySpark to work with Spark from Python for tasks involving a very large database (Cuebiq), and then began looking directly at Spark itself. I also improved my knowledge of Postgres and PostGIS.

2018: I want to start using the new features included in Postgres 10, and I would also like to learn more about Spark and other Big Data tools for analyzing huge quantities of data. Beyond that, improving my Machine Learning skills.

2017: Especially the Geoblink tech talks we do covering new technologies that we are not currently using.

2018: Probably talks where everybody can do stuff together at the same time. It’s the best way to learn quickly.

2017: Postgres/PostGIS: while I had some previous basic experience with SQL-flavoured databases, 2017 was the year of diving into Postgres (and its exciting version 10) and learning about the geospatial analysis niceties in PostGIS.

2018: Apache Spark: given the volume of data we are currently working with at Geoblink, using distributed Big Data tools has become a must.

2017: Did a couple of data science projects with PySpark (using DataFrames and RDDs).

2018: Understand deep learning models better: when to use them, which type to choose, and how to tune them.

2017: I’d pick team management and the basics of Spark.

2018: Lots of stuff! Writing efficient code in Scala, Spark foundations and data analysis, building data pipelines to automate processes, learning about Bayesian networks and how they can be applied to our work, deep learning, and probably more team management, like how to lead and scale data teams efficiently.

2017: Understand what a model is and how geolocation works: coordinates, coordinate reference system (CRS), visualization…

2018: Get deep into how GeoSpark works, find use cases for Spark MLlib, and learn more about Big Data infrastructure (best options, choices, costs, …).
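Since coordinates and coordinate reference systems come up a couple of times above: a classic first exercise when learning how geolocation works is the haversine great-circle distance between two WGS84 latitude/longitude points. A self-contained sketch (the Madrid and Barcelona coordinates are just illustrative):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 lat/lon points, in km."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Madrid -> Barcelona, about 500 km as the crow flies
print(round(haversine_km(40.4168, -3.7038, 41.3874, 2.1686)))
```

In production you would usually let PostGIS do this (geography types compute geodesic distances for you), but the formula is handy for sanity checks.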

Infrastructure team

2017: Using Ansible to store our Infra configuration

2018: Chef

2017: Being able to use Linux at a lower level, learning the internals of the operating system.

2018: Continuous Delivery architecture. Transitioning from an integration-deployment model to deployment per feature, with everything that entails.

Core (web) team

2017: Concurrency with golang

2018: Wasm

2017: We’ve started to have most of our code covered by end-to-end tests, using CypressJS to that end. Its beta has just been released and we’re really excited to see what’s coming. We also had the chance to build some components of our app in VueJS, and I can’t wait to start using it more and more.

2018: More VueJS!

2017: Reactive programming, incredibly useful for keeping UI state consistent even in the most complicated environments, with tons of possible ways of mutating data.

2018: Vue.js + Vuex: Vue.js as a component-oriented framework for building user interfaces targeting the browser, and Vuex to implement one-way data flows easily. Both are considered top good practices in web frontend development right now.
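The one-way data flow that Vuex implements is easy to sketch in any language. Here is an illustrative Python miniature (not Vuex's real API): components commit mutations, mutations are the only code that touches state, and subscribers re-render from the new state:

```python
class Store:
    """Toy Vuex-style store: state changes only through named mutations."""

    def __init__(self, state, mutations):
        self.state = dict(state)
        self._mutations = mutations
        self._subscribers = []

    def subscribe(self, fn):
        # Subscribers play the role of components re-rendering on change.
        self._subscribers.append(fn)

    def commit(self, mutation, payload=None):
        # The single choke point where state changes, which keeps
        # every update traceable to a named mutation.
        self._mutations[mutation](self.state, payload)
        for fn in self._subscribers:
            fn(self.state)

store = Store(
    state={"count": 0},
    mutations={
        "increment": lambda state, n: state.update(count=state["count"] + (n or 1)),
    },
)
store.subscribe(lambda state: print("render with", state))
store.commit("increment", 2)
print(store.state["count"])  # prints 2
```

The point of the pattern is that no component mutates state directly, so debugging reduces to replaying the mutation log.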

2017: I did a lot of things using React Native and some serverless stuff on AWS

2018: I’m almost sure that it will be VueJS

2017: Along with PostGIS version 2.4, we created our first vector tiles by querying the database directly through Node.js and visualizing them on a map with Leafletjs. We also started using headless Chrome instead of PhantomJS, which has saved us several headaches.
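For context on the vector tiles mentioned above: a tile server has to translate slippy-map tile coordinates (z/x/y) into geographic bounding boxes before querying the database. The standard Web Mercator tile math is compact enough to sketch on its own (this is generic tile arithmetic, not our Node.js/PostGIS code):

```python
from math import atan, degrees, pi, sinh

def tile_to_lonlat(z, x, y):
    """North-west corner of slippy-map tile z/x/y in lon/lat degrees."""
    n = 2 ** z
    lon = x / n * 360.0 - 180.0
    lat = degrees(atan(sinh(pi * (1 - 2 * y / n))))
    return lon, lat

def tile_bbox(z, x, y):
    """(west, south, east, north) bounding box of tile z/x/y."""
    west, north = tile_to_lonlat(z, x, y)
    east, south = tile_to_lonlat(z, x + 1, y + 1)
    return west, south, east, north

# Whole-world tile at zoom 0: (-180.0, ~-85.05, 180.0, ~85.05)
print(tile_bbox(0, 0, 0))
```

The bounding box is what you would feed into the spatial WHERE clause (or, in newer PostGIS versions, into functions that build the tile geometry for you).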

2018: This year I am looking forward to starting to code some services in Golang, and to migrating our AngularJS components to VueJS.

As for myself, during 2017 we hit the limits of some of our data systems and started using distributed databases, building a few prototypes with Cassandra and thinking about how to use it in production. The release of Postgres 10 later in the year provided some alternatives, so we opted to continue with Postgres while keeping an eye on new versions of Cassandra.

And for 2018, we are going to embrace Ansible Playbooks, using them for all of our Infra operations and configuration. We will also keep experimenting with new ways of storing and querying our geolocated data; I have GeoMesa in the queue to take a look at and decide whether it would be worth exploring for us.
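For readers unfamiliar with Ansible Playbooks: they are YAML documents describing the desired state of a set of hosts, which Ansible then enforces idempotently. A minimal illustrative sketch (the webservers group and the file paths are invented, not our real configuration):

```yaml
# Minimal playbook sketch: install and configure nginx on a host group.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Deploy site configuration
      copy:
        src: files/mysite.conf
        dest: /etc/nginx/conf.d/mysite.conf
      notify: Restart nginx
  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted
```

Because tasks describe state rather than steps, re-running the playbook on an already-configured host changes nothing, which is what makes it safe to use for all operations.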

By Miguel Ángel 'MA' Fajardo

Geoblink Tech blog

Tech&Data blog from the team powering the Geoblink systems. We are the engineers, developers, data scientists, mathematicians and physicists trying to build the best Location Intelligence tool out there.
