Green Code Lab Challenge 2015 Results
The 3rd Green Code Lab Challenge took place from December 2 to 4. It brought together more than 400 students and 25 professionals around an unsung yet essential topic in the digitalization of our economy: software eco-design. The objective of these 48 hours, for teams from schools in France and eight other countries, was to optimize a communication process between a connected object and a server. The Internet of Things has already invaded our daily lives and is expected to explode in the coming years. Even though each object's individual consumption is small, the cumulative impact on resources and energy is significant. Indeed, according to an October 2015 study by IDATE, there will be 155 billion connected objects in 2025, with an average annual growth rate of 14%.
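To give an idea of the kind of task the teams faced, a connected object pushing a sensor reading to a collector server could be written minimally in Python. This is only a sketch: the payload format, host, and port are hypothetical, not the challenge's actual protocol.

```python
import json
import socket

def encode_reading(sensor_id, value):
    """Serialize one sensor reading as a compact JSON line.
    Compact separators keep the payload, and thus the network cost, small."""
    return (json.dumps({"id": sensor_id, "v": value},
                       separators=(",", ":")) + "\n").encode()

def send_reading(host, port, sensor_id, value):
    """Push one reading to the collector over plain TCP (hypothetical protocol)."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(encode_reading(sensor_id, value))
```

A few sober lines like these, with no external dependency, are in the spirit of what the best teams produced.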
The top 3 teams averaged 31 lines of code, against 131 for the 22 functional teams. Simplicity is the friend of eco-design! Another interesting metric corroborates the value of sobriety in the quest for software efficiency: these three teams used on average a single external library, against 2 on average for the 22 functional teams.
In terms of energy gains, the team ranked 1st on the energy challenge consumed 421 times less than the last! This again demonstrates the impact of the developer's design choices on the final energy consumption.
The final ranking is available on the Green Code Lab Challenge website. A score of 0 points indicates either that the application was not functional, that there was no measurement, that the code was not shared, or that the code was not justified. Indeed, it is difficult to rank a team on a measurement alone, without functional code.
Many teams did not succeed in obtaining measurements or a functional application. The reasons: a lack of agility for some teams, a tunnel effect with a rush on Friday, applications that were not robust…
Here is an analysis by our partner Sigma of the evolution of the teams' functional results:
20% of the teams have no log at all; these are the teams that gave up. We see logs improving steadily until the end of the contest. Why not every team ended up with a functional application is explained below.
We also have the evolution of the teams with valid measurements in Greenspector:
Greenspector & dashboard
Greenspector was used for the challenge. A single account stored all the results, and a public dashboard made the rankings visible to everyone.
In the dashboard, teams could monitor their consumption:
Throughout the challenge, students could send their code and verify that it was functional. 8,500 tests were carried out during the competition via our infrastructure. In the Greenspector interface, the organizing team had access to detailed measurements. For each team, we could see the results of each measurement.
Greenspector managed a rather particular infrastructure: a cluster of 100 Raspberry Pis.
We worked many hours on this infrastructure, with all the associated problems: sizing the power supply, hubs to distribute power… Thanks to lara_hogan for her book Building a Device Lab, which served us well. http://www.fivesimplesteps.com/products/building-a-device-lab
Some figures on the infrastructure and teams
Here is the energy consumption of the teams that produced a functional application.
The 1st team overall (5th on Raspberry energy) shows a factor of 192 compared with the last team in terms of energy. On the other hand, if we take the first team in the energy ranking (above), the factor is 421!
Analysis of results
Statistics on the languages used on the Raspberry:
The languages used on the Raspberry by the top 10 teams:
Note that of the 85 teams, only 22 managed to have functional, measured code. Of these 22, only two teams had C or C++ code. Does that mean that Python is faster than C? No, our analysis is that the students went with a language they mastered. Moreover, a few lines of Python can carry out processing efficiently. It was also more difficult in C to reach functional, stable code in 48 hours. The top 3 teams averaged 31 lines of code, against 131 for the 22 functional teams. Simplicity is the friend of eco-design. Another metric: these three teams used on average 1 external library, against 2 on average for the 22 functional teams.
Feedback for Greenspector and next actions
This is part of a long history of Greenspector R&D. For several years, we have worked on measuring the software layer. It is not easy. A first idea is to put a wattmeter on the computer. But then, how do you retrieve the data? Is one measurement per second sufficient for software? And what if I want to measure a single process, without being polluted by other processes? We put a lot of R&D into solving these questions, and we also work with many great partners. The infrastructure we set up for the Green Code Lab Challenge contains the essence of this work.
The challenge for us was a technical one! We had 85 teams actually measured every 15 minutes. For this we used probes from our partners:
- For the Raspberry, the Power API probe, which estimates the energy consumption of a process.
- For the servers, the DSCope probe of Easyvirt, which estimates the energy consumption of a virtual machine.
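The core idea behind such process-level probes can be illustrated with a simplified model: attribute a share of the machine's measured energy to a process in proportion to its CPU time. This is only a sketch of the principle, not the actual implementation of either probe:

```python
def process_energy(total_joules, proc_cpu_seconds, all_cpu_seconds):
    """Estimate the energy attributable to one process by weighting the
    machine-level measurement by the process's share of CPU time.
    Simplified model: real probes rely on finer-grained data."""
    if all_cpu_seconds <= 0:
        return 0.0
    return total_joules * (proc_cpu_seconds / all_cpu_seconds)
```

For example, a process that used 2 of the 8 CPU-seconds consumed during an interval measured at 100 J would be attributed 25 J.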
This challenge was a success: we measured every team. However, the implementation revealed both our strengths and our weaknesses, and the corresponding corrections are being integrated into our roadmap, in particular the optimization of the Raspberry probe, which consumes too much resource for our taste.
For best practices, several conclusions:
- The best teams are those that apply the KISS principle (Keep It Simple, Stupid).
- A language can be more efficient, but you need to master it, and it depends on the context: one language for one need.
- IoT does not naturally imply efficient code; best practices must be applied to make it efficient.
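As one concrete illustration of such a best practice, batching readings so that a device wakes its network interface once per batch instead of once per reading is a classic IoT energy saver. A minimal sketch, where the batch size is an assumption to be tuned per device:

```python
def batch_readings(readings, batch_size):
    """Group readings into batches of at most batch_size, so the device
    transmits once per batch instead of once per reading."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    return [readings[i:i + batch_size]
            for i in range(0, len(readings), batch_size)]
```

The trade-off is latency: readings wait until their batch is full, which is acceptable for most telemetry but not for alerts.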
And now? We will integrate the feedback from this Challenge into Greenspector and prepare for the next one.