Create a smart clock with a Raspberry Pi

Simone Lippolis
15 min read · Nov 3, 2015


An image of the clock on display during the Frog’s event

Shopping list and system setup

I recently finished working on the first version of a “smart” wall clock based on a Raspberry Pi Model B. Since this kind of setup lets you create endless solutions while being extremely simple to develop, I think sharing the basics of how I did it could be interesting.

The basics

The smart clock shows the current time and weather, a weather forecast for the next 18 hours and waiting times at the bus stops closest to my house. As you’ll read, configuring the kind of data displayed is really easy, and it only requires a little knowledge of Linux systems (in my case, the Raspbian distribution) and PHP (though you can also use Python, NodeJS or any other language).

Shopping list

First of all, you’ll need a Raspberry Pi computer. I chose the “Model B” because it is a little bit more powerful than the original one. You can buy one from one of the Raspberry Pi official resellers. If you live in Italy, you’ll find that buying it on Amazon.it is easier than ordering it from the official sellers. I bought mine from this Amazon reseller.

The second step is to buy an SD card. Check the list of compatible SD Cards on the Raspberry Pi site. I am using this one bought (once again) on Amazon.

Your next step is finding a way to provide power to the Raspberry. You have two choices: if you plan to use a modern TV as a monitor for your clock, then you can just plug the Raspberry into one of the TV’s USB ports; the other solution is to use any micro-USB adaptor (yes, the one that comes with your Android or Windows Phone mobile will work perfectly). Should you need to buy one, Amazon is still a good place where you can find a cheap power adaptor. Every Raspberry Pi official seller has its own adaptor.
Also, I noticed that sites like Adafruit sell a “Raspberry Starter Kit” that already contains the Raspberry itself, the power adaptor and the SD card. You might consider buying one of them.

Another thing you’ll need is a monitor. My specific need was for something extremely small, so I chose a 4.3” LCD monitor. Monitors of this kind are the ones used as rear-view displays in cars; they’re very cheap (mine cost around 26 EUR) but not very accurate at displaying fancy graphics (consider this if your design contains a lot of pictures or low-contrast graphics). My idea is to fit everything, Raspberry + monitor, into some sort of box and hang it on the wall next to my apartment’s entrance door. If your requirements are different, you may find a different solution. Anyway, the monitor I am using is like this one, again, from Amazon.

You will probably need to find a way to supply power to the monitor: this kind of device is designed to work in cars, so you’ll need to buy a power adaptor to make it work in your house. Amazon is full of cheap devices that will do the job.

Another thing to consider is the internet connection: the Raspberry comes with an Ethernet port, but if you plan to use Wi-Fi you’ll have to buy a USB adaptor. Again, check the Raspberry Pi site for a list of compatible devices; the one I am using is an Acer USB-to-Wi-Fi dongle.

Raspberry Pi: 38.90 EUR
SD Card: 6.08 EUR
Power adaptor for the Raspberry: 16.99 EUR
Monitor: 29.55 EUR
Power adaptor for the monitor: 14.14 EUR
Wi-fi USB adaptor: 12.08 EUR
— — — — — — — — — — — — — — — —
Total: 117.74 EUR

Initial setup

The first thing you’ll need to do is set up your Raspberry properly. Follow the instructions on the Raspberry website on how to install the latest Raspbian version on your SD card. The process won’t take more than 30 minutes; once it’s completed, just insert your SD card into the Raspberry, plug the power in, and start some advanced configuration. During this process, my advice is to connect the Raspberry to a real monitor, just to be able to read what appears on the screen.

Now you have to set up your network connection: if you plugged in an ethernet cable, you don’t need to do anything. If you are using the USB Wi-Fi adaptor, follow this tutorial to configure it.
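For reference, on the Raspbian releases of that period the tutorial boils down to adding a block like this to /etc/wpa_supplicant/wpa_supplicant.conf (the SSID and passphrase here are placeholders):

```
network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}
```

After a reboot (or an ifdown/ifup cycle of wlan0) the dongle should associate automatically.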

The next phase consists of updating your system (type sudo apt-get update) and installing the server-side software you’ll need. I wrote my bots in PHP, so I installed Apache and PHP5 with mod_curl. Follow this tutorial for step-by-step instructions.

Now your Raspberry is ready to be transformed into a kiosk or, since the monitor you bought is smaller than the one on your iPhone, a wall clock. Follow this tutorial to install chromium and configure your Raspberry to boot with a full-screen chromium instance. If you plan to use a small analog monitor as a display, you can skip any editing of the /boot/config.txt file.

Once it’s done, you’re almost set. Shut the Raspberry off, disconnect it from the “real” monitor and connect it to your small monitor. Turn the power on and see what happens.

Well, actually I know what will happen. The Raspberry will start, and you’ll see a lot of unreadable lines on your monitor. Since the screen you are using is analog, the Raspberry is unable to read its resolution and properties, so you’ll have to configure them manually. Just use another computer to SSH into your Raspberry Pi, and follow this tutorial on how to set up the screen resolution. You’ll probably need to play a lot with the overscan settings: my monitor was sold as 480x272 px, but I was forced to add more than 50 px at the top and bottom as overscan. This is the most boring part, because every time you edit a parameter you need to reboot to see the changes. But once it’s done, you’re all set!
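For reference, the parameters you end up tweaking live in /boot/config.txt; this is a sketch of the kind of block I mean (the values below are illustrative, not the exact ones from my setup):

```
# /boot/config.txt (illustrative values for a small composite screen)
sdtv_mode=2             # composite video output, PAL
framebuffer_width=480   # the panel's advertised resolution
framebuffer_height=272
overscan_top=55         # tune these reboot by reboot until the
overscan_bottom=55      # picture fits your panel
```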

The bots

We’ll now talk about some Apache configuration, some cron configuration and shell scripting, and writing bots with PHP. Later on, I’ll show you how to use AngularJS to create the clock’s UI.

Architecture

The smart clock is extremely simple and, at least for the moment, the software that drives it is… extremely simple as well. I could have developed the interface in some native language, but I thought that using HTML5 was easier and faster, and that this choice would let me easily expand its functions. I figured that with some small changes I would be able to install it on any kind of computer, or turn it into a mobile application. This is the reason why I tried to keep the front-end (what you see when you look at the screen) and the backend (all the bots collecting and parsing information) completely separated.

The front-end is made of four parts:

  • an HTML page that defines the container for an AngularJS application;
  • an AngularJS application;
  • a stylesheet;
  • a couple of PHP scripts that read the data saved by the bots and serve it to the AngularJS application.

All these parts run under the default Apache installation we already set up; our application will be installed under /var/www and will answer at http://localhost/ on port 80 (remember the tutorial to “install chromium and configure your Raspberry to boot with a full-screen chromium instance”? This http://localhost/ is the URL you have to write in the xinitrc file).
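That line of the xinitrc typically looks something like this (a sketch; the exact flags depend on the Chromium version the tutorial installs):

```
# ~/.xinitrc (sketch)
xset s off       # don't blank the screen
xset -dpms       # don't power the display down
chromium --noerrdialogs --kiosk http://localhost/
```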

The basic software architecture of the system

The backend is made of a collection of different, completely independent bots. I chose to keep them under my user’s directory because Raspbian runs Apache as root, and I wanted to avoid creating problems on the machine should the bots do something wrong. Besides, I didn’t want to mess with the Apache configuration. /home/pi/bots/ is the location for all my bots.

Every bot is contained in its own directory and has a unique name. The same directory contains the output file of the bot’s activity, and a shell script (cron.sh) which is called periodically by cron and which I use to avoid running more than one instance of the same bot at the same time. Since this method is not exactly elegant, just be sure to give a unique name to each of your bots. (In the architecture diagram above, the dotted line represents the link between front-end and backend.)
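Concretely, the layout looks something like this (the bot and file names are my illustration; the article only fixes the /home/pi/bots/ root and, later, the info.json output of the ATM bot):

```
/home/pi/bots/
├── weather/
│   ├── weatherbot.php   # the bot itself
│   ├── cron.sh          # the watchdog called by cron
│   └── weather.json     # the bot's latest output
└── atm/
    ├── atmbot.php
    ├── cron.sh
    └── info.json
```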

My idea was to provide the front-end with information from different sources. In this first release I’m showing a clock, the current weather and the weather forecast, and the waiting times at the bus stops closest to my apartment. While for the clock we don’t need any kind of backend (JavaScript can read the current time from the browser), we need some backend code to get weather and public transport info. Let’s start from the basics.

PHP

PHP is a solid, easy-to-learn server-side scripting language. It’s the server-side language I know best, which is why I chose it for the first prototype. It provides built-in functions to read content from remote servers (using cURL or “normal” HTTP calls), and the huge community around it provides a lot of documentation, libraries, and objects that can help you do almost everything. If you’ve never tried it, have a look at the documentation on the site and search Google for tutorials. If you already know it but have never written a bot with it, my advice is to follow Giulio Pons’ blog Barattalo, and to buy and study “Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL”; you’ll learn the basic techniques for grabbing a webpage with PHP and parsing it to extract the information you need.

How to get weather info

There are a lot of providers of weather information on the web. Some are more reliable than others; some provide APIs, some require page-scraping to get the info you want. My choice was OpenWeatherMap, because they provide a free API and because I like this kind of free, open-source project. Getting the weather information is as simple as reading the OWM API documentation, creating the custom URL for your location, and grabbing the returned JSON file. Here is the code I am using:

How does this code work? It’s pretty simple: file_get_contents() grabs the weather forecast from OpenWeatherMap, and the following loop creates a new PHP object containing only the information I need: date and time, temperature, short and long description, and an icon code. Once done, the object is converted into a JSON file and saved to disk. The script then sleeps for 50 minutes before requesting another set of forecasts.

Note: freezing the execution of the script for 50 minutes is possible only because a) we can set a longer time limit (with the set_time_limit() function); b) we’ll control the execution of the bot using a shell script run by the cron.

When consuming third-party APIs, always remember to respect usage limits, and write your bots in a way that will not harm the service. The OpenWeatherMap pricing page states that their APIs are free if you issue fewer than 30.000 calls per minute: I am safe on this side, since I issue just one request every 50 minutes.

I wrote that a shell script checks whether my PHP bot is running correctly. Adding an entry to the crontab is easy (here’s a simple tutorial); I configured it to run a script called cron.sh located in the same directory as the PHP bot. The script just checks whether a process with the same name as my bot is already running; if not, it launches it. Here is the code:
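The gist is likewise missing from this copy of the article; this is a sketch of such a guard script (the bot’s name and path are illustrative; the point is the bracketed grep pattern, which keeps grep from matching its own command line):

```shell
#!/bin/sh
# cron.sh (sketch): launch the bot only if no instance is already running.
if ps aux | grep "[w]eatherbot_clock.php" > /dev/null
then
    : # already running, nothing to do
else
    php /home/pi/bots/weather/weatherbot_clock.php > /dev/null 2>&1 &
fi
```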

This isn’t a very accurate way to check whether a process is running, but it works in my case: I’m not performing this check on a shared and/or production server, I have total control over what’s running and… I gave my PHP bot a very unique name. Should you need something similar on a real production server, please use Google to find a better solution.

We’re done with our first bot, now let’s talk about the other one, the one that grabs waiting time from ATM’s website.

How to get data from websites that don’t have public APIs

You may want to grab data from websites for various purposes: because you want a customized list of news, or because you want to monitor price changes over a period of time, or maybe, as in my case, because you just want to know how much time you are going to spend at the closest bus stop. A lot of information can be grabbed from publicly available repositories or APIs, but not every company is kind enough to provide all the information you need in an easily readable format. That was my case.

When writing a “screen scraper” (a computer program that reads what is shown on a screen) or a “bot” (a software application that runs automated tasks, like collecting data, over the Internet), always remember that what you are doing can be either a) illegal, or b) annoying (or harmful) for the target website and its owner. Remember that being invisible is the best choice: don’t issue too many requests, don’t force the target webserver to do too much work, respect the robots.txt directives if present, and never run a bot that can harm or disrupt other people’s business.

ATM (the Transit Authority for the City of Milan) has a nice service called “Giromilano” that helps you move around the city of Milan: you define the start and end points and the date and time of your travel, and it returns the route, complete with a timetable at each bus/tram stop and an estimated time of arrival. ATM can provide this kind of information because they know in real time where each single bus/tram is. This information is shown at bus stops, and it’s usually accurate. Starting from the result page, you can navigate to the detail of each bus/tram line, look at its timetable, see the location of each stop on the map and, by clicking on one of the points on the map, learn the waiting time for that line at that stop. But how can you get at it?

NETWORK INSPECTOR IS YOUR FRIEND

At this point, I knew that the information I wanted was somewhere on the internet, but “hidden”. In cases like this, you need a network analyzer tool (like Fiddler, if you’re on Windows) or the Network tab of your browser’s inspector. Keeping the inspector open, I went through all my steps, from the search to the final page. The inspector shows every data transfer between your current browser tab and the server (whatever server it is), and provides a preview of the sent data and the response. I was looking for a call to a backend service originated by an interaction with a map: there weren’t many calls, if you exclude the map tiles (in the form of PNG images). It took me something like 5 minutes to find the right service to target.

Once I had identified the URL of the web service, I tried to understand the format of the request, which turned out to be a POST of an XML file: that was the reason why calling it directly from the browser wasn’t working. I then went back to the network inspector, to read the XML sent to the server: it contained the number of the line, the name of the stop, and a numeric code.

The numeric code turned out to be the unique ID of each stop, which is also printed on every one of them. Now that I knew how to fire a query, I had a look at the return values: I hoped for a JSON file, or another XML file, but I wasn’t that lucky: the web server returned an XML with an ugly HTML snippet inside it:

The information I needed was just the “3 min.” string buried in that markup. Time to parse the result.

PARSING HTML CODE

The difficulty of parsing an HTML page depends on how the page was written by its author, on how big it is, and on the tools you use to parse it. You can of course try to parse it by editing the string with “replace”, “substring” and “split” operations but, believe me, you’d be wasting your time. If the HTML is simple, you can write your own regex to extract the information you need. An approach I usually like is to check whether someone has already written a library that can solve my problem. For this project I used the LIB_parse.php library distributed with the book “Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL”. Another good option would have been the Simple HTML DOM class (an article about it can be found on Barattalo). Now I have everything I need to write the code:

You may not be able to use this code as-is, but it shows how to issue a cURL request with parameters and a custom referrer URL using PHP. It also shows how to use the return_between() function (part of the LIB_parse library) to extract the content of an HTML string.
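Since the gist itself is missing here, and LIB_parse is PHP, the extraction primitive can be sketched in JavaScript instead (the sample string below is invented for illustration, not ATM’s real response):

```javascript
// Sketch of LIB_parse's return_between() idea: grab the first substring
// found between two delimiters, or null if either delimiter is missing.
function returnBetween(haystack, start, end) {
  const from = haystack.indexOf(start);
  if (from === -1) return null;
  const begin = from + start.length;
  const to = haystack.indexOf(end, begin);
  if (to === -1) return null;
  return haystack.substring(begin, to);
}

// Invented example of an XML envelope with HTML inside, like ATM's reply:
const sample = '<Response><![CDATA[<div class="wait">3 min.</div>]]></Response>';
// returnBetween(sample, '<div class="wait">', '</div>') → "3 min."
```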

This bot behaves like the previous one: it is wrapped in an infinite loop, and it is launched the first time by a shell script. After each iteration (for this bot, one every 59 seconds), the info.json file will contain the computed output: an array of objects containing the bus line number, the name of the targeted bus stop, and a string representing the waiting time.

The frontend

In the first part we set the URL of our web application in the file called xinitrc; in my case, I pointed it to “http://localhost”, which more or less means “request the page from the web server running locally”. The physical path (or directory) that maps to localhost differs between Linux distributions (it’s also pretty easy to customize; you’ll find dozens of tutorials on Google), but in our default Raspbian installation it points to /var/www/: this is where we should put our files.

The frontend for our application will be very simple: on the one hand we don’t need a super-complex interface, on the other we need to deal with the performance of the Pi (no 3D graphics acceleration means no CSS3 animations) and of the monitor (I know, the one I chose sucks, but it was cheaper and smaller).

To create the user interface I chose AngularJS, a JavaScript MVC framework developed by Google that greatly simplifies data binding between the application layer and the HTML DOM. If you don’t know it, read the documentation and try some tutorials: you’ll find it very easy and intuitive for small projects, but don’t be fooled: on big web projects things start to get complex.

Back to the code: the first thing to do is create our index.html page, the one that will be loaded once we start the Pi:

index.html
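The embedded index.html gist did not survive in this copy of the article; this is a minimal sketch of what such a page looks like (the module, controller and file names are my invention; the four panel bindings are the ones the article lists below):

```html
<!-- index.html (sketch): a plain HTML5 page with AngularJS bindings -->
<!DOCTYPE html>
<html ng-app="clockApp">
  <head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="css/style.css">
  </head>
  <body ng-controller="PanelCtrl">
    <div id="clock">{{panel.clock}}</div>
    <div id="weather">{{panel.weather}}</div>
    <div id="forecast">{{panel.forecast}}</div>
    <div id="atm">{{panel.atm}}</div>
    <script src="js/angular.min.js"></script>
    <script src="js/app.js"></script>
  </body>
</html>
```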

As you can see, this is a very simple HTML5 page, enhanced with AngularJS attributes. The text between the double braces {{ and }} is output from AngularJS: during the execution of the script it is replaced by the actual values of those variables. This code highlights why I chose AngularJS for this project: it takes care of the data binding, so I don’t have to do anything when the value of a variable changes. The interface updates automatically.

The information that I am going to show is:

  • the time (“panel.clock” in the source code);
  • the current weather (“panel.weather”);
  • the weather forecast for the next hours (“panel.forecast”);
  • and, obviously, the waiting time at the bus stop (“panel.atm”).

The Javascript code

The Javascript code here is just a reference, you should write your own. What I do is issue different AJAX requests to get the weather and the waiting time, while the clock uses the Pi’s internal time to show the hours, the minutes, and the seconds.
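As a reference of my own (not the article’s source), the clock part can be reduced to a pure function; the rest is just periodic polling of the proxy scripts:

```javascript
// Sketch (illustrative, not the article's code): the clock needs no backend,
// it derives the displayed fields from the local time of the Pi's browser.
function clockFields(date) {
  const pad = n => String(n).padStart(2, "0");
  return {
    hours: pad(date.getHours()),
    minutes: pad(date.getMinutes()),
    seconds: pad(date.getSeconds())
  };
}

// Weather and waiting times, instead, are refreshed by polling the PHP
// proxies (e.g. fetch 'scripts/weather.php' once a minute, parse the JSON,
// assign it to the bound variable, and let AngularJS repaint the panel).
```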

The only tricky part in the frontend flow is making the web app read the data that I saved in each bot’s folder. To achieve this I wrote three very simple proxy scripts (you can see references to them in the Javascript source: ‘scripts/atm_alerts.php’, ‘scripts/atm_stops.php’, ‘scripts/weather.php’): their task is just to grab the data file generated by the bots and serve it to the Javascript application via HTTP.

Everything should be fine now, here is the final result:

The 1st draft of the clock’s interface

The Physical object

The concept designed by Malin Grummas, more screenshots on her Behance page.

Once all the software and electronics were finished, the problem was creating a nice enclosure for it. I don’t have any experience in industrial design, but luckily I work at a company full of talented people willing to help. Frog’s Malin Grummas came to help, designing a concept for the clock and a temporary stand for the prototype (which was then built in Frog’s own Munich model shop). In this prototype form the clock was presented at Frog’s event during the Salone del Mobile di Milano 2014.


Simone Lippolis

Data Visualization Practitioner in Milan, with a passion for connected objects and photography.