Introducing CoderBot, a didactical, open source robot

Published in CoderBot Student Voices · 13 min read · Oct 23, 2015

A fresh, open-source, inexpensive, camera-equipped robotic system, conceived as a didactic tool for Primary School.

Introduction

The use of robots as tools to foster the development of disciplinary and cross-disciplinary abilities and competences, in schools and in other kinds of educational agencies, is now widespread [1]–[3], and many educational robotic platforms and kits are available on the market. Some systems have a fixed rover-like form (like the Bee-Bot and the Blue-Bot), while other systems can be assembled so as to assume various different forms, such as the LEGO MINDSTORMS kit [6]. Some systems can be programmed through an interface directly embedded in the robot (such as the Bee-Bot), while others require software running on a computer connected to the robot (as in the LEGO system). So-called “tangible programming” systems [4] include the Cubelets and the littleBits systems.

The aim of this article is to present an inexpensive, open-source, camera-equipped robotic system, provisionally named CoderBot (currently at version 1.0), which offers some distinctive advantages with respect to many of the aforementioned platforms, especially for use with Primary School children. Its most interesting features include:

  • a) being easily programmable through a Scratch-like interface;
  • b) having a camera, which is notably absent in many other educational robots;
  • c) being open-source, thus highly customizable for a wide variety of needs;
  • d) being cheaper than many other systems, thus affordable for schools (it is built on top of consumer-available and inexpensive hardware such as the Raspberry PI computer, commonly used for educational purposes).

The hardware and software structure of the system is described in the next Section. A brief report on a pilot CoderBot-supported activity, exemplifying its potential uses in Primary School, is described in Section “Mission to Mars game”. The development of a second release of the robot, with several revisions to the hardware, to the software and to the external structure of the current 1.0 release, is discussed in the concluding Section “Engineering CoderBot 2.0”.

CoderBot 1.0

CoderBot (a Bot for young Coders) 1.0 is a small robot assembled in the form of a rover, equipped with a camera and a speaker, and easily programmable through a Scratch-like web-based interface. A version with a PMMA (Plexiglas®) structure is shown in Figure 1.

Figure 1 — CoderBot 1.0 PMMA version (speaker not shown).

Hardware. The robot is built on top of the Raspberry PI microcomputer, a powerful yet inexpensive full-fledged computer widely used in schools. A custom motor controller is plugged into the GPIO (General Purpose IO) connector of the Raspberry. The controller drives two small 5V DC motors via an H-bridge circuit (based on the L293D IC in the prototype used in the case study described below), using PWM (pulse width modulation) to control speed. A Wi-Fi adapter is plugged into one of the USB ports of the Raspberry. Energy is provided by a Lithium-Ion battery of about 5000mAh capacity with two separate DC-DC converters, each providing a 5V, 1.0A output. The dual power supply is mandatory: separate circuitry ensures that motor overload at start-up does not cause power jitters to the logic (the Raspberry PI), which requires a stable 5.0V supply.
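The PWM speed control described above can be sketched as a small pure function: a signed speed is mapped to the two direction inputs of the H-bridge plus a duty cycle for the enable pin. The function below is an illustrative assumption, not the actual CoderBot driver code, and the pin-writing step is only indicated in a comment.

```python
# Sketch of PWM speed control for one motor through an L293D H-bridge.
# The function and its contract are hypothetical illustrations; the real
# CoderBot motor driver may differ.

def motor_signals(speed):
    """Map a speed in [-1.0, 1.0] to H-bridge inputs.

    Returns (in1, in2, duty), where in1/in2 select the direction on the
    L293D and duty is the PWM duty cycle (0-100) applied to the enable pin.
    """
    if not -1.0 <= speed <= 1.0:
        raise ValueError("speed must be in [-1.0, 1.0]")
    if speed >= 0:
        in1, in2 = True, False   # forward polarity
    else:
        in1, in2 = False, True   # reverse polarity
    duty = abs(speed) * 100.0    # PWM duty cycle in percent
    return in1, in2, duty

# On the robot these values would be written to the GPIO pins, e.g. with
# RPi.GPIO: GPIO.output(IN1, in1); pwm.ChangeDutyCycle(duty)
```

Driving both motors forward at half speed would then mean applying `motor_signals(0.5)` to each channel; opposite signs on the two channels produce an in-place turn.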

Camera and artificial vision. The robot is equipped with an integrated camera which can be used to capture live video and snapshots. In the current 1.0 release, the camera can be oriented in various ways, so as to “look” straight in front of the robot, or a little (30°) upward or downward. Once the images are captured (through the PiCamera interface), they can be accessed in the following ways.

  • Live stream: the frames are compressed as a MJPEG video stream and continuously sent to the controlling terminal.
  • Video recording: the frames are compressed as H264 videos and locally stored for successive replay.
  • Computer vision: the frames, uncompressed and scaled to low resolution (160x120 pixels), are processed by calling OpenCV image processing functions from CoderBot’s programming environment (see below).

Unit. CoderBot 1.0’s main structure is custom-designed and produced using laser-cutting technology (Figure 2). Several editions have been developed: the PMMA version allows the user to see the internal machinery of the robot, while the wooden version is cheaper and easier to assemble. A newly engineered unit is under development, as described in the concluding Section “Engineering Coderbot 2.0”.

Figure 2 — Main structure laser-cut design (PMMA version).

Software. The software stack is based on Linux, Python and several open source libraries and frameworks, most notably the OpenCV framework and the Blockly visual programming editor library. See Figure 3 for a schema of the software. A key feature of CoderBot is that the whole user interface is provided by the software installed on the robot itself, so there is no need to install anything on the controlling computer. Indeed, at system boot, the robot creates its own “AP” (Access Point) Wi-Fi network, acting as a router with a DHCP service (it can also be configured to join an existing Wi-Fi network as a client to increase the range). An embedded, Python-based web server automatically starts listening for HTTP requests. To control the robot, the client computer must only 1) establish a connection to the robot’s Wi-Fi network, and 2) load the http://coderbot:8080/ URL in a browser in order to access the full, HTML5-based user interface. It follows that the software requirements for a client computer to control CoderBot 1.0 (and all future versions) are limited to having a Wi-Fi connection and being able to run an HTML5-compliant browser.
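The control flow just described — a browser sends HTTP requests, and the embedded Python server dispatches them to motor functions — can be sketched with the Python standard library alone. The route names and handler functions below are hypothetical illustrations, not the actual CoderBot API.

```python
# Minimal sketch of the HTTP-to-motor dispatch performed by an embedded
# web server. Route names and handlers are hypothetical, for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse

def move_forward():  return "forward"
def move_backward(): return "backward"
def turn_left():     return "left"
def turn_right():    return "right"

ROUTES = {
    "/control/forward":  move_forward,
    "/control/backward": move_backward,
    "/control/left":     turn_left,
    "/control/right":    turn_right,
}

def dispatch(path):
    """Resolve a request path to a motor command (or None if unknown)."""
    handler = ROUTES.get(urlparse(path).path)
    return handler() if handler else None

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        result = dispatch(self.path)
        self.send_response(200 if result else 404)
        self.end_headers()
        if result:
            self.wfile.write(result.encode())

# On the robot: HTTPServer(("", 8080), ControlHandler).serve_forever()
```

In this scheme the HTML5 buttons of the interface only need to issue GET requests such as `/control/forward`; no software beyond the browser is required on the client side.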

Figure 3 — Software components diagram.

The user interface of the current release of CoderBot is composed of two main sections, “Control” and “Programming”.

Control interface. This is a drone-like cockpit, featuring a live view of the robot’s surroundings acquired through the front camera, which allows direct control of the robot’s main features (Figure 4). The available commands allow the user to tele-operate the robot through direct motor commands (forward, backward, turn left, turn right), to take snapshots, to record a video, and to emit sounds or synthesized speech through the audio speaker. The direct-control HTML5 buttons send HTTP messages to the web server, which trigger Python motor control functions. Any computer with an HTML5-compliant browser can therefore control CoderBot; no additional software is required.

Figure 4 — Screenshot of the control view.

Programming interface. The programming interface is based on Google’s Blockly library. Commands are available in the form of blocks to be dragged and dropped into the programming area, in a Scratch-like fashion (Figure 5, left). “Power users”, such as teachers or parents, can customize the programming environment so as to show only a subset of the available direct control and programming instruction blocks (Figure 5, right). Unnecessary commands and functionalities can thus be hidden, so as to suitably constrain children’s reasoning processes towards the solution of the game or problem proposed and, more generally, towards the achievement of the target learning objectives (this point will be exemplified in the case study described in Section “Mission to Mars game”).

Special custom-designed function blocks, not available in the original Blockly distribution, allow programmers to write computer vision algorithms to detect objects in the field of view of the camera and react accordingly. Blockly programs are stored directly in the Raspberry PI’s internal memory and called by the web server when the “Run” HTML5 command is pressed in the interface. Given the size of currently available Micro-SD cards, one can ideally build very large programs calling a variety of auxiliary multimedia files (e.g., MP3 files recorded by children and called at certain points of the program). The same is not true of other educational programmable robots, e.g., the LEGO Mindstorms, which have very limited storage.

Advantages over existing educational robots. It follows from the above description that CoderBot 1.0 lacks one of the most significant features of systems such as the LEGO Mindstorms or the Cubelets, namely the possibility for children to manipulate its physical structure (this feature will be partially added in CoderBot 2.0, as specified in Section “Engineering Coderbot 2.0”). This certainly constrains the space of the “robotic creatures” that can be obtained with it. Nevertheless, it also has a number of distinctive features which are lacking or under-developed in other commercially available systems.

The first one concerns programmability. As described above, once turned on, CoderBot automatically starts listening for user commands and can be easily and intuitively programmed through a simple drag-and-drop, Scratch-like system. For this reason, children can focus more on finding the algorithm to solve a problem than on the technical complexities needed to implement it on the specific hardware. Other systems are not so easily and quickly programmable. Programming a LEGO Mindstorms robot, for example, requires one to perform a number of auxiliary operations that have more to do with the specific hardware than with the algorithm proper, including establishing a Bluetooth or USB connection and checking whether the software “sees” the robot, operations which are often rather frustrating for young children.

Another advantage of CoderBot 1.0 is the availability of a camera, which is notably absent in many other educational robots. A number of video processing and artificial vision functions, which may enable one to implement a variety of visuo-motor interaction algorithms, are currently available as blocks in the programming environment.

For example, the robot can be programmed to follow a ball through a blob-recognition function, or to associate particular motor behaviors to visually presented QR-Codes or symbols. Clearly, the camera can be also used as an environmental light sensor or, if oriented downward, to realize a line-following or path-following algorithm.
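The ball-following behaviour mentioned above reduces to a simple steering decision once the blob has been located in the frame. The sketch below shows only that decision step, on the 160-pixel-wide frames used for vision; the threshold value and command names are assumptions for illustration, and the blob-detection step itself (which on the robot would use OpenCV) is only indicated in a comment.

```python
# Sketch of the steering decision in a ball-following behaviour: given
# the horizontal position of a detected blob in a 160x120 frame, choose
# a motor command. Threshold and command names are hypothetical.

FRAME_WIDTH = 160          # low-resolution frame used for vision
CENTER_BAND = 20           # tolerance (pixels) around the frame centre

def follow_blob(blob_x):
    """Return 'left', 'right' or 'forward' from the blob's x coordinate."""
    center = FRAME_WIDTH / 2
    if blob_x < center - CENTER_BAND:
        return "left"      # blob on the left: turn towards it
    if blob_x > center + CENTER_BAND:
        return "right"     # blob on the right: turn towards it
    return "forward"       # roughly centred: keep moving

# On the robot, blob_x would come from an OpenCV pipeline, e.g. colour
# thresholding followed by contour/blob detection on the captured frame.
```

A downward-facing camera turns the same logic into a line follower: the "blob" becomes the line's position in the bottom rows of the frame.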

These are advanced algorithms, which are likely to be out of the reach of Primary School children. However, other camera-based didactic activities may be more suitable for young students.

As in the “Mission to Mars” activity described later, children can be asked to tele-operate a remote CoderBot using camera views to obtain sensory feedback.

Third, contrary to many commercially available educational robots, CoderBot 1.0 (and all the future versions) is fully open source. It is built on top of consumer-available and inexpensive hardware such as the Raspberry PI computer, and uses open-source programming libraries (such as Google’s Blockly library). All the hardware and software schematics, as well as the whole software, are currently available on a dedicated website (http://coderbot.org). This ensures that the robot can be customized for a wide variety of needs, and contributes to keeping the cost of the whole system low. As described in Section “Engineering Coderbot 2.0”, we are planning to expand the website into a community for exchanging ideas and didactic projects concerning the next releases of CoderBot.

Mission to Mars game

To exemplify the use of CoderBot 1.0 with Primary School children, we briefly describe here a pilot laboratory carried out in May 2015 in a 5th-grade Primary School class in Milan, Italy.

The so-called “Mission to Mars” laboratory lasted three one-hour sessions, held on different days. It was meant to stimulate the acquisition of cross-disciplinary logical/abstract/scientific reasoning and problem-solving abilities, such as the ability to observe; to systematize diverse observations into a coherent framework; to formulate explanatory hypotheses; to perform inductions and deductions; to justify hypotheses; to identify and analyze errors; to revise hypotheses; to properly understand problems; to decompose the main problem into sub-problems; to formulate a plan to solve the problem; to evaluate the plan before execution; to properly execute the plan; to evaluate the results; and, again, to identify and analyze errors and revise the plan.

The structure of the laboratory has been carefully planned in the weeks preceding the first session, so as to make it functional to these learning objectives. The game “Mission to Mars” has been structured as follows.

“Martian” playground

A detailed satellite picture of the Martian territory, printed on a large (200cm × 140cm) sheet of paper, was placed in a room adjacent to the classroom. CoderBot was put on it, in a pre-defined starting position, with its camera oriented slightly (30°) downward so that it could acquire images of a restricted portion of the Martian territory ahead of it (when CoderBot is in the starting position, the camera frames a little picture of a spaceship). Three pictures of aliens were placed somewhere on the Martian territory, well outside CoderBot’s field of view. The children were not allowed to go into the adjacent room and see the scenario: they could only see what CoderBot’s camera “saw”, broadcast through the Wi-Fi connection.

Initially, then, they had no information on the structure of the territory and did not know where CoderBot (and thus the spaceship) was relative to the aliens. The goal of the game was to find all three aliens by remotely piloting CoderBot. This game is clearly reminiscent of the recent robot-supported space missions exploring the territory of Mars.

The programming interface was customized so as to show only four kinds of commands (go forward, go backward, turn left, turn right; see Section “CoderBot 1.0” on the possibility of customizing the web-based control interface). Familiarization with a richer set of commands, and with access to sensory and internal variables, would have required much more time than was available to us. At the same time, even such a small and apparently trivial set of commands demands considerable abstract reasoning and problem-solving effort. This constraint on the space of possible programs, made possible by CoderBot’s customization functionality (which is not provided, or only poorly provided, by other educational robot programming environments, e.g., the LEGO Mindstorms) and imposed by the little time available, was thus fully functional to the learning objectives of the laboratory and avoided unnecessary “cognitive load” on the students.
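The reasoning the children perform over these four commands is essentially dead reckoning: mentally tracking the robot's pose as each command executes. A minimal simulation of that process can be written in a few lines. Note the idealizations: each "forward" is assumed to cover exactly one grid unit and each turn to be exactly 90°, whereas the real robot's timed movements are less precise.

```python
# Idealized simulation of the four commands available in the game, useful
# for tracing what a 5-command plan would do on the map. Assumes unit
# steps and exact 90-degree turns, an idealization of the real robot.

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W as (dx, dy)

def run_plan(commands, x=0, y=0, heading=0):
    """Execute a list of commands and return the final (x, y, heading)."""
    for cmd in commands:
        if cmd == "forward":
            dx, dy = HEADINGS[heading]
            x, y = x + dx, y + dy
        elif cmd == "backward":
            dx, dy = HEADINGS[heading]
            x, y = x - dx, y - dy
        elif cmd == "turn_right":
            heading = (heading + 1) % 4   # rotate clockwise
        elif cmd == "turn_left":
            heading = (heading - 1) % 4   # rotate counter-clockwise
        else:
            raise ValueError(f"unknown command: {cmd}")
    return x, y, heading
```

For instance, the plan `["forward", "forward", "turn_right", "forward"]`, starting at the origin facing north, leaves the robot one unit east and two units north, facing east — exactly the kind of trace the groups had to reconstruct in their heads from the camera stream.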

The class was arranged into four groups. At the start of each trial, all the groups were given a few minutes to formulate a possible sequence of (at most 5) commands. Then, in turn, each group was asked to code the program in the programming environment. Before each program execution, the teacher or the laboratory coordinator asked the group “Why did you choose this program?” or an analogous question (e.g., “Why did you place two forward commands at the beginning of the program?”). These questions were meant to stimulate the group to provide a rational justification for their programming choices and, in some cases, had the effect of making explicit the lines of reasoning that the children had followed to formulate the program. Then the program was executed, while CoderBot’s visual stream was visible on the monitor.

After execution, the group was asked to say whether CoderBot had hit one of the three aliens. In nearly all cases, the children spontaneously started to make hypotheses on the movements made by CoderBot and on the structure of the Martian territory (by integrating all the observations made before with the movements of the robot).

Those hypotheses and the acquired information were then taken into account in the subsequent reasoning activity, leading to the formulation of a new 5-command plan by the following group.

All the verbal interactions among children and between children and teachers/supervisors were recorded for a future qualitative, grounded-theory-inspired [5] analysis, aimed at 1) assessing the abstract reasoning and problem-solving abilities exhibited by the children during the game, and 2) identifying correlations between specific kinds of interventions by the teachers/supervisors and the manifestation of specific kinds of abstract reasoning and problem-solving ability by the children. The results of this analysis will be discussed in a separate paper: a more detailed reflection on the “Mission to Mars” game is out of the scope of this section, whose goal was to exemplify how CoderBot (and its camera) can actually be used to pursue some of the cross-disciplinary learning objectives of Primary School instruction.

Engineering CoderBot 2.0

In this article we have described CoderBot 1.0, a fresh, inexpensive, open-source and easily programmable educational robot designed for Primary Schools. To conclude, we briefly mention that a 2.0 version of the robot is currently under development.

The new hardware and software characteristics of the system are being chosen starting from a selection of the learning objectives whose achievement we want CoderBot 2.0 to support. For example, games in which children have to program the robot to draw polygons or to drive certain distances may contribute to improving students’ ability to measure distances and angles, which is among the learning objectives of Primary School instruction.

To support such activities, CoderBot 2.0 must possess some technical features absent in the current version, including encoders to measure the exact distance travelled by the robot and pen accessories to draw lines during robot movements.
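The geometry behind such a polygon-drawing exercise is simple enough to sketch: at each vertex of a regular n-sided polygon the robot must turn by the exterior angle (360°/n), and with wheel encoders a travel distance translates into a tick count. The wheel diameter and ticks-per-revolution values below are hypothetical examples, not CoderBot 2.0 specifications.

```python
# Sketch of the geometry behind a polygon-drawing exercise: how much to
# turn at each vertex of a regular n-gon, and how a travel distance maps
# to wheel-encoder ticks. Wheel diameter and ticks-per-revolution are
# hypothetical example values.
import math

def polygon_turn_angle(sides):
    """Exterior angle (degrees) the robot must turn at each vertex."""
    if sides < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return 360.0 / sides

def distance_to_ticks(distance_cm, wheel_diameter_cm=6.5, ticks_per_rev=20):
    """Convert a travel distance into a rounded encoder tick count."""
    circumference = math.pi * wheel_diameter_cm   # cm per wheel revolution
    return round(distance_cm / circumference * ticks_per_rev)
```

Drawing a square, for instance, means alternating four straight segments with four 90° turns — and checking that the pen trace closes is itself a measurable learning outcome.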

This “learning-objective-based” approach — which starts from a selection of learning objectives, proceeds with the description of didactic activities meant to pursue those objectives, and finally identifies the required features of the robot — will contribute to making CoderBot 2.0 a useful tool for Primary School curricular instruction.

An online catalog of didactical projects is also currently under development, which will allow users to share and discuss their CoderBot didactic projects. All these features will make CoderBot 2.0 not only a robot, but rather a set of robot-supported tools and contents to pursue the learning objectives of Primary School instruction.

Authors: Roberto Previtera, Emiliana Murgia, Marco Goffi, Giulia Colonna, Edoardo Datteri

References

[1] O. Mubin, C. J. Stevens, S. Shahid, A. Al Mahmud, and J.-J. Dong, “A Review of the Applicability of Robots in Education,” Technol. Educ. Learn., pp. 1–7, 2013.

[2] F. B. V. Benitti, “Exploring the educational potential of robotics in schools: A systematic review,” Comput. Educ., vol. 58, no. 3, pp. 978–988, 2012.

[3] A. Bredenfeld, A. Hofmann, and G. Steinbauer, “Robotics in Education Initiatives in Europe: Status, Shortcomings and Open Questions,” in Proceedings of SIMPAR 2010 Workshops, International Conference on Simulation, Modeling and Programming for Autonomous Robots, 2010, pp. 568–574.

[4] T. Sapounidis and S. Demetriadis, “Tangible versus graphical user interfaces for robot programming: Exploring cross-age children’s preferences,” Pers. Ubiquitous Comput., vol. 17, no. 8, pp. 1775–1786, 2013.

[5] K. Charmaz, Constructing grounded theory: A practical guide through qualitative research. 2006.

[6] LEGO and LEGO MINDSTORMS are trademarks of the LEGO Group.
