Prototyping a scaled model of a bike and a camera mount, and visualising the display feedback for the bike-rider

Vineeta Rath
Srishti Labs
Jul 25, 2018

My first real interaction with ReRide was about two weeks back, when I participated in one of the experiments the team was conducting to understand how to position their Force Sensitive Resistors (FSRs) on a bike-rider’s seat to track his/her posture in real time. The iterative nature of how the team was building and testing their evolving prototype got me extremely interested in the project.

Initially, I started helping out with the industrial design aspects of the project along with Chakra, an Industrial Design student at Srishti, who was already working to build a physical, scaled model of a bike to enable the mounting and testing of the electronic prototypes that the team was developing. After joining the ReRide team, I signed up to work on visualising the Display Feedback for the bike-rider as well. Thus, this post is a documentation of the progress on both:

A: The making of a quick-and-dirty scaled model of a bike to enable iterative testing and finalisation of the location of the Force Sensitive Resistors (FSRs), followed by the making of a mount for the camera, and,

B: A two-hour visualisation exercise that Anchit and I did to revisit the learnings from working on the demo at INTERACT 2017 and after, to help me come up to speed on the progress made so far, and to understand the current needs and possibilities in visualising the display feedback for the bike-rider.

A: Quick-and-dirty scaled model of bike and the camera mount

The current ReRide system collects the rider’s postural information by analysing data from two primary inputs: the upper-body sensing unit (camera-based) and the weight-distribution unit (FSR-based). The camera-based unit detects certain pre-fixed markers on the rider’s body and sends the coordinates of those markers. From these coordinates, parameters such as the proximity of the head to the camera, head tilt, and shoulder tilt can be calculated. The FSR-based sensing unit gives us the weight distribution, and by combining the data from the camera and the FSRs we get the forward/backward and left/right lean.
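As a rough illustration of how such parameters might be derived from marker coordinates, here is a minimal sketch. The function names, the pixel-coordinate convention, and the reference-width heuristic are all my own assumptions, not the team’s actual code:

```python
import math

def shoulder_tilt(left, right):
    """Angle (degrees) of the line joining the two shoulder markers.

    `left` and `right` are (x, y) pixel coordinates of the shoulder
    markers as reported by the camera unit; 0 means level shoulders.
    """
    dx = right[0] - left[0]
    dy = right[1] - left[1]
    return math.degrees(math.atan2(dy, dx))

def head_proximity(marker_width_px, ref_width_px):
    """Rough relative proximity of the head to the camera.

    A marker that appears wider than its calibrated reference size is
    closer to the camera; a ratio > 1 suggests the rider is leaning in.
    """
    return marker_width_px / ref_width_px
```

For instance, shoulder markers at (0, 0) and (10, 10) would give a 45-degree tilt; the same kind of geometry extends to head tilt with a pair of head markers.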

Thus, the purpose of this quick-and-dirty mock-up was to be able to iterate on the combined working of the camera and FSRs before rigging them onto a rider and testing real-time.

For this, Chakra had modularly put together various parts of a standard bike. Initially the front handlebar was movable, but the whole set-up, being made of cardboard, was not stable enough. Later, we adapted parts of an old swivel chair to mimic the turning motion of a bike’s front handle, and used PVC pipes for the handles. Chakra also put together a fuel-tank and windshield out of cardboard and mounted it all onto a standard stool. The rider’s seat, taken from a Royal Enfield 350 Classic, was similarly affixed onto another standard stool.

The quick-and-dirty scaled model of a standard bike for testing

To ascertain the most efficient configuration of the six FSRs, weight distribution on the seat was mapped by making a participant sit on the seat, and collecting multiple seat-impressions. By comparing these various pressure maps, specific regions were demarcated for further testing by actual placement of FSRs on the bike.

Comparison of weight distribution across multiple seat-impressions using pressure maps

For the camera, the initial plan was to mount it onto the windshield of the bike, since that is the highest point of the bike’s frontage and adequately far away to get a good field of view of the rider’s face and body. However, any movement of the handlebar would turn the windshield, and with it the camera, changing its orientation and losing track of the markers, so this idea was dropped. Instead, a ‘pouch-like’ camera mount that could rest on the fuel-tank, the next highest stable point, was prototyped. This mount attaches onto the fuel-tank and consists of an angled, elevated protrusion onto which the camera can be strapped. It also allows room for the Raspberry Pi boards, battery, etc.

Camera mount we built, with a mobile phone fitted onto it for quick-testing (later replaced by a Raspberry Pi)

We are now envisioning the whole setup as a single unit: imagine a long sheet of fabric with a pouch-like portion that fits onto the fuel-tank and holds the camera, and that extends to become the seat cover containing the FSRs, which are wired back to the Raspberry Pi board housed in the pouch.

B: Visualising the Display Feedback

As mentioned earlier, Anchit and I decided to have a quick two-hour display visualisation exercise to revisit the learnings from working on the demo at INTERACT 2017 and after, in order to come up to speed on the progress and data available so far, and to get a clearer understanding of the current visualisation requirements and possibilities.

To begin with, we asked this very fundamental question:

1: What is the purpose of displaying any data visually to our bike rider?

What we finally articulated captured, in many ways, the larger vision for ReRide: visualising what the bike-rider is doing in real time, without attaching any label of ‘good’ or ‘bad’ just yet, by making the rider’s own posture ‘visible’ to him/her. Over time, this visual feedback would aid the rider’s intuitive (or subconscious) adaptation to a healthier posture.

We then moved on to ask a few more questions like:

2: What really constitutes posture in the context of our bike-rider? Which aspects are important to us?

3: What can affect the posture of a user?

4: What data can we possibly extract from our camera and FSRs? And which of these would be potentially useful?

5: How can we simplify the visualisation further?

To answer these, we quickly mapped out the various possibilities under each.

Quick mapping to understand posture, factors that affect it, and data points that could be potentially useful

2: What really constitutes posture in the context of our bike-rider? Which aspects are important to us?

For this, we called out the various terms such as ‘lean’, ‘turn’, ‘tilt’, ‘twist’, etc. that the team had been using, at times interchangeably, to refer to the many shifts and changes in the posture of a bike-rider. A need was felt to understand each of these terms more closely, and standardise them for future use, based on their suitability to our context. So, we went back to their literal/dictionary meanings to better grasp the origin of each term and context of use. We also fixed our own axes of reference, and drew out what each word referred to, to more clearly distinguish between the various terms. ‘Lean’ and ‘stoop’ emerged to be two of the most important aspects of a bike-rider’s posture that were being tracked by our system.

A quick sketch to understand and subsequently standardise terms used to refer to different aspects of a bike-rider’s posture

We also decided to stick to a standardised frame of reference, and fixed the axes as per the ISO standard coordinate system that is used to measure the human exposure to whole body vibration (Source: ISO 2631–1:1997) as in the following diagram:

Coordinate System to measure human exposure to whole body vibration (Source: ISO 2631–1:1997)

Thus, from now on the following words shall mean as defined below:

  • Roll — Rotation around the front-to-back axis. (Bike-rider’s leaning/tilt/stoop towards left or right in seated position)
  • Pitch — Rotation around the side-to-side axis. (Bike-rider’s lean/tilt/stoop towards the front or back in seated position)
  • Yaw — Rotation around the vertical axis. (Bike-rider’s twist/turn about his/her spine in seated position)

‘Lean’, in general, implies an incline about a fixed pivot point: imagine a straight line pivoted at one end of another line, closing in on it, with the angle between them at the pivot reducing.

‘Stoop’, however, refers to bending both forward and downward at the same time: if someone stoops, their head and shoulders are bent forward and down, or the top half of their body is bent forward and down. ‘Stoop’ is thus a more complex motion than lean, and harder to capture or describe.

3: What all can affect the posture of a user?

The posture of a user could be affected by either the bike configuration, self/body configuration, or the environment. Hence, we needed to look at two essential kinds of interactions:

a) Bike-rider’s interaction with the bike.
b) Bike and bike-rider as a single unit, interacting with Environment.

During this process, two questions also emerged:

a) What is the ‘most comfortable posture’, when is it set, and how reliable is it?
b) Can we zoom out, look at an entire journey, and recommend a different route altogether for better posture on the road?

4: What data can we possibly extract from our camera and FSRs? And which of these would be potentially useful?

This discussion helped me understand how, with the use of specific configurations, boundary limits, and some mathematical calculations, the team was making sense of the combined data from the camera and FSRs.

5: How can we simplify the visualisation further?

We also recognised the need to really simplify the way we present real-time data to the bike-rider, since it is intended only to be glanced at during the ride, and actively attended to only during longer halts, such as at traffic stops.

Towards this, Anchit had already started iterating on a visual representation for displaying the instantaneous weight concentration. It consists of a cross-hair, with a large moving dot in the centre. This dot represents the direction of lean of the instantaneous weight concentration. The distance from the centre represents the magnitude of the lean.

Instantaneous weight concentration shown as a moving dot with respect to the centre

This moving dot is mapped from a set of averages calculated over a specific configuration of FSRs in each case. For example, the average values of the left and right sensors are compared to map the dot against the vertical line of the cross-hair, giving the left and right lean; the average of the first row and the average of the third row of sensors are compared to give the forward and backward lean.
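Assuming a 3×2 grid of six FSRs (rows front-to-back, columns left-to-right; the actual seat layout may differ), this averaging could be sketched roughly as:

```python
def dot_position(fsr):
    """Map six FSR readings to the moving dot's (x, y) offset.

    `fsr` is a 3x2 grid of readings: three rows front-to-back, two
    columns left-to-right (this layout is an assumption on my part).
    Each axis is normalised to [-1, 1], with (0, 0) meaning the
    weight is evenly distributed.
    """
    left  = sum(row[0] for row in fsr) / 3   # column averages
    right = sum(row[1] for row in fsr) / 3
    front = sum(fsr[0]) / 2                  # first-row average
    back  = sum(fsr[2]) / 2                  # third-row average

    lr_total = left + right
    fb_total = front + back
    x = (right - left) / lr_total if lr_total else 0.0  # left/right lean
    y = (front - back) / fb_total if fb_total else 0.0  # fwd/back lean
    return x, y
```

A perfectly balanced grid of readings yields the dot at the centre; a right-heavy grid pushes x positive, which the display would render as a dot displaced to the right of the cross-hair.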

We brainstormed other simplified forms of visual feedback as well; some of these are described below.

Idea 1: Direct mapping of the posture indicated through point density in the direction of lean

Idea 1:

Superimposition of a simulated top view of the bike-rider, composed of points, onto a reference grid (similar to the reference grid of the moving dot), with the weight concentration indicated by the density of points over a specific body region. On reflection, we realised this may be difficult to achieve, since we have no sensors beyond the seat tracking data on the rider’s body, which would render the visualisation slightly inaccurate.

Idea 2: Direct mapping of the posture indicated through line density and thickness in the direction of lean

Idea 2:

Display an increase in the density of lines on a reference axis, in the direction of the lean. A direct mapping such as this avoids cognitive load for the user while riding. Further, a left or right lean could be indicated by a thickening of the line towards the lean direction, which can also indicate the relative location of the weight concentration with respect to the initial/rest posture.

Idea 3: Display of relative time spent in the current posture with respect to the previous one

Idea 3:

An insight that got us thinking was that showing an arbitrary duration of being in the current posture as a timestamp may not be useful to the bike-rider by itself. What might be more useful is a relative sense of the duration of one’s current posture with respect to the previous one. This led us to the idea of showing the real-time duration of the current posture relative to the duration of the posture the rider held just before it; in essence, a visualisation of ‘relative time spent’ instead of an arbitrary ‘absolute timestamp’. The segment on the left is a history of the entire ride in a similar format, intentionally kept to one side while the real-time tracking takes up the largest segment of the screen.
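The ‘relative time spent’ idea could be sketched as a simple scaling of posture durations into proportional screen segments. This is only an illustration of the concept, not the team’s implementation; the segment widths and screen size are assumptions:

```python
def segment_widths(durations, total_px=320):
    """Scale a list of posture durations (seconds) into pixel widths
    summing to roughly `total_px`, so the ride history can be drawn
    as proportional segments rather than absolute timestamps.

    `total_px` is an assumed display width for the history strip.
    """
    total = sum(durations)
    if total == 0:
        return [0] * len(durations)
    return [round(d / total * total_px) for d in durations]
```

For example, a rider who held the previous posture for 10 seconds and the current one for 30 would see the current segment drawn three times as wide, conveying ‘relative time spent’ at a glance.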

Moving forward, I plan to iterate further on the various visualisation ideas we have, detail them, and then test them out with users.

