Not a House but a Home: Using Online Learning to Bridge Educational Gaps

A'di Dust
Published in Macalester HCI · 12 min read · May 4, 2023

By: A'di Dust and Abigail Gunther

Screen with 3D house and written instructions.

Today, teachers are tasked with helping students build a strong foundation so that they can thrive in the real world. In the aftermath of COVID, however, much of that foundation-building has moved to online learning. Online lessons often rely on slide shows, worksheets, and quizzes, leaving behind skills that are best taught through interactive experiences, such as spatial thinking and social-emotional learning (SEL). Even more concerning, these two skill sets are already unevenly distributed across the population due to differences in access and stereotypes about children’s play. These disparities are further compounded by other factors like race and class.

So what can be done to help teachers bridge this accessibility gap and engage students in these topics? We chose to explore this question by creating a virtual tool that was easy to use for teachers and engaging for students. Our final design is a program for middle school students that incorporates map instructions, 2D-to-3D visualization, and conflict resolution skills in support of spatial thinking and SEL skill building.

Our prototype can be found here: https://adidust4.github.io/SEL-and-Spatial-Thinking/

Understanding the Problem Space

If current world events at the time of writing have taught us anything, it’s that school learning is an inherently involved and intimate space where the interests of many parties intersect. We thought it only right, then, to begin by considering what was important to teachers and students when it came to learning. Since both groups are direct stakeholders, we made sure to involve them in our discussion to get as comprehensive a view as possible. This process, known as value-sensitive design, revealed the following values:

  • Teachers: student engagement, ease of use, teacher involvement, efficiency of learning material, structure.
  • Students: fun, accessibility, engagement, independence, accomplishment/achievement, imagination.

The values of engagement, accessibility, ease of use, and efficiency were particularly helpful in deciding how we wanted to organize and present our product. Trying to decide between a full VR experience and a more traditional online module format, we applied these values to the two methods and created a list of pros and cons.

  • VR: engaging and well suited to teaching spatial thinking, but it may require special technology that is costly to obtain and maintain.
  • Traditional online module: easier to figure out and only requires a computer and internet access, but its unimaginative format risks being boring for students.

Since VR would excel at teaching one of the skills but a more traditional format was much more accessible, we chose to combine the two: we would build on a traditional online module format as the foundation, with sections of embedded VR that allow the same kind of 3D interactive experience on a smaller scale.
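To give a concrete sense of what this hybrid looks like, below is a minimal sketch of a small 3D scene embedded inside an otherwise ordinary module page, written with A-Frame (the web framework we ended up using, introduced later in this post). The entities, sizes, and colors are illustrative placeholders rather than our actual assets.

```html
<!-- Minimal sketch: an ordinary HTML module page with a small embedded 3D scene.
     The "embedded" attribute keeps the A-Frame scene inside the page layout
     instead of taking over the whole screen. All entities are placeholders. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <h1>Lesson: Reading a blueprint</h1>
    <p>Ordinary module content (text, images, quiz questions) goes here.</p>

    <a-scene embedded style="width: 600px; height: 400px;">
      <!-- stand-in for the completed 3D house the student can inspect -->
      <a-box position="0 1 -4" width="2" height="1.5" depth="2" color="#8B5A2B"></a-box>
      <!-- stand-in for the flat 2D blueprint laid out on the "ground" -->
      <a-plane position="0 0 -4" rotation="-90 0 0" width="3" height="3" color="#CCCCCC"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```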

Building a “Curriculum”

Once we figured out the medium our product would take, we turned our attention to coming up with specific activities that would simulate online teaching of SEL and spatial thinking.

To get a better understanding of what goes into teaching and assessing SEL and spatial thinking, we first looked into existing activities and tests. Particularly helpful were this journal article on SEL and this paper on testing spatial thinking. Once we had an idea of the different parts that made up each skill set, we narrowed them down to 1–2 components each that we wanted to focus on.

For spatial thinking, we chose “comprehending orientation and direction” and “transforming perceptions, representations, and images from one dimension to another,” because both components already had well-defined activities: navigating maps and comparing 2D layouts to a 3D model. In our case, we used a house blueprint that the user could “fold up” and compare to the completed 3D model.

Left: a map drawn for the first component, orientation and direction, while planning the obstacles and the correct path our users would have to navigate. Right: an example box house made of paper, used to experiment with transforming features from 2D to 3D.

As for SEL, we wanted to focus on conflict resolution because it would fit well with the spatial thinking component. If we created a story where two characters argued over what their house should look like, then our user could be asked to mediate their arguments. We planned for the characters to present different blueprints that produced the same house, which would simultaneously reinforce the transforming-perceptions component of spatial thinking and set up a larger conflict-resolution lesson: sometimes both sides can be right.

We had three ideas as to how our user would mediate conflict. These were

  1. allowing users to think about what they would do without providing any physical interaction;
  2. giving users preset solutions to choose from; and
  3. allowing users to type their solutions.

While ideas 1 and 3 allowed the greatest freedom for reflection and conflict resolution, we couldn’t be sure our users wouldn’t simply skip through those sections without stopping to genuinely think about their responses. We therefore settled on idea 2, which, while we worried it might be too limiting, provided enough structure to ensure user engagement.

An early diagram for our prototype that integrates the house-building component of spatial thinking with the conflict resolution component of SEL.

Creating the Prototype

Once all of the planning was completed, it was finally time to work on the prototypes. We began with paper prototypes of the two blueprints, which allowed us to fold them up as we planned for the final prototype. These two prototypes were then tested with four students in a middle school coding club.

Using the two blueprints and a wooden 3D solution model, we asked each student to follow steps similar to those they would take in the online module. Since there were two house plans, half of the students started with house plan A and the other half with house plan B.

Paper prototype of house plan A (left) and house plan B (right) with Post-Its placed. The balsa wood model with marker drawings (middle) is the 3D solution house.

Most of the feedback from the students indicated that they enjoyed the activities. The students used words like “challenging” and “fun” to describe the activity, and after testing one student even tried to convince the whole club to test it because they had so much fun.

We also found that the order in which the students attempted each house map mattered. Those that began with plan A found it challenging but felt accomplished after completing the second. Those that started with B, however, expressed frustration about being able to get the first plan but not the second. Thus, putting the more challenging map first was the best way to balance a sense of challenge and a sense of accomplishment.

A student folding up the 2D house map next to the 3D solution.

Other takeaways involved the kinds of instructions that were given. If a student was directed to interact with the 2D map (e.g., to trace their fingers along the surface), they were able to solve the problem quickly and gain a better understanding of how 2D maps to 3D. This posed a challenge for our intended solution, since it is difficult to encourage physical interaction with a screen. However, we were confident that our use of embedded VR for the solution house would help bridge this gap between interacting with physical models and interacting with digital ones.

Once we had student feedback, we set to work coding it up with the help of A-Frame, a web framework for building VR and other 3D environments. Here is an in-progress demo video of our prototype as we worked through implementing the house activity:

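For readers curious what building in A-Frame looks like, here is a rough, simplified sketch of the kind of click-to-place interaction the house activity depends on. This is not our actual code: the component name, class, and geometry are hypothetical, and it simply drops a small box wherever the user clicks on a “wall.”

```html
<!-- Rough sketch of a click-to-place interaction: clicking a "wall" of the
     house attaches a small "window" box at the clicked point. The component
     name, class, and geometry are hypothetical stand-ins. -->
<script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
<script>
  AFRAME.registerComponent('place-feature', {
    init: function () {
      var el = this.el;
      el.addEventListener('click', function (evt) {
        var intersection = evt.detail.intersection;
        if (!intersection) { return; }
        var feature = document.createElement('a-box');
        feature.setAttribute('position', intersection.point); // world-space click position
        feature.setAttribute('width', 0.4);
        feature.setAttribute('height', 0.4);
        feature.setAttribute('depth', 0.05);
        feature.setAttribute('color', '#4A90D9');
        el.sceneEl.appendChild(feature);
      });
    }
  });
</script>

<!-- cursor="rayOrigin: mouse" lets ordinary mouse clicks raycast into the scene -->
<a-scene embedded cursor="rayOrigin: mouse" raycaster="objects: .clickable"
         style="width: 600px; height: 400px;">
  <a-box class="clickable" place-feature position="0 1 -3"
         width="3" height="2" depth="0.1" color="#C9A66B"></a-box>
  <a-sky color="#ECECEC"></a-sky>
</a-scene>
```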
As we worked on our prototype, however, we realized we would have to make some concessions. Although we had initially planned for the user to work through two blueprints, we realized that with the size and experience of our team, it would be unrealistic. We could implement two blueprints that only partially worked, or implement one that fully worked. Feeling that partially functioning activities were detrimental to our goal of creating a tool that teachers would find useful, we chose to work with only one blueprint.

We also found that the logic and asset creation needed to make the blueprint “fold up” at any point during the process were too complex and time-consuming, so we abandoned that idea as well. We made this concession in the hope that always having access to the completed 3D model would make up for the lack of folding; while our users may not be able to watch their own 2D design translate into 3D space, they are at least never without a 3D reference.

A technical limitation also meant that we were not able to make our map interactive in the way we had first imagined: given directions, the user would click on streets to highlight the correct path. When we realized this, we considered asking the user to locate a specific place on the map, similar to locating and placing features in the house activity. However, we were worried that this would make the two activities too similar and shift the focus to be less on spatial thinking and more on point-and-click gaming.

Further Testing

Once we had a working online prototype, we could also ask teachers to test it. For this particular user group, we were less concerned about the functionality and more about the content: how well did our program support spatial thinking skill-building? How well did it support SEL skill-building? Would they find a tool such as ours useful in online learning environments? Would they themselves use it?

We remotely contacted a non-tech-savvy middle school teacher and a tech-savvy substitute teacher to help us understand their perspectives on our prototype, sending them the link and observing over the phone as they interacted with it. Some of our observations are listed here:

  • It was not clear to them that they were resolving conflict in our SEL component, the story. They did not consider the arguments we had written to be actual conflict, and none of the provided choices made clear which one was the most SEL-conscious.
  • While one teacher thought our prototype handled spatial thinking well, the other made a comment that “[our prototype] doesn’t have anything to do with spatial thinking but looking at the 3D model and matching it.”
  • One expected the blueprint to fold up once all of the features were placed.
  • They liked the characters, particularly their names.
  • They immediately understood how to use the 3D model and loved its inclusion.
  • They felt the tool would be useful if it had more content.

An interesting note is the teacher’s comment in the second point. Definitions of SEL and spatial thinking were not provided before testing, and what that teacher describes is in fact part of spatial thinking (transforming perceptions between dimensions); they had simply conceptualized it as a logic-matching activity. It was also interesting that one of the teachers expected the blueprint to fold up, which we had at one point intended to happen. This gave us confidence that, even if teachers were not very familiar with the two concepts and even if we were unable to implement certain things, we were on the right path and our prototype was still doing its job.

In the second round of student testing, the same four students from the first round participated, along with one student who had not tested before. That new student actually ended up needing the least guidance using the tool because they read the instructions more thoroughly. The following are some takeaways from student testing:

  • Kids skimmed the instructions, so when we let them continue the dialogue without selecting a part of the house, they lost the context. With no back button, they had to start over each time they missed an instruction.
  • The students are given iPads by their school, so they were unfamiliar with using a laptop. We tried testing on iPads, but the solution house could not be rotated. This device gap is important to keep in mind when designing technology for the right experience on the right device.
  • Some of the wording was too advanced for middle school students. For example, the dialogue included the phrase “to give the house more character,” which was misinterpreted as needing to add animal characters to the house.
  • The students also didn’t understand that the dialogue texts were clickable. Making each piece of clickable text a more obvious button would be better, even if it is less aesthetically pleasing.

There were also comments on their enjoyment of the game.

  • Even though the map wasn’t interactive, one student mentioned that they love maps and thought ours was “really cool.”
  • Each student said that they enjoyed the game and encouraged other members of the class to participate in testing because it was “fun.”
  • There was quite a bit of laughter about the dialogue. Kids found the Fluffikins’ names funny and also enjoyed touches like the words “fish sticks” and the extended-relative relationship at the beginning of the game.

With a more informed view of our design, we turned our attention to improving it. Certain aspects received similar feedback from students and teachers alike: both groups were confused by instructions whose pacing did not line up with the placement of the features. To address this, we added a check that prevents the user from continuing through the instructions until the current feature has been placed. Further, the non-tech-savvy teacher and the students alike were unsure which parts of the textbox were interactive. Although clickable elements changed color when hovered over, it wasn’t obvious that they should be hovered over at all, so we added a black outline around all clickable text to draw attention to it.
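As a sketch of that gating logic (the element IDs and the custom event name are hypothetical, not our exact implementation), the idea is just to ignore clicks on the advance arrow until the scene reports that a feature has been placed:

```javascript
// Sketch of the gating check: the dialogue cannot advance until the expected
// feature has been placed. Element IDs and the 'feature-placed' event are
// hypothetical stand-ins for our actual implementation.
var featurePlaced = false;
var nextArrow = document.querySelector('#next-arrow');

function advanceDialogue() {
  // In the real prototype this shows the next line of the story.
  console.log('showing next instruction');
}

// Assume the A-Frame scene emits a custom 'feature-placed' event when the
// student places a feature on the blueprint.
document.querySelector('a-scene').addEventListener('feature-placed', function () {
  featurePlaced = true;
  nextArrow.classList.remove('disabled');
});

nextArrow.addEventListener('click', function () {
  if (!featurePlaced) {
    return; // ignore clicks until the required feature is placed
  }
  featurePlaced = false; // reset for the next instruction that needs a placement
  advanceDialogue();
});
```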

Final Prototype

Our final prototype greets the user with a landing page that briefly explains our project so users aren’t immediately thrown into the activities. Because of a bug in A-Frame affecting interaction with embedded assets, the house blueprint only responds some of the time, so we added a note to warn users about it.

The landing page welcomes users to our project and provides some background information, as well as a note about an existing bug.

The map is retained but the story dialogue does not prompt the user to interact with it. Instead, the main activity is placing features on a house. During this, the user is able to toggle between the rotatable completed house model and the 2D blueprint whenever they want with the click of a button.
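A toggle like this can be wired up with a few lines of JavaScript. The sketch below assumes two A-Frame entities with the hypothetical IDs solution-house and blueprint; it is illustrative rather than our exact code.

```html
<!-- Sketch of the view toggle: one button flips visibility between the 3D
     solution house and the 2D blueprint. IDs are hypothetical placeholders. -->
<button id="toggle-view">Switch between 3D house and blueprint</button>
<script>
  document.querySelector('#toggle-view').addEventListener('click', function () {
    var house = document.querySelector('#solution-house');
    var blueprint = document.querySelector('#blueprint');
    var houseVisible = house.getAttribute('visible'); // A-Frame returns a boolean here
    // Whichever view is currently shown gets hidden, and vice versa.
    house.setAttribute('visible', !houseVisible);
    blueprint.setAttribute('visible', houseVisible);
  });
</script>
```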

The story is communicated through dialogue in a textbox along the bottom of the screen. An arrow in the top right allows the user to move through the dialogue. Clickable elements are visually set apart from normal text with a black border and a color change. When the user is prompted with a question, the arrow controlling the dialogue flow disappears to ensure the user interacts with the characters. If the wrong choice is selected, the program loops back to the choices.
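To make that flow concrete, here is a simplified sketch of the dialogue logic. The story text, element IDs, and data shape are illustrative placeholders, not our actual script: the advance arrow hides whenever a question is on screen, and a wrong choice loops back to the same question.

```javascript
// Simplified sketch of the dialogue flow. The story text, element IDs, and
// data shape are illustrative placeholders, not our actual script.
var dialogue = [
  { text: "The Fluffikins can't agree on whose blueprint to build from." },
  {
    text: "Which blueprint should they use?",
    choices: [
      { text: "Only the first blueprint.", correct: false },
      { text: "Only the second blueprint.", correct: false },
      { text: "Either one: both blueprints make the same house.", correct: true }
    ]
  },
  { text: "Exactly! Both plans describe the same house, so nobody has to lose." }
];

var index = 0;
var textbox = document.querySelector('#dialogue-text');
var arrow = document.querySelector('#next-arrow');
var choiceBox = document.querySelector('#choices');

function showLine(i) {
  var line = dialogue[i];
  textbox.textContent = line.text;
  // Hide the advance arrow while a question is on screen so the user has to
  // answer before the story continues.
  arrow.style.display = line.choices ? 'none' : 'block';
  choiceBox.innerHTML = '';
  (line.choices || []).forEach(function (choice) {
    var btn = document.createElement('button');
    btn.textContent = choice.text;
    btn.addEventListener('click', function () {
      if (choice.correct) {
        index += 1;
        showLine(index); // correct answer: the story moves on
      } else {
        showLine(index); // wrong answer: loop back to the same question
      }
    });
    choiceBox.appendChild(btn);
  });
}

arrow.addEventListener('click', function () {
  if (index < dialogue.length - 1) {
    index += 1;
    showLine(index);
  }
});

showLine(index);
```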

One of the questions the user is asked. The user is given three choices, which are boxed and change to a darker blue when hovered over.

A Final Look Back

Despite the changes made to accommodate our small team and the technical issues we hit, our careful planning at the very beginning allowed our final prototype to stay true to our overall vision. Given more time, we would have loved to make the map interactive. User testing with teachers already familiar with SEL and spatial thinking would also be very interesting.

On that note, we acknowledge that we ended up focusing more on the spatial thinking component, which was much more technical. As a result, our prototype has a weak SEL component, which the teachers noted during user testing. A fix would not have been easy: to properly build conflict, we would need a longer story and more character building; to emphasize the conflict resolution, we would need more nuanced options (or perhaps no preset options at all, returning to one of our earlier open-ended ideas instead). Creating meaningful SEL content is difficult, and we imagine that even with months more we would still have found large areas for improvement.

Further, while we identified only teachers and students as our main stakeholders, as mentioned at the beginning there are many more parties involved: school administrations, parents, and local governments are all groups whose views and values were excluded from our design. Our design accounts for only a fraction of those perspectives, and it also necessarily assumes access to a computer and a stable internet connection.

Nonetheless, our brief foray into the online education space has really highlighted the gaps in education and the complexity of its digitalization. More than anything, we hope our prototype can be a demonstration of how we as a society can begin to approach bridging these gaps for the sake of our students.
