Meeting Room Booking System

Charlotte Li
jiaxinli92
Jun 18, 2020 · 22 min read


Type: Digital Screen
Timeline: 5 months (Jan.–May 2020)
Methods: User Interview, Persona, Journey Map, Competitor Analysis, Task Analysis, Prototype, Usability Test
Team: Jiaxin Li, Mats Guttormsen

01 Overview

This is a group project for the course IDG4113 User-Centered Design. The brief was to design a meeting room booking system for a physical device located outside a specific meeting room. This digital device will be the only way to book the room.

To gather insights into how users book a meeting room and into their pain points, we explored multiple research methods, namely interviews, affinity diagrams, personas, scenarios, user journey maps, brainstorming and competitor analysis. After we finished the prototype, heuristic evaluation and cognitive walkthrough were used to gather input from experts, and usability testing was conducted with end-users on several iterations of the prototype.

Background

Bright House is a company with its own building in Mustad Næringspark. The public meeting rooms in Bright House are open for booking by anyone with access to their webpage. Many small companies rent offices in Bright House; some have their own meeting rooms, while others cannot afford one. However, all of them can book the public meeting rooms.

After exploring all their public meeting rooms, we chose one room, Regionsalen, as our design target.

Fig.1 Regionsalen meeting room in Bright House

02 Problems

After having interviews with the end-users, we discovered several pain points with the current booking system at Bright House.

  1. Difficult to change a meeting, and impossible to change it more than once.
  2. Difficult to discover the option to book catering together with the room.
  3. Difficult to get equipment for the meeting and to fix technical problems during the meeting.

So, we decided to focus on:

How might we make it easier to change the booking multiple times?

How might we make it easier to book catering and equipment together with a meeting room?

03 Final Design

Click to see our final prototype!

PS: The home screen shows different states, for example green (available), orange (10 minutes before a meeting) and red (meeting in progress). To get to the correct state we have added a guidance page with four tasks. This page is not supposed to be in the final product. To go back to the guidance page, click on the status bar at the top.

3.1 Book New Meeting

Fig.2 Home screen

On the home page, the green circle clearly shows the room is available for 45 minutes. Beside the circle, there’s a list of upcoming meetings. Underneath there are three buttons: “book new meeting”, “change booking” and “schedule”. The status bar at the top shows the name of the meeting room, the date and the time. At the bottom, there’s a problem report button, where you can report problems with the projector, internet, microphone etc. There’s also a device settings button which allows the administrator to change some default settings.

Fig.3 Report problem page
Fig.4 Schedule screen

When you click on “schedule” or “book new meeting”, it takes you to the schedule screen. Here you can see when the meeting room is available within one week, and you can simply drag on a time slot to create a new meeting. In our prototype, however, you need to click on the time slot to make a new booking.

Fig.5 New meeting detail screen

After you choose a time slot, you arrive at the new meeting detail screen. On this page, you can select the date and time, change the meeting name, and add equipment, meeting members and catering. After you confirm the booking, an email with the booking information and a downloadable schedule will be sent to you and all the attendees. The organizer’s email will contain a catering payment link, which must be paid within two hours.

To book a new meeting, you can choose to log in with a Mustad card or an account; this way you don’t need to type in your personal information during booking, and you can choose attendees from your contact list. People who don’t work in Bright House can instead book a meeting without logging in; after they confirm the booking, they will receive a meeting code and an email. With the meeting code, they can change their meeting in the system and check in before the meeting starts.

3.2 Change Booking

Fig.6 Change the meeting page
Fig.7 Change the meeting page
Fig.8 Change meeting detail screen

For quick access to the change booking interface, we put a button labelled “change booking” on the home screen. To ensure security, it is required to log in or enter the meeting code to change the meeting. The change meeting interface was kept as similar to the booking detail interface as possible, to keep users familiar with the layout and to support their mental model. A booking without catering can be changed up to 1 hour before the meeting starts, while one with catering can only be changed up to 24 hours before the meeting starts.
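The time-limit rule above can be sketched as a small predicate. This is a hypothetical illustration; the function name and datetime-based interface are our assumptions, not the real system’s API:

```python
from datetime import datetime, timedelta

def can_change_booking(meeting_start: datetime, has_catering: bool,
                       now: datetime) -> bool:
    """True if the booking may still be changed at `now`.

    Bookings without catering can be changed up to 1 hour before the
    meeting starts; bookings with catering only up to 24 hours before.
    """
    cutoff = timedelta(hours=24) if has_catering else timedelta(hours=1)
    return now <= meeting_start - cutoff
```

For example, a 14:00 meeting without catering could still be changed at 12:30 but no longer at 13:30.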

3.3 Meeting start — end

Fig.9 Before meeting starts
Fig.10 During meeting

Users can check in, extend or end a meeting depending on the state of the home screen. If the meeting is about to start within 10 minutes, the interface turns orange and a “check-in” button appears. When the meeting starts, the screen turns red, and two buttons labelled “extend” and “end now” appear. The user can choose to extend for 30 minutes, 1 hour or 1.5 hours. To check in, end or extend a meeting, users have to log in, as we do not want random passers-by to be able to change the meeting.
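The colour states can be summarised as a small state function: green when the room is free, a warning colour within 10 minutes of the next meeting, red while a meeting is running. The function name and colour strings below are illustrative assumptions:

```python
def screen_state(in_meeting: bool, minutes_to_next) -> str:
    """Return the home-screen colour state.

    minutes_to_next is the time until the next booked meeting,
    or None if nothing is booked.
    """
    if in_meeting:
        return "red"      # meeting running: show "extend" / "end now"
    if minutes_to_next is not None and minutes_to_next <= 10:
        return "orange"   # about to start: show "check-in"
    return "green"        # available: show "book new meeting"
```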

04 Methods

4.1 User Research

4.1.1 Field Study

The Bright House representative gave us a tour of all the meeting rooms in Bright House. The public meeting rooms are open for anyone to book, but the main target group is people who work in Mustad Næringspark and NTNU Gjøvik. This means that our physical device design has to cater to many different users. We chose the “Regionsalen” meeting room as our starting point. The Bright House representative mentioned that one of the main issues with the Loop booking system, which Bright House currently uses, was that users did not recognize that they could click on the date to change it.

4.1.2 User Interview

To gather insight into how end-users book a meeting room, we conducted semi-structured interviews with three users. All of them work in Bright House and have used the Bright House website booking system before; one of them had also used a physical device outside of a meeting room. The wording of the interview questions was based on the dos and don’ts list made by Kathy Baxter (Courage and Baxter, 2005, p. 233).

They were generally satisfied with the product and thought it was easy to use. However, there were some problems:

a. Impossible to change bookings multiple times;

b. Difficult to find the catering options: in the current system, users must click on a button labelled “customize” (“tilpass” in Norwegian) beside the submit button. Only one of our three interviewees knew about this;

c. One of them wanted to copy a previous booking to make a new booking.

4.1.3 Persona

An affinity diagram is a tool used to organize ideas and data based on their natural relationships (Affinity diagram, 2020). After the interviews, we used an affinity diagram to structure the information we got from the users. We grouped it into categories that we would later use in the persona, for example the problems users have while making a booking, and their goals.

Fig.11 Affinity Map, for a clear version please click the link: https://whimsical.com/9pa4hA5VLdfG3w56bHCXec

We summarized the information in the affinity diagram and created a persona. A persona is a fictional individual who describes a specific user. It can help team members feel connected to the end-users and focus on the same target during product development (Courage and Baxter, 2005, p.41).

Fig.12 Persona

The persona above describes the main characteristics of our main user group: people who work or rent an office in Bright House. However, people who don’t work in Bright House can also book the meeting rooms. Considering that this group uses the meeting rooms much less than the main users, and given our time limit, we excluded them from both the user interviews and the persona.

4.1.4 Scenarios & User Journey Map

A scenario describes a situation where users conduct a task, and their goals. In our case, we chose two main scenarios to create the user journey maps: making a new booking and changing a booking. In our journey maps, we focused on the points where the actor’s emotion was lowest and identified opportunities there, such as how to make it easier to book catering and equipment, and how to make it possible to change the meeting. We found this a very helpful tool: by making the maps together, we shared the same understanding of the users, their frustrations and their goals. We also discovered some important opportunities to create a better experience for our users.

Fig.13 User Journey Map 1
Fig.14 User Journey Map 2

4.2 Brainstorming

We used brainstorming to generate as many ideas as possible. After two rounds of brainstorming, we used the NUF method (Gray and Macanufo, 2010, p.244–245) to select the most promising ideas for further development. N in NUF stands for new, U for useful and F for feasible. Each of us had three votes per category, and we voted for the ideas we considered best matched these metrics.
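The tally is simple: each (idea, category) vote counts as one point, and the ideas with the highest combined scores go forward. A minimal sketch, with invented vote data for illustration:

```python
from collections import Counter

def nuf_scores(votes):
    """votes: iterable of (idea, category) pairs -> Counter of totals per idea."""
    totals = Counter()
    for idea, _category in votes:  # every N/U/F vote counts equally
        totals[idea] += 1
    return totals

# Hypothetical votes from a two-person NUF round
votes = [
    ("check-in / extend", "new"), ("check-in / extend", "useful"),
    ("check-in / extend", "feasible"),
    ("drag-to-book", "useful"), ("drag-to-book", "feasible"),
    ("calendar sync", "new"),
]
ranked = nuf_scores(votes).most_common()  # best-scoring ideas first
```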

4.3 Competitor Analysis

We chose several well-known digital devices, as well as websites and PC applications, to learn from their strengths. We considered the digital devices our primary competitors, and the booking websites and PC applications secondary competitors. Secondary competitors are those that have fewer features in common and don’t compete directly with our product (Courage and Baxter, 2005, p.33).

Since we are designing a digital device booking system for Bright House, it was necessary to look at their existing digital device, Bright Works, and their online booking system. Other applications were found by searching online. Since we didn’t buy these devices, we couldn’t experience their functions fully. However, we looked through the product descriptions and screenshots on their websites and tried to form a perspective on these products. Below is the result of our analysis; we used the same format as in the book (Courage and Baxter, 2005, p.34).

Fig.15 Competitive analysis, for a clear version please click the link: https://docs.google.com/spreadsheets/d/1baCvGJylU7xH9xcIPHBxhK5OfHmxVScBvTkq30vag58/edit?usp=sharing

We noticed that many of these devices support checking in, extending and ending meetings. These functions seemed very useful to us: if no one shows up, the meeting room should become available again, and based on our own experience, meetings often need either more or less time. Therefore, we decided to add these functions to our own solution. We also noticed that many of them use white text on a black background for higher contrast, and different colours to indicate the different statuses of the meeting room. Another notable feature they have in common is integration with popular calendar services like MS Exchange, Office 365 or G Suite. However, we decided not to pursue such a feature for our product, as it would be resource-demanding to integrate all of these, and it would be hard to choose just one given the variety of our user group.

4.4 Parallel Hierarchical Task Analysis

As our group consists of two members, we decided to do the hierarchical task analysis separately so that we would not influence each other. Three main tasks were defined: book a room, change the booking, and attend/end a meeting. After the individual task analyses, we conducted a parallel task analysis, which means we compared each other’s task analysis and took the best parts of both to make a final version. The result can be seen in the figure below. The dotted lines signify optional tasks.

Fig.16 Parallel hierarchical task analysis, for a clear version and the individual task analysis please check this link: https://whimsical.com/DNdU4mj3Bsw1Nj7a9eVyRk

4.5 Prototyping

4.5.1 Wireframe

We decided to draw the wireframe on paper and in Figma first because it is much easier and faster to visualize ideas with a wireframe. When drawing a wireframe, we don’t need to spend much time on images, colour or icons, so we are more comfortable sharing honest feedback with each other. It’s also easier to make big changes to a wireframe than to a polished prototype. However, since a wireframe doesn’t contain many details, it requires more explanation from the drawer and more imagination from the reader (Babich, 2018).

Fig.17 Paper Wireframe

4.5.2 Low-fidelity Prototype

After agreeing on the layout for the design, a low-fidelity prototype was developed in Figma. We limited the use of colour because we wanted our initial testing to focus on functions more than aesthetics.

Fig.18 Low-fidelity Prototype

4.6 Usability Inspection

The low-fidelity prototype was evaluated using two usability inspection methods, namely heuristic evaluation and cognitive walkthrough.

4.6.1 Heuristic Evaluation

Ideally, 3–5 experts should conduct a heuristic evaluation (Nielsen and Molich, 1990). However, due to our limited resources, we only found one interaction designer to do the evaluation. The evaluator was provided with a link to our prototype and given a worksheet with the ten heuristics.

4.6.2 Cognitive Walkthrough

Two fellow interaction design students were invited to conduct a cognitive walkthrough. Due to the coronavirus situation, we chose to do it online. The prototype was shown via screen sharing, and the two students were asked four questions on the relevant screens in the prototype.

The tasks given to the students were:
a. Book a room, add members to the meeting list and book catering;
b. Change the date of the booking;
c. Check in to a meeting;
d. Extend a meeting by 30 minutes.

The four questions asked were:
a. Is this what you expected to see?
b. Are you making progress toward your goal?
c. What would your next action be?
d. What do you expect to see next?

4.6.3 Feedback

We received both positive and negative feedback from these inspections. For example, we discovered that the circle on the home screen confused users when the remaining time was more than one hour.

The walkthrough participants also mentioned that it could be confusing, or take extra attention, to figure out which day was today, so we added a green marker to display this information right away. We also put the add-meeting-member fields behind an extra click, as flooding the booking page with all of the input boxes at once was deemed overwhelming. There was also some confusion around choosing the date or time and adding meeting members when booking a new meeting. A checklist was created to summarise the findings and give design suggestions.

Fig.19 Checklist from usability inspection

4.7 Usability Testing

4.7.1 Usability Testing 1 and results

After changing our prototype based on the results from the cognitive walkthrough and heuristic evaluation, we conducted face-to-face usability testing with two NTNU students, who were end-users but not part of the main user group. As shown in the figure below, the prototype was pre-installed on an iPad to simulate the digital device. The iPad was placed at eye height.

Fig. 20 One team member is clicking the prototype on iPad

During the test, they were asked to perform 3–4 short tasks, followed by a short interview. The tasks were as follows:

Task 1: book a new meeting, add meeting members and catering

Task 2: change the meeting “annual board meeting” from the 21st to the 28th

Task 3: check in to the meeting and then extend it by 30 minutes

During the tasks, participants were asked to think aloud while one team member observed their behaviour, noting the time to finish each task, the number of errors, the completion rate, and page views or clicks (how many pages they visited compared to the ideal path) (Courage and Baxter, 2005, p.442). The team member gave as little help as possible, intervening only when the task description was unclear or the participant couldn’t finish without further guidance.
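The quantitative measures we noted can be expressed as two simple ratios. This is a sketch of the bookkeeping, not the formal metrics from the literature, and the numbers in the comments are illustrative:

```python
def completion_rate(completed: int, attempted: int) -> float:
    """Share of participants who finished the task, e.g. 1 of 2 -> 0.5."""
    return completed / attempted

def click_efficiency(ideal_path: int, pages_visited: int) -> float:
    """Ideal click path length divided by pages actually visited.

    1.0 means the participant followed the ideal path exactly;
    lower values mean more detours.
    """
    return ideal_path / pages_visited
```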

During task 1, both participants tried to register first instead of clicking on the “book new meeting” button. After being told they already had an account and could log in right away, they were then confused by the ID card login option and tried to switch to logging in with email and password, since they didn’t have an ID card. Also, due to the limitations of Figma, users need to click on the ID card image to log in instead of actually scanning a card.

Fig.21 log in with ID card

In task 2, the first participant made an error: he clicked on the edit button instead of first selecting a meeting with the radio button. He couldn’t finish the task without extra guidance. The second participant tried to click on the date to change it but found it was unclickable. It was supposed to be clickable, but we had not implemented this in our prototype.

Fig.22 My meeting List

In task 3, the first participant had no problem checking in to a meeting but was stuck in the next step, extending the meeting by 30 minutes. He wanted to go to the meeting list to change the meeting time instead of clicking on the “extend” button. After testing, we discussed this in the team and decided that there should be a time limit on when users are still allowed to change their meetings. We also thought task 3 should be divided into two tasks, because check-in and extend are two different scenarios and require different mindsets: users usually check in 5–10 minutes before the meeting starts, but extend the meeting when they are already in it and discover they need more time. In the test with participant 2, we separated the check-in and extend tasks, and she finished both without any problems.

Participant 1 gave us feedback such as: the colour contrast was too strong, the text and buttons were too small, and adding all the people to the meeting seemed very tedious. Participant 2 was a bit confused about whom to add to the meeting, since we didn’t clarify this in the task. She also didn’t know whether the organizer, who booked the meeting, was included among the meeting members. In task 2, she was a bit surprised and confused to see so many bookings in her booking list, since she had only made one booking in task 1.

We made a checklist to summarise their feedback and suggestions for changes to the prototype.

Fig.23 Checklist from usability testing 1

4.7.2 Usability Testing 2 and Results

Two people who work in Bright House tested our prototype. The test was conducted online because of the coronavirus. We asked them to share their screens during the tasks. The tasks were the same as in the previous test, except for the first one: this time they needed to change the booking time from 14:00–15:00 to 14:00–16:00, because we believed this was an important, high-frequency task.

The two participants gave us positive feedback. They mentioned that the system looked simple and easy to understand. The different colours for different buttons, for example green for booking a new meeting and red for ending a meeting, made the buttons’ meanings easy to understand. The big wheel clearly showed how much time was left before the next meeting. One mentioned that scanning the ID card to log in was a very good idea, since everyone working in Bright House has a card to get in and out of its doors.

In the first task, both of them added catering before adding meeting members. We were a bit surprised, since the add-meeting-member button was above the add-catering button, and catering should be booked based on the number of attendees. Both of them had problems with the second task, changing their previous booking. They wanted to click on the date to change it right away, since the task was only to change the date, but it was unclickable in our prototype. Participant 1 struggled a bit to figure out which button to click: the “change” and “delete” buttons were a bit far away, so it took some time for her to notice them (see Fig.24). Participant 2 couldn’t finish the task until we guided her step by step.

Fig.24 My meeting list

Based on the feedback from this testing, we realised that the biggest problem was the change-meeting flow, and decided to revert it to the previous version of the design.

05 Discussion

5.1 Reflect on methods

· Reflect on testing

Firstly, we tested our prototype with some fellow students and an interaction designer; this was cost-effective, as many design flaws could be discovered without bothering the end-users. Then we improved our prototype and tested it face to face with two friends. During testing, we kept in mind that these users were not our main target group, which meant they might use the product differently from our main users. For example, our friends were confused by the login function, while the main user group was quite familiar with the ID cards. Finally, after improving our prototype again, we tested it with our main target group (people who work in Bright House). The feedback was overwhelmingly positive; one of them even encouraged us to pursue this further and implement it in Bright House! Some minor problems with the change booking function were discovered in this last session and have been fixed; however, we did not run another round of testing to verify the fixes.

· How much should we listen to the users?

This is a very difficult question. In our project, we listened to the users a lot and acted on their input without reflecting enough on their suggestions. One typical example was the change meeting function: in the first usability test, one participant complained that the radio button was confusing. We agreed and decided to change it, even though the other participant had no problem with it. The new change meeting function then performed worse in the second usability test than the previous design. We panicked when we heard something negative from the users and overreacted. We were susceptible to our prior biases, and when the users confirmed those biases in testing, we made rapid decisions (Grozny, 2018). Don Norman talks about listening to users too much and brings up some good points (Norman, 2005, p. 17). He mentions that designers should be authoritative, examine the users’ suggestions, and compare them with the product requirements. “The best way to satisfy the users is to sometimes ignore them”, as he said. It is important for designers to find the balance between listening to the users and ignoring them.

· How much guidance should we give in the testing and how clearly should we define the task?

In the cognitive walkthrough and the first usability test, our tasks were described very vaguely. For example, we just asked participants to book a new meeting, add meeting members and catering. During the test, they asked for more details, such as the date and time of the meeting, and how many people and whom they should add. If a task is too vague, participants will ask for more information and want to confirm that they are on the right path (McCloskey, 2014). After discussion in the team, we agreed that we should give clearer task descriptions in the second test. There should be a balance between too much and too little information; too much information increases participants’ mental workload. For example, one team member suggested that we tell them to add members from the CONTACT LIST, while the other believed users should decide for themselves how to add members. According to McCloskey, a task description shouldn’t describe each step in detail, and it shouldn’t include terms used in the interface, which would bias the users’ behaviour and give us less useful results (McCloskey, 2014). Therefore, we avoided mentioning the contact list in our task description.

· How to keep design consistency in the team

To keep consistency within the design team, a set of guidelines or templates for the system should be created, and the team members should stick to it. Because of the limited resources for this product, and because our meeting room system is not very big, we created some simple guidelines, such as text sizes and colours. We believe a well-defined design system would be more cost-effective for a more complex project.

5.2 Limitation and Future Work

· User Research

Due to the time limit, we didn’t have much time to interview more users or to hold follow-up interviews with them. It would have been beneficial to do a second interview and ask more detailed questions about how they attend meetings and how often they book catering. We also had to conduct the interviews online due to the coronavirus. It would have been better to interview them face to face and ask them to show us how they use the Bright House booking system, which would have improved our understanding of their behaviour and needs. Also due to the time limit, we didn’t include more types of users in our research, such as people who don’t work in Bright House. Although we put ourselves in their shoes during design and tested the prototype with them, we would have understood their needs better if we could have included them at an earlier stage. In the future, we would like to recruit different kinds of users and have a second round of meetings with them for a better understanding.

· Usability Inspection and Usability Testing

Again due to the time limit, we could not find enough experts and users for our usability inspection and usability testing. We had one participant in the heuristic evaluation, who currently works as an interaction designer in Norway, and two fellow students in our cognitive walkthrough. The students were not familiar with these methods and not qualified as experts; therefore, they might not have found the most important problems in the system, or might even have given wrong suggestions. According to NNGroup, one evaluator finds only about 35% of the usability problems in an interface. Since different evaluators tend to focus on different aspects of the system and find different problems, more usability problems are found as the number of evaluators increases: with 5 evaluators, around 75% of the problems are discovered. Therefore, 5 evaluators are highly recommended and have a very good payoff, with 3 as the minimum threshold (Nielsen, 1994). Ideally, we would have performed the usability inspection with more people; 3–5 members is what the literature recommends for these two methods. We would also have liked to conduct these inspections with real experts in the field rather than students, but only students were available for this project.

We did two rounds of usability testing with 2 participants per round. In Nielsen and Landauer’s paper, they found that for a small project like ours, the optimal number of test users is 7, and that you get a better return on your investment by conducting multiple rounds of testing with five participants per round (Nielsen and Landauer, 1993). The limited size of our sample means the participants’ comments may not generalize to other users. Another challenge was that, because of the coronavirus situation, we had to perform most of our user testing online, so we could not read the users’ body language. However, the users shared their screens while using our prototype, so we were able to see how they interacted with it. If possible, we would like to test our prototype with more users over multiple rounds, with at least 3 users per round. We could then compare the time and errors between rounds and see whether our design improved their experience. Ideally, the tests would be face to face with an iPad hanging on the wall, because some interactions differ on an iPad compared to a PC.
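The evaluator numbers above follow a simple model from Nielsen and Landauer: if a single evaluator finds a given problem with probability L, then i independent evaluators are expected to find a share of 1 − (1 − L)^i of the problems. A minimal sketch; the default value of L here is an illustrative assumption, since the real value depends on the interface and the evaluators:

```python
def share_found(i: int, single_rate: float = 0.31) -> float:
    """Expected share of usability problems found by i evaluators.

    single_rate (L) is the probability that one evaluator finds a
    given problem; 0.31 is an assumed average, not measured data.
    """
    return 1 - (1 - single_rate) ** i
```

The curve rises steeply at first and flattens out, which is why adding evaluators beyond about 5 gives diminishing returns.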

· Prototype

Due to the time limit, we couldn’t make all the functions work in our prototype; for example, users can’t book a meeting without logging in, and they can’t add catering before adding a meeting member. Also, due to the limited functionality of Figma, we couldn’t create some of the interactions we expect in the launched product, such as dragging on a time slot to create a meeting or logging in with an ID card. Because of these limitations, we couldn’t give our users the full experience of our product. During the tests, they often clicked on something we hadn’t planned for and couldn’t finish their tasks until they went back and followed our predefined steps. These shortcomings also limited the usability problems we could find. In the future, we would improve the prototype and make more elements clickable so that users can perform the tasks in their preferred ways.

Reference

Affinity Diagram. (2020) Wikipedia. Available at: https://en.wikipedia.org/wiki/Affinity_diagram (Accessed: 20 May 2020).

Babich, N. (2018) The Magic Of Paper Prototyping. Medium. Available at: https://uxplanet.org/the-magic-of-paper-prototyping-51693eac6bc3 (Accessed 20 May 2020).

Courage, C. and Baxter, K., 2005. Understanding your users: A practical guide to user requirements methods, tools, and techniques. Gulf Professional Publishing.

Gray, D., Brown, S. & Macanufo, J., 2010. Gamestorming : a playbook for innovators, rulebreakers, and changemakers, Bejing: O’Reilly.

Grozny, Maxim, (2018) To listen or not to listen to your users? Available at: https://uxdesign.cc/to-listen-or-not-to-listen-users-d67108facff9 (accessed 22 May 2020).

Nielsen, J. (1994) Heuristic Evaluation: How-To: Article By Jakob Nielsen. Nielsen Norman Group. Available at: https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/ (Accessed 22 May 2020).

Nielsen, J. & Landauer, T., 1993. A mathematical model of the finding of usability problems. Proceedings of the INTERACT ’93 and CHI ’93 Conference on human factors in computing systems, pp.206–213.

Norman, D.A., 2005. Human-centered design considered harmful. interactions, 12(4), pp.14–19.

Nielsen, J. and Molich, R., 1990. Heuristic evaluation of user interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp.249–256.

McCloskey, M. (2014) Task Scenarios For Usability Testing. Nielsen Norman Group. Available at: https://www.nngroup.com/articles/task-scenarios-usability-testing/ (Accessed 22 May 2020).


Currently, I am doing research on Human-Computer Interaction at Sintef. I am a fan of clean, elegant designs with attention to detail and values.