Voting Recommendation System

Eunsun Jeong, Isabel Whittaker-Walker, Jillian Howarth, Samantha Levy, and Susan Soe


Our group was asked to create a voting recommendation system that would give users suggestions on how to vote on a particular ballot issue. We chose to create a system for Massachusetts Ballot Question 1, which focuses on patient limits for nurses. Rather than educate the user or provide arguments for and against the ballot question, the system is meant to be quick and accessible, telling users definitively whether they should vote yes or no.

Ballot Question 1

This ballot question, at a high level, limits the number of patients each nurse can be responsible for at any given time. Voting yes supports universal patient assignment limits for nurses working in hospitals in Massachusetts. Voting no opposes the initiative and allows the existing regulations to remain in place, which largely leaves determining patient limits up to the individual hospitals.

The Proposed System

Massachusetts Ballot Question 1 Voter Recommendation System

We made the prototype with Qualtrics primarily because of its built-in survey logic. Ideally, the system would have its own platform that could reach many individuals. Such a platform would be more aesthetically appealing and would also have a results page whose visuals adapt more to the user's results.

System Prototype Link:


In order to understand the main factors that make up the decision making process in Ballot Question 1, we read through numerous articles about nurses’ views on the question and what it entails. When we had an understanding of the important components and implications of the Nurse-Patient Assignment Limits Initiative, we began listing out factors that we felt would impact how people voted on the ballot question. From here, we began mapping out and iterating on the survey questions. We ended up setting up a survey with a point-scoring system so we could weight various factors.

After creating the initial survey, we conducted interviews with users as they took the survey and asked them for feedback. During the interviews, we asked users what they liked and disliked about the survey, what improvements could be made, and how helpful the voting recommendation system was, as well as their initial voting decision, so that we could compare the system's recommendation against that decision.

Survey & Recommendation Algorithm

Our survey is ten questions total, split into two parts. Because the system is meant for users who either may not know very much about the ballot question or may not want to learn very much about the ballot question, but still want to know how to vote, we tried to keep the survey as simple as possible.

Our questions are straightforward and have discrete yes/no answers. We intentionally gave each question only two possible answers, and most of the questions live on their own pages within the survey. We made these choices to account for the Hick-Hyman Law, since we knew users would already be expending cognitive effort as they considered each factor. Another thing we heavily considered was Signal Detection Theory. While we hoped our system would be accurate the vast majority of the time, we also understood that errors would be made and had to balance incorrectly suggesting no against incorrectly suggesting yes. We used the second part of the survey, which scores the responses in an attempt to better reflect our users’ priorities, to mitigate this risk.

The first part of the recommendation system comprises four questions. This portion of the survey serves as an elimination system, as seen in the diagram. If the survey taker believes that a) nurses perform less work than is appropriate, b) there should not be a decrease in nurses’ workload, c) there should be no government regulation of hospitals and healthcare, or d) the government should not restrict the maximum workload of nurses, then the survey ends there and suggests that the person vote “no”.

Logic Flow for Part 1
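The branching above can be expressed as a short function. This is our own illustrative sketch: the dictionary keys are hypothetical shorthand for the four questions, and the actual prototype implements this branching with Qualtrics survey logic rather than code.

```python
# Sketch of the Part 1 elimination logic. The dictionary keys are
# hypothetical shorthand; the real prototype uses Qualtrics branching.

def part1_recommendation(answers):
    """answers maps each question key to True ("yes") or False ("no").

    Returns "no" if any response eliminates a "yes" recommendation,
    or None if the survey should continue to Part 2.
    """
    eliminating = [
        # a) nurses perform less work than is appropriate
        answers["nurses_do_less_work_than_appropriate"],
        # b) there should NOT be a decrease in nurses' workload
        not answers["nurse_workload_should_decrease"],
        # c) there should be NO government regulation of hospitals
        not answers["government_should_regulate_hospitals"],
        # d) government should NOT restrict nurses' maximum workload
        not answers["government_should_cap_nurse_workload"],
    ]
    return "no" if any(eliminating) else None
```

Any single eliminating answer ends the survey with a “no” recommendation, matching the early exit that some interviewees commented on.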

The remaining six questions comprise the second part of the survey and are scored. The user is asked if they believe that (if Ballot Question 1 passes):

  • nurses’ average workload will be decreased
  • quality of individual patient care will be improved
  • the same restrictions can be applied to all hospitals
  • Massachusetts hospitals will be able to hire enough nurses
  • hospitals will fund this bill themselves
  • nurses’ working conditions should be prioritized over the functionality of hospitals

The user receives a point for each “yes” answer in the second part. These points are then totaled, and the score determines how the user should vote: a score of 0–3 yields a “no” recommendation, while a score of 4–6 yields a “yes” recommendation. The scoring is shown to users at the end of the survey along with brief explanations, to provide clarification and context for why the system is recommending a particular vote.
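As a sketch, the Part 2 tally reduces to counting “yes” answers against a threshold. The function name and the four-point cutoff are our own reading of the scoring described above, not code from the actual Qualtrics prototype:

```python
# Sketch of the Part 2 scoring: one point per "yes", with four or
# more points (out of six) producing a "yes" recommendation.

def part2_recommendation(yes_answers):
    """yes_answers: six booleans, one per Part 2 question, in order."""
    score = sum(yes_answers)          # one point for each "yes"
    recommendation = "yes" if score >= 4 else "no"
    return recommendation, score
```

For example, `part2_recommendation([True, True, True, True, False, False])` returns `("yes", 4)`.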

Results Screens

The result display has four components: 1) how the user should vote and the strength of their preference as a percentage, 2) the questions the user answered and each answer’s impact on voting yes or no, 3) a description of the system’s reasoning, and 4) a picture of a nurse. We chose to display the results as an infographic to make them more intuitive and easier to read. While designing the results, one major issue we encountered was how broad the audience was. Our system should be usable by everyone, regardless of educational background, social class, or investment in politics and the issue. By keeping the interface simple and visualization-based (using colors, images, and graphics rather than only words), our display is easy to understand and accessible to users regardless of how much they care about the system’s reasoning.

The first component shows the system’s voting recommendation for this ballot question (yes or no). It also gives the user an idea of how extreme their answers were with a radial bar chart. The second component displays what the user answered for each question and how each answer impacted the recommendation algorithm. In this way, the user can better understand and trust the voting recommendation system, because they can go through each question and answer. The third component shows a written description of why the user should vote the way the system recommends. Even though the user can see the result clearly from the first and second components, they may want more explanation and clarification, especially given how complex this ballot question is. The last component is the image of a nurse, which we believe immediately signals and reinforces which of the three ballot questions this one is.

User Interviews

After reviewing our proposed system with five non-college-aged voters, we obtained some helpful insights on what we have developed thus far.

Specifically, interviewees reported that they liked how the system:

  • Addressed many different aspects to consider when voting on Ballot Question 1
  • Introduced and highlighted (if user did not already know) important topics to consider when voting on Ballot Question 1
  • Guided the user to a recommendation based on user input
  • Reinforced user’s initial opinion on how to vote
  • Provided both textual reasoning and visualizations on final screen to present recommendation to user

Alternatively, interviewees reported they did not like that the system:

  • Quickly directed the user to the end of the survey because of a specific response in the first few questions (However, the interviewee noted his skepticism was alleviated after reading the reasoning provided with the recommendation at the end of the survey.)
  • Included questions that required more context (especially for uninformed voter)
  • Used unfamiliar terminology

Interviewees unanimously felt the system could be improved in the future by featuring more educational material on relevant topics (e.g., Massachusetts nurses’ experiences, the mandate proposal) alongside each question to provide more context. Many interviewees reported that doing so would help ensure the voter is as informed as possible. Additionally, one interviewee thought the survey could be improved by adding more answer choices, such as a neutral option.

Overall, all five voters agreed the survey would help voters decide on Question 1 on the Massachusetts 2018 ballot. A few of the voters reported it would help by introducing and emphasizing topics to consider when deciding how to vote. One voter said it reduced confusion about how to decide. Another voter felt that the minimal steps and immediate feedback made it easier to understand their own preference on the ballot question.

Of the five voters interviewed, three are voting no whereas two are voting yes.

Future Directions

We put quite a bit of research into this project and felt, based on our user interviews, that our system was fairly accurate in its recommendations. That being said, five users is not a particularly large sample size, and the system would likely make errors as more people used it. Machine learning could substantially mitigate this risk: we could ask users how they actually voted and combine that feedback with their survey data to build datasets that a learning algorithm could use to tune our weighting and make the system more accurate.

In conjunction with machine learning, using big data would both reduce the amount of input needed from users and likely would make the system more accurate. A user’s social media posts, for example, could be analyzed for language and content that might inform the algorithm. Similarly, a user’s friend list and social media feed could tell the system about who they interact with, whether they know a lot of people involved in the healthcare system, and many more factors that all influence how the user thinks and feels.

Lastly, our group thought it would be interesting if our system took the form of a chatbot. A mobile chatbot would be more accessible to users who are actively at polling locations or on their way and trying to decide how to vote. It could also be combined with machine learning to analyze not just what the user says, but also how they’re saying it, to understand nuances in how users see this issue (much like IBM Watson does).