UCD Charrette Process Blog — Entry 4, Ethics Interlude 1: Beneficence
October 22, 2016
Part 1: Critical Understanding
1. Misunderstanding
The principle of beneficence might be misunderstood by someone who has not read the original text of the Belmont Report as an idea that rules out any form of harm. However, the report states that although the general rules of beneficence are "(1) do not harm and (2) maximize possible benefits and minimize possible harms," avoiding harm requires first learning what is harmful. One way to clear up this misunderstanding is to explain that learning what is harmful can itself expose people to a risk of harm.
2. Not Easy
Given the ethical principle of beneficence, applying it may be "not easy" because of the dilemmas that can arise. For example, eye tracking may carry more than minimal risk: the equipment might damage a user's eyes over time, and that possibility would never be discovered without human testers. Yet without human testers, the future improvements and benefits the research could bring would not be possible. The same study can therefore be both a discovery and a serious mishap, especially when the researchers did not anticipate the harm. The difficulty is that the research can involve more harm than anyone bargained for, while at the same time the harm it uncovers can help researchers make products safer for others.
Part 2: Application
1. Research Example
The principle of beneficence can be applied to my usability testing sprint from week three. The risks in the test include burning the user with hot water or coffee, and the possibility that the coffee maker breaks down and physically harms the user. The benefit for the user is successfully brewing coffee, which is satisfying both because the task is completed and because there is freshly made coffee at the end. Whether or not the user succeeds, the test is also beneficial because it lets researchers (in this case, my research group and me) make valuable observations that inform future usability testing and product development. What I believe is central to deciding whether the risks outweigh the benefits is the damage that could be inflicted on the user. If the worst harm a user faces is a minor burn or injury, I would say the risks do not outweigh the benefits. However, if the user could suffer a chronic injury, as in the eye tracking example, or another major injury, I believe the risks do outweigh the benefits. In my usability test, if users risked a chronic injury or a second- or third-degree burn, I would explain those risks to them first and proceed only if they still accepted.
2. Design Example
The Belmont Report's beneficence principle can also be applied to my interaction sprint from week two. The risks in my citizen science app include someone hacking into other users' accounts, users meeting strangers who may not be safe, and personal information being stolen through a breach of an account or of the system itself. The benefits include helping researchers gather data for their projects, giving users opportunities to make new friends and spend more time with existing ones, and helping users get involved in research. When deciding whether the risks outweigh the benefits, I would focus on the user's safety while using the app. The risk of having personal information taken by hackers is a significant threat, but it can be reduced by letting users sign in with their Google or Facebook accounts, which already track who logged into the account, where, and at what time. If the user instead creates an account within the app, the risk of a break-in can also be reduced by prompting the user to create a password with certain characteristics for higher security, such as a longer length and a mix of capital letters and numbers. The risk of meeting unsafe strangers, however, must rest with the user: the user decides whom to meet and when, and the app cannot determine which people are "safe" or "dangerous." Users must therefore be cautious when adding friends to collaborate on projects.
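The password rules described above could be sketched as a simple sign-up check. This is only an illustrative Python sketch; the function name and the exact thresholds (minimum length of 10, requiring uppercase, lowercase, and digits) are my own assumptions, not requirements from the app design.

```python
import re

def is_strong_password(password: str, min_length: int = 10) -> bool:
    """Check a password against illustrative strength rules:
    long enough, and a mix of capital letters, lowercase letters, and numbers.
    These rules are assumptions for the sketch, not a security standard."""
    if len(password) < min_length:
        return False
    has_upper = re.search(r"[A-Z]", password) is not None
    has_lower = re.search(r"[a-z]", password) is not None
    has_digit = re.search(r"[0-9]", password) is not None
    return has_upper and has_lower and has_digit

# The sign-up flow could re-prompt the user until the rules are met:
print(is_strong_password("coffee"))            # too short, no capitals or numbers
print(is_strong_password("CitizenScience42"))  # long enough, mixed case, has numbers
```

A real app would likely pair a check like this with the tracked sign-in option mentioned above rather than rely on password rules alone.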