Andrew Dresser
Computers and Society @ Bucknell
8 min read · Apr 27, 2020


Discovery Machine Healthcare and Military Simulations

By: Megan Koczur and Andrew Dresser

Discovery Machine Inc. is a company based in Williamsport, PA that builds custom training simulations, ranging from healthcare scenarios to military ones. In every simulation the company creates, different demographics must be considered and proportionally represented in order to ensure accuracy. This post focuses on two kinds of simulations: healthcare and military. The goal of our project is to explain these potential issues and why they matter, and then offer some ideas for mitigating them.

The owner, the employees, and those who use the simulations are the most important stakeholders involved. The owner and employees of Discovery Machine could face legal trouble over incorrect simulations if they fail to meet the ACM Code of Ethics; they need to make sure all information they use is credible and accurate. Those who buy the product most likely sign a contract stating that they are responsible for any issues that arise from using the simulation. Buyers should also double-check anything learned in a simulation before applying it to a real person for the first time. Discovery Machine has the public’s safety at heart, but errors can always occur by accident.

Discovery Machine Inc. creates scenarios for assessing patients, using medical tools, and interacting with patients. These allow healthcare workers to practice important everyday tasks without doing so on real patients: they can sharpen their skills or quickly review a task they are unsure how to handle. The simulations have the potential to be extremely helpful, but they could also cause several problems if not created correctly.

Simulations for healthcare need to be exceptionally accurate, as an incorrect simulation can lead a staff member to give a patient a false diagnosis. If workers use these simulations to learn new skills or as a refresher, the educational content must be accurate or it will teach the wrong thing. That could then lead staff to perform a task incorrectly, harming not only the patient but others as well. Lawsuits and job losses would likely follow if malpractice were involved. Several legal risks therefore arise with healthcare simulations, and they could potentially be tied back to Discovery Machine Inc. Section 1.4 of the ACM Code of Ethics states to “be fair and take action not to discriminate.” By not considering all demographics, this principle would be directly violated.

One thing the simulations need to do is take specific illnesses into account in order to ensure accuracy. Certain illnesses are more common in specific genders, races, and ages. For example, roughly 40–50% of African American men and women develop some form of heart disease. A simulation may incorrectly diagnose a patient with heart disease if it lacks the specific questions that would identify other, less common diagnoses. Likewise, Duchenne Muscular Dystrophy is seen almost exclusively in boys and is generally not diagnosed until around the age of 4, so it will be difficult for a simulation to pin down the cause of early clumsiness, weakness, and loss of function.
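One way a scenario generator could respect these demographic differences is to weight candidate conditions by group-specific prevalence rather than treating every illness as equally likely. The sketch below is a minimal illustration of that idea; the prevalence figures, group labels, and function names are hypothetical placeholders, not Discovery Machine’s actual data or code.

```python
import random

# Hypothetical prevalence weights per demographic group. A real simulation
# would load these from vetted epidemiological data, not hard-coded guesses.
PREVALENCE = {
    ("male", "child"): {
        "Duchenne muscular dystrophy": 0.02,  # rare, but must still appear
        "asthma": 0.50,
        "influenza": 0.48,
    },
    ("female", "adult"): {
        "heart disease": 0.40,
        "diabetes": 0.30,
        "influenza": 0.30,
    },
}

def sample_condition(sex, age_group, rng):
    """Pick a condition for a simulated patient, weighted by prevalence."""
    weights = PREVALENCE[(sex, age_group)]
    conditions = list(weights)
    return rng.choices(conditions, weights=[weights[c] for c in conditions], k=1)[0]

# Over many generated scenarios, rare conditions like DMD still surface
# occasionally, in rough proportion to their weight.
rng = random.Random(0)
counts = {}
for _ in range(1000):
    c = sample_condition("male", "child", rng)
    counts[c] = counts.get(c, 0) + 1
```

The point of the weighting is that trainees still occasionally encounter the rare diagnosis instead of only the common ones, which is exactly the coverage problem described above.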

As mentioned earlier, Discovery Machine Inc. also creates military simulations. When creating these, it is very important that genders are represented proportionally, which matters from a tactics standpoint. Discovery Machine’s simulations cover troop tactics and negotiation skills, and men and women tend to excel in different ways as a result of different ways of thinking. This is not an analysis we are able to do ourselves: the ACM Code of Ethics says to “perform work only in areas of competence,” and physiological differences are not an area of competence for us, so it would be unfair for us to attempt that analysis. We do, however, have recommendations for how to approach the issue.

These military simulations need to draw on accumulated past data to be as accurate as possible. A representative simulation matters here because the gender distribution in the military is not even: it is roughly 17% female and 83% male. Since discrepancies in gender strengths could lead to an inaccurate simulation, those proportions need to be represented faithfully so that the simulation operates much like a real military would.
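Building a simulated roster that matches those proportions is straightforward to express in code. The sketch below assigns genders to simulated personnel using the roughly 17% figure cited above; the function name and structure are our own illustration, not an actual Discovery Machine interface.

```python
import random

def build_roster(size, female_share=0.17, seed=None):
    """Assign a gender to each simulated soldier in proportion to
    real-world data (female_share defaults to the ~17% figure cited above)."""
    rng = random.Random(seed)
    return [
        "female" if rng.random() < female_share else "male"
        for _ in range(size)
    ]

roster = build_roster(10_000, seed=42)
share = roster.count("female") / len(roster)
# For a roster this large, share should land close to 0.17.
```

The same weighted-sampling approach extends naturally to race and ethnicity, or to joint race-by-gender proportions, by swapping in a distribution drawn from published military demographic data.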

The simulations make it possible for those in the military to practice without being in a real war. Without accurate data, those using the simulations may not be as prepared for war as they should be, because they have been practicing on incorrect information. It is also impossible for the simulations to account for every scenario that might take place in the military: people may feel confident in the events they have seen, but in a real war they might encounter something new and freeze.

With regard to healthcare, there are illnesses that are most common in, or only seen in, specific races and/or genders. These illnesses need to be accounted for in the simulations so healthcare workers know what to look out for when assessing patients. It may seem offensive that a patient of a specific race or gender is more prone to a certain illness, but this has been shown in studies and makes for the most accurate scenario.

As stated before, there tend to be far fewer women in the military than men. For the simulations to be most accurate, certain scenarios will have to include considerably more men than women, and some people may be offended that the genders are not equally represented. Another ethical concern comes with race: studies show that there tend to be more White people in the military (both men and women) than all other races. Some may be offended by the unequal ratio of races, but simulations with accurate percentages of each race will lead to the most realistic practice scenarios.

Figure: Distribution of active-duty enlisted women and men in the U.S. military in 2018, by race and ethnicity.

Those working at Discovery Machine need to guarantee that the data they use for their simulations is, to the best of their knowledge, as accurate as it can be. They are obligated to use correct information to ensure the safety of the public, to remain unbiased in their work, and to build these simulations from the facts; they cannot make changes based on emotion.

The military simulations’ unequal representations of gender and race could be seen as discriminatory by some. To make the simulations most accurate, however, the proportions need to be unequal, based on past data, which improves the preparedness of those using the products. The owner and employees would be responsible for any such discrimination, as they are the ones creating the simulations and deciding who appears in each scenario.

Some may believe they are being discriminated against if told they are more prone to a specific illness based on race or gender, but a significant body of research in these fields has shown this to be the case for several conditions. The owner, the employees, and healthcare workers could all be responsible for the resulting disparate impacts, since Discovery Machine creates the simulations while the medical workers interact with patients and share this information.

For healthcare, the best-case scenario would be a patient coming in with an extremely rare condition that is difficult to diagnose. Doctors often have a hard time identifying certain diseases because they are so uncommon, but ideally the simulation has accumulated information on almost every illness and is able to detect them. One example would be more reliably diagnosing Duchenne Muscular Dystrophy in boys younger than the age of 4.

The worst-case scenario would be an incorrect simulation that leads to a false diagnosis and, as a result, a patient’s death. This could lead to both the medical center and Discovery Machine being sued for the death, and both businesses could potentially be forced to shut down.

For the military, the best-case scenario would be a war in real life that plays out almost the same way as one of the simulations. Those working would be extremely prepared and hopefully, as a result, keep everyone safe and healthy.

The worst-case scenario would be a situation that has never been seen before. Those fighting are unsure of how to proceed and as a result, many people are killed or injured.

In healthcare, the worst-case scenario could hopefully be avoided by having a large number of medical professionals check each scenario to ensure its accuracy, which in turn would lead to few or no false diagnoses caused by Discovery Machine’s work. If a false diagnosis did occur after using Discovery Machine’s simulations, one response would be for the company to shut down its healthcare simulations.

For the military, the risk of the worst-case scenario could be reduced in two ways: by pulling as much information from around the globe as possible when building the simulations, and by teaching those in the military how best to respond to situations they have never seen before so they can protect as many people as possible. People in the military are constantly caught off guard; once that happens, the new situation should be added to the simulations along with better ways of reacting to it.

Use a combination of information from the past for each scenario in order to create the most accurate simulations.

For military simulations, represent both race and gender consistently with real demographic data: rather than equal numbers of men and women, use the actual, unequal proportions for each gender and each race.

For healthcare simulations, run through all of the possibilities for illnesses for each scenario. Do not make assumptions based on looks or actions.

One piece of the ACM Code of Ethics that specifically caught our attention was to “ensure that the public good is the central concern during all professional computing work.” If the practices we have described are not followed, the public good is not the central concern. Ensuring that race, gender, and other demographic distributions are accurate is not only more culturally appropriate, but also beneficial both to the people represented and to the organizations these simulations are sold to.

References

  1. Duffin, E. (2020, April 7). Distribution of race and ethnicity among the U.S. military. Statista.
  2. Ledford, H. (2019, October 24). Millions of Black people affected by racial bias in health-care algorithms. Nature.
  3. Women, regardless: Understanding gender bias in U.S. military integration. (2018, January 9). National Defense University Press.
