The Tech World’s Ethical Crisis

Can we right the ship before we no longer have any choices or control?

Laurence Hart

--

In 1932, a human research study into syphilis began in Tuskegee, Alabama. The goal was to study the progression of syphilis in rural African American men. The study ran for 40 years, during which time the participants thought they were being treated.

They weren’t.

By 1947, penicillin had become the standard treatment for syphilis. None of the participants were given penicillin; there was not even a control group that received treatment. The scientists prevented the participants from receiving penicillin from other sources in order to “protect” their study.

The study ended in 1972, when its existence was leaked and it was shut down. The most significant result of the unethical study was the establishment of Institutional Review Boards (IRBs) and protections for human research subjects. One of the core protections is the concept of informed consent.

I attended Auburn University down in Alabama. When you attend a school one county over from Tuskegee and routinely enjoy the Tuskegee National Forest, this study is discussed in depth when you take an ethics class. While it is shocking that any person would do such a thing to another human being, we learned that part of it was tied to the belief that the subjects were inferior to the scientists. In the scientists’ minds, the spread of syphilis was the subjects’ fault, not the fault of the scientists or the disease.

Fast forward to 2012.

Facebook conducts research on a subset of its membership. This subset, a small percentage of the overall membership, still numbers 700,000 people. That is more than 30 times the current population of Macon County, Alabama, where Tuskegee is located.

The goal: to see whether the Facebook News Feed can be manipulated to influence the emotions of Facebook members. They have some ideas about the impact, but they don’t know what they will find. After running their experiment, they determine that there is a small impact. While not large, they can make someone share happier posts by showing them happy things and, conversely, share more negative posts by showing them sad things.
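
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of sentiment-based feed filtering involved. Everything here is my own illustrative assumption, from the tiny word lists to the function names and omission rate; the published study reportedly classified posts with the LIWC dictionary, not anything this crude.

import random

# Hypothetical word lists standing in for a real sentiment lexicon
# (purely illustrative; the actual study reportedly used LIWC).
POSITIVE = {"happy", "great", "love", "wonderful", "fun"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "lonely"}

def sentiment(post: str) -> int:
    """Score a post: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def filtered_feed(posts: list[str], condition: str, omit_rate: float = 0.5) -> list[str]:
    """Randomly omit emotionally charged posts for one experimental arm.

    condition: "reduce_positive" or "reduce_negative"; anything else
    is the control arm, which sees the feed unchanged.
    """
    result = []
    for post in posts:
        score = sentiment(post)
        if condition == "reduce_positive" and score > 0 and random.random() < omit_rate:
            continue  # suppress a happy post
        if condition == "reduce_negative" and score < 0 and random.random() < omit_rate:
            continue  # suppress a sad post
        result.append(post)
    return result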

Weeks later, they update the Terms of Service to include research. At this time, we do not know what conversations triggered the change. I am not betting on coincidence.

Which brings us forward to today.

Over the weekend, the results of the Facebook study were published, causing an outcry. Many in the tech world shrugged it off as nothing more than the standard A/B testing that happens on websites all the time. Academics were more outraged because of the ethics rules brought about after the Tuskegee study.

The academics are right. This was completely unethical on the part of Facebook. The participation of academic researchers makes it even more problematic.

The worst part? I can only blame myself.

Why It Was Wrong

There have been many claims and debates over the past few days. People are trying to draw absolute boundaries around what is, and is not, ethical. The technical world won’t like this, but that is impossible. Ethics are not absolute. What is considered right in one decade can be considered inconceivable in another. Just look at the Civil Rights movement and current feminist efforts for women to be treated equally and fairly as lessons in the constantly evolving state of ethics.

One of the first defenses I read was the minimal size of the effect. Moods didn’t change that much. This is immaterial. It was an experiment to change the emotions of the subjects. If it had been known ahead of time how much those emotions would change, there would have been no need to run the experiment.

Even if it had been known that only small changes would transpire, that could still be a problem. How many of the 700,000 participants were depressed? How many of those were nudged downward? With some reports saying that 1 in 10 American adults report being depressed, that could be on the order of 70,000 people.

What are the odds that one of them may have been pushed towards suicidal thoughts?

This is why you have informed consent. Participants have to know that they are participating in a study, know what is being studied, and be given an option not to participate. It does not matter what you put into a Terms of Service: if a participant doesn’t know about the study and cannot choose to opt out of it, then it crosses a line.

Facebook crossed that line.

This is more than just basic ethics; this is codified into law. There are Federal rules for federally funded studies. There is also the California Health and Safety Code, Section 24172, which states:

24172. As used in this chapter, “experimental subject’s bill of rights,” means a list of the rights of a subject in a medical experiment, written in a language in which the subject is fluent. Except as otherwise provided in Section 24175, this list shall include, but not be limited to, the subject’s right to:

(a) Be informed of the nature and purpose of the experiment.

(b) Be given an explanation of the procedures to be followed in the medical experiment, and any drug or device to be utilized.

(c) Be given a description of any attendant discomforts and risks reasonably to be expected from the experiment.

(d) Be given an explanation of any benefits to the subject reasonably to be expected from the experiment, if applicable.

(e) Be given a disclosure of any appropriate alternative procedures, drugs or devices that might be advantageous to the subject, and their relative risks and benefits.

(f) Be informed of the avenues of medical treatment, if any, available to the subject after the experiment if complications should arise.

(g) Be given an opportunity to ask any questions concerning the experiment or the procedures involved.

(h) Be instructed that consent to participate in the medical experiment may be withdrawn at any time and the subject may discontinue participation in the medical experiment without prejudice.

(i) Be given a copy of the signed and dated written consent form as provided for by Section 24173 or 24178.

(j) Be given the opportunity to decide to consent or not to consent to a medical experiment without the intervention of any element of force, fraud, deceit, duress, coercion, or undue influence on the subject’s decision.

No matter how you slice it, Facebook violated the law, not just basic ethical principles.

Of course, how would they have phrased the question for consent?

Hey, we are going to try and see if we can make you more depressed by showing you depressing posts. Is that okay?

I would have said no.

The most prominent defense of the experiment comes from people asking how this is different from standard A/B testing used in website and other digital marketing efforts. There is a difference, but the similarities should give us pause.

The Rise of A/B Testing

For those not familiar with A/B testing, it is quite simple. Let’s say you are running an ad campaign. You are wondering if changing one element in the ad will improve sales. In the days of Mad Men, you would run the new campaign in a couple of cities while running the old campaign in the other cities. After seeing which campaign performed better, you could adjust all future campaigns.

In the world of the Internet, this is much faster and more precise. Email mailing lists can be sliced and diced into different groups, with each group receiving slightly different communications. When you receive an email from an organization, you may be part of an A/B test. The organization may be using you to determine the best way to sell its products or services. They want to know how to manipulate you most effectively.

You never know.

A/B testing is done with websites, emails, phone scripts, or news feeds. Social networks are constantly tinkering with their user interfaces to determine how to keep you on the site longer. Media sites are always looking to see what is likely to increase the odds of you clicking on an ad.
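
As a rough sketch of how this works under the hood, here is one common pattern: deterministically hash each user into a variant so that assignments are stable without storing any state, then compare a metric across the variants. The function names, experiment name, and user IDs below are my own inventions; no particular vendor’s API is implied.

import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the user ID together with the experiment name gives every
    user a stable assignment without storing anything.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Usage: send variant A or B of an email, then compare click-through rates.
sends = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}

for user_id in ["u1001", "u1002", "u1003", "u1004"]:
    variant = assign_variant(user_id, "subject_line_test")
    sends[variant] += 1
    # ... send the email for this variant, record whether the user clicked ...

# After the campaign, whichever variant has the higher clicks/sends
# ratio wins, and future emails use that version.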

If you haven’t been exposed to A/B testing in the last week, then please let me know what books you were reading to keep yourself out of the digital world.

There are two important aspects to A/B testing that should not be lost on anyone. The first is that it is advertising. We all know it is advertising. If your parents were even remotely like mine, they taught you not to trust advertising. Ads want to sell you something, so they will share the good, hide the bad, and try to manipulate you into acting.

Now.

While supplies last.

The second is that ads have a goal. Everything they do is geared towards that goal. Even Facebook sells itself to people. They know that members are their greatest asset and that the more members they have, the more they have to offer advertisers. Facebook may not be collecting money from its members, but Facebook, like Google and Twitter, is trying to get you to buy into it.

Psychology plays a large part in advertising. Knowing how people think and behave, so that they can be manipulated into clicking a particular link, is a science. A/B testing can cross emotional lines. It can do a lot of things, but Facebook did more than any advertiser ever could.

Facebook provided participants with an emotionally charged view of their friends and the world.

This wasn’t just filtering friends based on interests or past interactions. During the experiment, your friends all appeared to be having either good times or bad times. It presented a defined world view, using your own friends as the proof.

Facebook did it to see how much it could impact its members, with no regard for the effects. There was no goal other than to see if it could.

Reevaluating Ethics in Technology

The worst part of this is that I have nobody but myself to blame. You have nobody but yourself to blame. Anyone who sees the ethical issues can only blame themselves.

When I was in school, I belonged to the Association for Computing Machinery (ACM). An international organization, ACM promotes computing technology and seeks to advance the profession. As part of that work, ACM has developed a Code of Ethics. When I became a Professional Member in 1997, I read the Code of Ethics and have since referred back to it. The very first paragraph includes the following:

When designing or implementing systems, computing professionals must attempt to ensure that the products of their efforts will be used in socially responsible ways, will meet social needs, and will avoid harmful effects to health and welfare.

Simple and straightforward. The problem is that I have never shown the Code of Ethics to anyone. I assumed that everyone was taught these ethics. The Facebook ethical breach shows that my assumption was false.

I should have known it was false. It is rare to see ethical discussions at computing events. When I say rare, I mean that only one event I have attended over the last 20 years has made ethics the primary topic of a session. Nor have I ever had a single colleague discuss the ethics of what we can do with computers, except in conversations that I instigated after we stayed out too late.

In college, I was required to take Ethics. In addition, like many college students, we stayed up late, typically on a Sunday night, discussing deep, “important” issues. These often included the ethics of many situations as we transitioned from the black-and-white ethical world of childhood to a world where every ethical situation varied. Everything became shades of gray, and we spent hours debating where within that sea of gray we should draw a line.

Many startups today are founded by people who did not finish college. Others are founded by people who spent their free time in college working on their grand designs. They all burst onto the scene and are mentored by people focused on how to impress the VCs or build a team that works well together. This typically means hiring people exactly like the founder.

The importance of ethics is lost in this environment. It is not taught. When experienced engineers are brought into the fold, they are brought in to solve tough technical problems, not to think about the ethical nature of the work. Everyone is working for the big payday coming from going public or selling to the highest bidder.

It is all about the money. Greed is, once again, good.

This is why my peers and I have failed. We have failed to make sure that everything done in the industry is viewed from an ethical angle. We haven’t even tried. Everyone is busy trying to find the right talent that can adapt to rapidly changing technology. We forget about the people on the other end of our products. We may take time to think about how they use a product, but we never take time to think about what the product is doing to them and to society.

Which brings us back to A/B testing.

Do we need to revisit A/B testing from an ethical perspective? Maybe. That is a debate that should take place within the marketing industry. Advertising has always been an ethical quandary and it needs constant evaluation.

We also need to make sure that all these firms with either questionable, or unknown, ethical direction do not start to abuse their power. Facebook has already proven it can influence people to go out and vote. What if someone offers them money to make supporters of one political party go vote and supporters of other parties stay home?

Who would ever know?

Facebook, and other platforms, are standing on the edge of a slippery slope. They can step back or begin a slow descent. It is up to us to help them make the decision. We not only have to make sure they see the ethical problems in what they have done, but we have to help Facebook learn how to properly navigate future ethical issues.

Otherwise we’ll go back to the good old days when everyone’s future was decided by a few powerful men in a back room smoking cigars and drinking whiskey.
