Should Robots Pay Taxes?

Published in PC Magazine · Jul 14, 2020

Federal taxes are due this week in the US. But what about our synthetic coworkers? Should they cough up, too? Jordan Harrod, a Harvard-MIT PhD student, has something to say on the subject.

By S.C. Stuart

COVID-19 means the US delayed this year’s federal tax deadline to July 15; if you’ve procrastinated, it’s time to get to it. But while tax collectors will only accept payment from humans in 2020, will we soon be sending tax bills to robots, too?

That’s the question posed by Jordan Harrod, a medical engineering and neurobiology PhD student at Harvard by day and YouTube creator by night. On her channel, she digs into geeky topics like whether it’s possible to make artificial intelligence speak and the aforementioned android taxation.

As she explains, part of what US companies pay in taxes is tied to how many people they employ, chiefly through payroll taxes on wages. More machines and fewer people mean less money paid in taxes. So the argument is that companies that lay off human workers and move to automation should not necessarily get a big tax break, and the taxes they do pay should go to retrain or support people who are now out of a job.
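To put rough numbers on that argument, here is a minimal sketch; every figure is invented for illustration, not taken from Harrod or from US tax law.

```python
# Toy illustration of the robot-tax argument; all figures are invented,
# not from Harrod or PCMag, and do not reflect actual US tax law.

PAYROLL_TAX_RATE = 0.0765   # hypothetical employer-side payroll tax rate
AVERAGE_WAGE = 50_000       # hypothetical annual wage per worker

def payroll_tax(workers: int) -> float:
    """Employer payroll tax owed for a given headcount."""
    return workers * AVERAGE_WAGE * PAYROLL_TAX_RATE

before = payroll_tax(workers=100)   # before automation
after = payroll_tax(workers=40)     # after 60 jobs are automated away
lost_revenue = before - after

# A hypothetical automation levy could be sized to recover the gap
# and fund retraining, which is the redirection Harrod describes.
levy_per_machine = lost_revenue / 60
print(f"Payroll tax lost: ${lost_revenue:,.0f}")
print(f"Break-even levy per machine: ${levy_per_machine:,.0f}")
```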

“I’m certainly not an economist or tax professional, so I spent a lot of time digging into research and current policy proposals on this topic to make that video,” Harrod tells PCMag. “Personally, I like the idea of an automation tax, where the money is redirected to retraining programs that help people re-integrate into the modern workforce and unemployment payments to support them through their training.”

She acknowledges reports that say automation won’t have as dire an effect on the US employment rate as some predict, thanks in part to the emergence of new industries and technology. And even if a robot tax is put in place, it “wouldn’t necessarily solve the income inequality issue without other policies implemented in parallel to make sure that money gets to the people who need it,” she says. “You can take money from the rich, but if you don’t give it to the poor, it doesn’t really solve that problem.”

Still, it’s an “interesting question” to ponder, she says. How did Harrod come to be thinking about these and other complex issues? We spoke with her recently about her neurobiology studies, distinguishing AI hype from reality, algorithmic bias, and why she started a YouTube channel.

PCMag: You’re scheduled to complete a joint PhD from Harvard Medical School and MIT in 2023. What brought you to this field?
Jordan Harrod: I came to neuroscience somewhat by accident. I’ve done research in a few different fields since starting college, and used each past experience as a way of narrowing down what I wanted to do next. By the time I started my PhD, I’d narrowed my research interests down to something that would let me use machine learning and build devices for medicine, which is obviously still pretty broad.

And, interestingly, you’re working with not one but two labs.
At the end of the day, I settled on a project that I both found interesting and that was in a lab run by faculty who I felt matched my mentorship preferences, which actually happened to be two labs: the Neuroscience Statistics Research Lab and the Synthetic Neurobiology group.

Tell us the story behind your YouTube channel.
I was about to start my PhD, and knew I wanted machine learning to be a significant part of my research. However, given that my program focuses on translational and clinical research, I wanted to learn more about how people interact with algorithms, and I wasn’t able to find a lot of resources geared towards the average person. A lot of people supplement their education in one way or another by watching Crash Course or Khan Academy videos, so YouTube seemed like the place to be. And I’ve been involved in science communication since high school, as both student and teacher. So I put my plan to do science communication on YouTube together with my interest in AI, and the channel was born.

How do you come up with your sometimes strange and philosophical subject ideas?
They’re often random ideas that pop into my head when I walk down the street and see a piece of technology that I hadn’t really considered before. My goal is to give people the tools and information they need to interact with the algorithms that often govern significant aspects of our lives, even if we don’t always know it.

In your TEDx talk you say that few humans have ‘AI literacy’ and, as such, are at a disadvantage when interacting with expert systems.
It’s true. Many people can’t distinguish AI hype from reality, and I don’t think you need to be an expert researcher in order to do so. You also may not realize that you are interacting with an AI system in the first place, which can become problematic when these systems are making life-or-death decisions. More broadly, a lack of AI literacy can lead to policies and regulations that don’t effectively govern new technologies, and individuals engaging with biased systems without understanding the risks associated with that.

You also point out that AI isn’t just handling the big stuff.
It also affects your day-to-day experiences in small ways online. You may be missing content from friends and family members because an algorithm is prioritizing content that you react to instead, which, in turn, causes stress and negatively impacts your mental health.
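The mechanism she is describing is engagement-based ranking. As a rough, hypothetical sketch, not any platform’s actual system, a feed ranker that sorts purely on predicted reactions looks something like this:

```python
# Hypothetical engagement-based feed ranking, purely illustrative;
# no real platform's algorithm is this simple or public.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    from_close_contact: bool    # friend or family member
    predicted_reaction: float   # model's estimate you will click, like, or comment

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sorting purely on predicted reactions lets high-engagement content
    # crowd out posts from the people you actually care about.
    return sorted(posts, key=lambda p: p.predicted_reaction, reverse=True)

feed = rank_feed([
    Post("close friend", True, 0.10),
    Post("outrage-bait page", False, 0.85),
    Post("family member", True, 0.20),
])
print([p.author for p in feed])  # the engagement magnet lands on top
```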

What do you want people to do after watching your videos?
I think it depends on the video. Most of my videos are purely for education and awareness, so that people can walk away having learned something new that might help them the next time they encounter a similar system. On the other hand, I hope that people who find my AI 101 series interesting will keep building their programming skills and look for additional resources to do so.

On Juneteenth, you did a video explaining how AI preserves systemic racism through social systems like education, healthcare and the law. How can we make AI explicitly anti-racist to rectify this?
That’s the goal, isn’t it? Unfortunately, one of the many things that I’ve realized in making videos on AI fairness and bias research is that there’s no one way to ‘fix’ it, because the fix will always be predicated on your definition of fairness, which comes with your own personal biases.

Which is why diversity in AI is so important, to have checks and balances on so-called ‘hidden biases.’
Right. Balancing datasets and regularizing distributions during training are also steps you can take, but what a lot of the research has shown me is that community engagement is both extremely important, and often skipped, while developing AI systems. By involving the communities that will be affected by these technologies in every stage of the design and development process, you’re more likely to tailor your solution to the needs of the community and consider on- or off-target effects that you may not have been aware of otherwise.
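To make “balancing datasets” concrete: one common step is reweighting training examples so that an under-represented group contributes equally to the model’s loss. Here is a minimal, hypothetical sketch with synthetic data; it is our illustration, not Harrod’s code.

```python
# Minimal sketch of one "balancing" step: reweight training samples so an
# under-represented group contributes equally to the loss. Illustrative only;
# real fairness work also involves metrics, audits, and community engagement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                        # synthetic features
group = rng.choice([0, 1], size=1000, p=[0.9, 0.1])   # group 1 is under-represented
y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

# Inverse-frequency weights: each group carries equal total weight in training.
group_counts = np.bincount(group)
sample_weight = (len(group) / (2 * group_counts))[group]

model = LogisticRegression()
model.fit(X, y, sample_weight=sample_weight)
```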

Good point. What are your plans for post-doctoral life?
In terms of post-doctoral life, I haven’t settled on a path quite yet. Luckily, I just finished my second year of my PhD, so I have a good amount of time before I have to decide. I’d like to continue working on high-risk/high-reward projects, likely in industry, so places like Google X are interesting to me. But, I also wouldn’t be surprised if my dream job hasn’t been created yet. Some of these technologies move so quickly that there might be a whole new field for me to consider by the time I finish my PhD.

Finally, you hint that sometimes you use AI to curate YouTube videos for you. Tell us more.
Well, if I told you that secret, everyone would be able to do it!

Originally published at https://www.pcmag.com.
