New Insights Focus on AI, the Evolving Digital Economy

MIT researchers explain five key trends they’re studying now

MIT IDE
MIT Initiative on the Digital Economy
4 min read · Aug 14, 2024


By Sara Brown, MIT Sloan

Businesses and individual users alike are grappling with how to use generative artificial intelligence in responsible and beneficial ways. To help guide them, researchers at the MIT Initiative on the Digital Economy are looking at how AI is being developed and used and exploring its potential and limitations.

At the 2024 MIT IDE Annual Conference in May, researchers shared insights and updates about their work. Topics ranged from quantum computing and responsible data use to how generative AI learns, how it affects hiring, and how it can help fight disinformation.


A new report from the conference offers a closer look at some of the researchers’ key findings. Among them:

1. People have complicated perceptions of AI-generated content.

As generative AI is increasingly used to create content, researchers are looking to understand how that content is perceived. According to a study by MIT Sloan senior lecturer Renée Richardson Gosline and MIT Sloan postdoc Yunhao “Jerry” Zhang, people say they generally prefer content created by humans. Yet when shown examples of AI-generated and human-created content, they expressed no aversion to the AI-generated versions — and when they were not told how the content was created, they actually preferred the AI-generated content.

Read the research: “Human Favoritism, Not AI Aversion”
Watch the conference session: “Human-First AI”

2. Data provenance is increasingly important.

AI models are trained on data — and it’s important to understand how that data was collected. Otherwise, the data could be inappropriate for an application, gathered illegally, or missing the right information. This is why a group of researchers, including Sandy Pentland and others from MIT, have collaborated on the Data Provenance Initiative, which audits the datasets used to train large language models. A related project, the Data Provenance Explorer, lets users select different criteria for — and see information about — data they might use.
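To make the idea of criteria-based dataset selection concrete, here is a minimal sketch in the spirit of the Data Provenance Explorer. The catalog entries, field names, and licenses below are invented for illustration and do not reflect the Explorer's actual schema or data:

```python
# Illustrative provenance-style filtering over a toy dataset catalog.
# All entries and field names are hypothetical examples.

catalog = [
    {"name": "corpus_a", "license": "CC-BY-4.0", "languages": ["en"]},
    {"name": "corpus_b", "license": "proprietary", "languages": ["en", "fr"]},
    {"name": "corpus_c", "license": "MIT", "languages": ["de"]},
]

def select(catalog, allowed_licenses, language):
    """Keep only datasets with a permitted license that cover a given language."""
    return [d["name"] for d in catalog
            if d["license"] in allowed_licenses and language in d["languages"]]

print(select(catalog, {"CC-BY-4.0", "MIT"}, "en"))  # ['corpus_a']
```

The point of the sketch is simply that provenance metadata (license, coverage, collection method) becomes actionable once it is recorded in a structured, queryable form.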

Read the research: “The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing and Attribution in AI”
Watch the conference session: “Building a Distributed Economy”

3. The democratization of AI has a long way to go.

AI research used to be evenly divided between academia and industry. This is no longer the case, according to a team of researchers that includes MIT research scientist Neil Thompson and postdoc Nur Ahmed. They found that over the past decade, industry has gained the upper hand when it comes to computing power and access to data, making it easier for businesses to hire talent, develop AI benchmarks, and invest in research. But that also means that industry is influencing the direction of basic AI research, raising concerns about whether future AI developments will be in the public interest.

Read the research: “The Growing Influence of Industry in AI Research”
Watch the conference session: “Artificial Intelligence, Quantum, and Beyond”

4. Companies managed by “geeks” are more agile than traditional organizations.

In his new book, “The Geek Way,” IDE co-director Andrew McAfee looks at how geeky companies such as Netflix successfully developed new management techniques. Geek companies “move faster, are a lot more egalitarian, give a great deal of autonomy, and try to settle their arguments via evidence,” McAfee said.

Read more about the research: “New Book Explains the ‘Geek Way’ to Run a Company”
Watch the conference session: “Technology-Driven Organizations and Culture”

5. Job loss from AI might not be as bad as some feared — at least, not right away.

In another study co-authored by Thompson, the researchers created a new AI task automation model to more accurately predict the pace of automation. Looking specifically at computer vision, they found that technical and cost barriers could leave about three-quarters of jobs unchanged in the near term. In the meantime, Thompson said, businesses can perform cost-benefit analyses to determine which tasks it would make sense to automate with AI.
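The cost-benefit logic described above can be sketched in a few lines. This is an illustrative toy calculation, not the model from the “Beyond AI Exposure” study, and all dollar figures are made-up assumptions:

```python
# Hypothetical cost-benefit sketch: automate a task only if the annualized
# cost of building and running an AI system is below the annual wages paid
# for that task. All numbers are illustrative assumptions.

def worth_automating(task_wage_bill: float,
                     build_cost: float,
                     annual_run_cost: float,
                     amortization_years: int = 5) -> bool:
    """Return True if the annualized AI cost undercuts the task's wage bill."""
    annualized_ai_cost = build_cost / amortization_years + annual_run_cost
    return annualized_ai_cost < task_wage_bill

# A task costing $40k/year in wages vs. a $300k system with $20k/year upkeep:
print(worth_automating(40_000, 300_000, 20_000))  # False — cheaper to keep humans
```

The example mirrors the study’s intuition: even when a task is technically automatable, high up-front system costs can make automation uneconomical for now.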

Read the research: “Beyond AI Exposure”
Watch the conference session: “Artificial Intelligence, Quantum, and Beyond”
Watch all the conference session videos.

Originally published at https://mitsloan.mit.edu on August 14, 2024.
