Are We Ready to Live in a Digital World with AI?

Do they do more good or bad?

fireagama
Digital Society
7 min read · May 1, 2024


Photo by PIRO4D on Pixabay

Artificial Intelligence (AI) and automation have reshaped what it is like to live in a digital world. People complain that they steal jobs, raise privacy issues, amplify biases, and challenge ethics, sparking confusion about the implications of living in this digital reality.

Age of Automation and AI

“The Industrial Revolution and Its Consequences”

A peer wrote in their Medium post that Industry 4.0 will be the product of “people, data, and machines”, an idea I would like to touch on too. Industry 4.0 uses AI algorithms to analyse vast amounts of data collected from connected machines via the Internet of Things (IoT), predicting maintenance needs and optimising processes, leading to increased automation.
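The predictive maintenance idea above can be sketched in a few lines. This is a toy illustration, not a real Industry 4.0 pipeline: the sensor readings, window size, and threshold are all invented for the example.

```python
# Toy sketch of IoT predictive maintenance: flag a machine for service
# when the rolling mean of its recent vibration readings drifts above a
# threshold. All values here are invented for illustration.

from statistics import mean

def needs_maintenance(readings, window=3, limit=7.0):
    """Return True if the rolling mean of the last `window` readings exceeds `limit`."""
    if len(readings) < window:
        return False  # not enough data to judge yet
    return mean(readings[-window:]) > limit

vibration = [5.1, 5.3, 5.2, 6.8, 7.4, 7.9]  # simulated mm/s readings from one machine
print(needs_maintenance(vibration))  # rolling mean of last 3 is ~7.37 → True
```

Real systems replace the fixed threshold with a model trained on historical failure data, but the shape is the same: stream sensor data in, flag anomalies before the machine breaks.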

Exploring the Week 5 topic of the industrial revolution and disruptive technologies, automation shows impressive capabilities in self-driving cars like Google Cars, which use Lidar sensors to map their surroundings. This automated data processing provides a real-time picture for safe navigation.

Automation also powers the chatbots that answer customer service inquiries on websites and the algorithms that recommend products to online shoppers. Workplace automation and human collaboration can complement each other, leading to increased productivity and innovation.
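The recommendation algorithms mentioned above often start from a very simple idea: suggest items that other shoppers bought alongside what is already in your cart. A minimal sketch, with invented order data:

```python
# Toy co-purchase recommender: rank items that frequently appear in past
# orders alongside anything already in the user's cart. The order data
# below is invented for illustration.

from collections import Counter

def recommend(cart, past_orders, top_n=2):
    """Return up to `top_n` items most often co-purchased with the cart's contents."""
    counts = Counter()
    for order in past_orders:
        if set(order) & set(cart):          # order shares an item with the cart
            for item in order:
                if item not in cart:        # don't recommend what they already have
                    counts[item] += 1
    return [item for item, _ in counts.most_common(top_n)]

orders = [["laptop", "mouse"], ["laptop", "mouse", "bag"], ["phone", "case"]]
print(recommend(["laptop"], orders))  # ['mouse', 'bag']
```

Production recommenders use far richer signals (views, ratings, embeddings), but this counting logic is the intuition behind “customers who bought this also bought…”.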

For example, a study shows AI’s potential to flag issues and even suggest treatment options. However, the human doctor’s empathy and judgment remain crucial for diagnosis and patient care.

Social Interaction with AI and Chatbots

Photo by Mariia Shalabaieva on Unsplash

Imagine explaining to someone from 80 years ago that we can have conversations with machines and that they can help us at the same time — people would be baffled. Well, except perhaps for Alan Turing.

It’s really interesting to see how Snapchat recently incorporated a conversational AI, called My AI. It shows up just like a friend in your chat list, and you can talk to it anytime. It enhances user engagement and provides personalised assistance like deciding what to cook and planning trips.

Airlines like KLM offer chatbots (BlueBot) that assist passengers with booking flights, checking in, and managing travel details. Hotels use chatbots to answer guest inquiries about amenities, directions, and local recommendations.

Is it Ethical?

Photo By Lukas on Unsplash

AI’s influence extends beyond technology, shaping our societies and even our ideas of morality. The question is, can AI ever truly make moral judgments?

Recent studies highlight the inherent bias present in Large Language Models (LLMs). These biases can manifest in ways like generating racist or offensive text when prompted, perpetuating stereotypes in media.

Privacy Issues

AI systems require ever-increasing amounts of data to function. This data can be personal, including details about our browsing habits, online purchases, and even health information. The use of this data raises concerns about privacy violations, potential misuse, and our control over our personal information.

“For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound.”

Tim Cook — Apple CEO

(from Cook’s keynote address, 2018 EU Ethics Conference, Brussels)

Would you want your job to be taken by AI?

According to a 2023 report by McKinsey Global Institute, up to 30% of working hours globally could be impacted by automation. The effects will vary dramatically by country, sector, and occupation. Jobs with repetitive tasks in predictable environments are most at risk, such as data processing, back-office tasks, and manufacturing positions.

Critical Views

Photo by Brett Jordan on Unsplash

Hey ChatGPT, finish this building…

A more nuanced reality is that automation won’t simply eliminate jobs. While some tasks may become automated, jobs requiring human skills like management, expertise, social interaction, or working in unpredictable environments are less susceptible.

There is also a pressing need for regulatory and ethical frameworks that can guide the development and deployment of AI and automation systems.

Can we trust the fox to guard the henhouse?

A difficult question: can we trust AI creators to regulate themselves? Tech companies are likely to resist regulations that curb innovation or impact their profits.

But how powerful are the companies that hold our data or the rights to it?

The concept of privacy in the digital age is blurring. Tech companies reportedly wield considerable lobbying power, exerting influence on policymakers through lobbying and campaign contributions.

Is there an ethical obligation to avoid automation and AI?

My view is that a complete boycott of AI would be impractical. Most AI is already integrated into our lives, and avoiding it entirely could hinder productivity or access to services.

Our Responsibilities

Photo by Brett Jordan on Unsplash

As digital citizens, we can review our privacy settings and press tech companies to disclose their data sources. Supporting entities that prioritise digital rights and privacy helps influence public policy and legislation.

Most importantly, educating other citizens not to spread hateful biases and stereotypes that AI could train from is vital. Changing a question a bit from a comment box:

Might well-trained AI judgements be useful in mitigating human bias?

As discussed in Week 10, a well-trained AI could, for instance, screen resumes based on relevant skills and experience alone, reducing unconscious bias. A candidate named ‘Muhammad’ would then have the same interview opportunity as one named ‘Adam’, without bias.
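The resume example can be made concrete with a blind-screening sketch. This is an illustration of the principle, not a production screener: the scoring function simply never reads the name field, so identical skills yield identical scores regardless of the candidate’s name.

```python
# Illustrative blind resume screening: candidates are scored on skill
# overlap alone, and the name field is deliberately never consulted, so
# 'Muhammad' and 'Adam' with the same skills score identically.
# Candidate records and required skills are invented for the example.

def score_candidate(candidate, required_skills):
    """Fraction of required skills the candidate lists; the name is ignored."""
    skills = {s.lower() for s in candidate["skills"]}
    required = {s.lower() for s in required_skills}
    return len(skills & required) / len(required)

required = ["Python", "SQL", "Statistics"]
a = {"name": "Muhammad", "skills": ["Python", "SQL", "Statistics"]}
b = {"name": "Adam", "skills": ["Python", "SQL", "Statistics"]}
print(score_candidate(a, required) == score_candidate(b, required))  # True
```

Of course, real debiasing is harder than dropping a field, since proxies for protected attributes can hide in other data, but ignoring irrelevant attributes by design is the starting point.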

As responsible digital citizens, we should future-proof our careers against automation. It is suggested that we should:

  • Focus on developing skills that are difficult to automate, such as critical thinking, empathy, and interpersonal communication.
  • Identify fields like data analysis or cybersecurity where human skills remain essential.
  • Learn cross-disciplinary skills that can be applied in various roles and industries.

Conclusion

If AI continues advancing rapidly without the necessary changes, the drawbacks might outweigh the benefits. What good are these benefits if we lose control over our personal data and face ethical dilemmas? Thus, it is crucial for us digital citizens to adapt, learn, and use technology ethically to ensure a fair and progressive digital future.

Reflection

Photo by Михаил Секацкий on Unsplash

Course Experience

As an actuarial science and mathematics student (and as someone who’s also borderline chronically online), I’m usually comfortable with numbers. However, I wanted to step out of my numerical comfort zone to assess and write about the deep underlying issues of the online activities I regularly engage in.

Initially, I found it challenging to write for an online audience while maintaining a critical approach to my points. This course pushed me to critically evaluate the digital content I see daily, from digital engagement and recognising fake news to the ethics of AI and the internet as a whole.

Thoughts

I have mixed feelings seeing how rapidly society has changed into today’s digital world. It all started with advancements in machinery and communication, eventually leading to the internet and now the ‘IoT’, where everything seems connected.

I feel proud and inspired by our achievements, such as the rise of simulated spaces in VR and the development of smart cities. However, seeing how AI is beginning to spiral out of control or develop too rapidly, often in unethical ways, has made me a little scared too.

Evaluation

The course was wonderful; all nine topics covered were holistic and helped me understand the importance of viewing the digital world from a bigger picture. I appreciated the interactive comment boxes and polls that allowed me to engage and share my own ideas and thoughts.

However, I sometimes found myself struggling to firmly grasp a particular theme. I wish there were weekly quizzes and answers to test our understanding of the material; these would be great for strengthening what we have learned.

Analysis

Reflecting on my work in Digisoc1 and its feedback taught me the importance of maintaining a critical viewpoint when observing an individual or organisation online, regardless of whether the behaviour observed is positive or negative.

In Digisoc2, I was reminded not only of the importance of image attribution but also that simply listing pros and cons is insufficient without critical judgment. Reaching a well-reasoned conclusion supported by clear evidence is also a must.

Conclusion and Action Plan

I like how, rather than just highlighting the dangers of AI, the course also considered how it can be useful. The ethics theme struck me the most; it made me realise the impact of these ethical challenges on future generations, such as the way AI might limit our intellectual creativity.

Learning how social media platforms use algorithms and persuasive design to keep us engaged made this course an eye-opener for me: I now try to avoid getting sucked into ‘doom-scrolling’ while staying mindful of my digital footprint.

In the future, I will always remember the themes taught in this course because ultimately, what we do online definitely influences future generations, for better or worse, depending on our choices. It’s important to be conscious of how our contributions shape the digital society we’re building for the future.
