Demystifying AI for Business Leaders: Current Trends and Challenges Part 1

Douglas Joseph Pecot
Automation Architech
4 min read · Apr 19, 2023

Everyone and their mother has something to say about AI these days. Whether you’re trying to garner media attention by signing a moratorium or making AI music videos, the entire world is grappling with this horrifically powerful and sometimes downright silly and fun technology. Individuals and organizations alike are figuring out how best to integrate AI into an existing project or start a totally new venture in this brave new world.

Here are some insights that can help you navigate the AI landscape with confidence.

Machine Dreams

Without providing an exhaustive list of examples (which can generally be found with a quick Google search), hallucination refers to AI output that is confidently stated but fabricated or unsupported by the model’s training data. Unfortunately (or fortunately?), at the time of writing, the best practice for detecting hallucinations is still having an actual human fact-check the AI’s output.

Prepare for takeoff

Let me share a personal experience I had with the GPT-4 model while using it for data analysis. I provided it with a CSV file to run calculations, and as I checked the intermediate steps, I discovered that it had generated a completely fake dataset to "answer" my question. At first, it seemed to be on the right track, stating: "I need to compare the two datasets to find the rows in the temp_sheet.tsv 'Location' column that have corresponding cities in the demo.csv 'City' column." However, things quickly went downhill from there.

```python
import pandas as pd

# Note: none of this data came from my files — the model invented it.
data1 = {'Name': ['John', 'Paul', 'Ringo', 'George'],
         'Age': [20, 21, 22, 23]}

data2 = {'Name': ['John', 'Paul', 'George', 'Ringo'],
         'Height': [180, 170, 175, 165]}

df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)

pd.merge(df1, df2, on='Name').head(3)
```

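For contrast, the task I actually asked for — matching "Location" values in temp_sheet.tsv against "City" values in demo.csv — can be sketched like this. The file and column names are the ones from my prompt; the data itself is a hypothetical stand-in, since the real files aren’t reproduced here:

```python
import pandas as pd

# Hypothetical stand-ins for the real temp_sheet.tsv and demo.csv.
temp_sheet = pd.DataFrame({"Location": ["Austin", "Boston", "Denver"],
                           "Temp": [35, 12, 18]})
demo = pd.DataFrame({"City": ["Austin", "Denver", "Miami"],
                     "Population": [965000, 716000, 442000]})

# Keep only the rows of temp_sheet whose Location appears in demo's City column.
matched = temp_sheet.merge(demo, left_on="Location", right_on="City")
print(matched[["Location", "Temp", "Population"]])
```

Checking an intermediate result like `matched` by hand is exactly the kind of human-in-the-loop verification that catches a hallucinated dataset.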
The Rise of Agent Behavior

With the launch of ChatGPT Plugins (most notably the ability to book flights) came the concept of “agent” behavior. An Intelligent Agent is essentially an AI that can receive environmental data and perform actions based on contextual information. In theory, it’s quite simple to “string” the inputs and outputs of different AI models to each other in a way that allows a main AI to interact with different assets in an agentic manner. LangChain is a popular open-source library for integrating this kind of behavior in your Python or JS project. There are many pre-built tools, clear examples in the documentation, and a highly-active developer community to help you integrate advanced AI usage into your project.
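The "stringing" idea can be sketched without any framework at all. In this toy loop, `call_model` is a hypothetical stand-in for a real LLM API call, and the two tools are placeholder functions — the point is only the shape: model output is parsed into an action, the action produces an observation, and a real agent would feed that observation back to the model:

```python
# Minimal agent loop sketch. `call_model` stands in for a real LLM call
# (e.g. a chat-completion request); the tools are toy functions.

def search_flights(query: str) -> str:
    # A real agent would hit a flight-booking API here.
    return f"3 flights found for '{query}'"

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"search_flights": search_flights, "get_weather": get_weather}

def call_model(prompt: str) -> str:
    # Stand-in for an LLM: pretend the model chose a tool and an argument.
    if "flight" in prompt.lower():
        return "search_flights: Austin to Boston"
    return "get_weather: Austin"

def run_agent(user_request: str) -> str:
    decision = call_model(user_request)                # model picks an action
    tool_name, _, argument = decision.partition(": ")  # parse "tool: arg"
    observation = TOOLS[tool_name](argument)           # act on the environment
    # A real agent loops here, feeding `observation` back into the model.
    return observation

print(run_agent("Book me a flight to Boston"))
```

Libraries like LangChain package this loop up with robust parsing, pre-built tools, and memory, so you rarely need to write it by hand.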

Bond, James Bond

Interested in implementing this AI solution for your business? Contact us for a consultation!

On a side note, OpenAI, if you happen to come across this article, I’m eagerly awaiting access to the plugin SDK! 😉

Data Leakage

Because of the data required to power Large Language Models (LLMs) and the advent of Reinforcement Learning from Human Feedback (RLHF), proprietary information has already been “accidentally” leaked into OpenAI’s training and validation data. This trend is likely to get worse before it gets better as researchers and developers seek a balance between feeding these data- and resource-intensive engines and securing user-submitted data. Unfortunately, the line between AI “power” and privacy is very thin due to the nature of how ML models work.
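One basic mitigation is scrubbing obvious identifiers from prompts before they leave your network. The sketch below is my own illustration — a couple of regex patterns and a `redact` helper — not a complete PII solution; real deployments use dedicated detection tooling:

```python
import re

# Minimal prompt scrubber: masks email addresses and long digit runs
# (account numbers, IDs, etc.) before a prompt is sent to a third-party
# model. The patterns here are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NUMBER": re.compile(r"\b\d{6,}\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@corp.com about account 123456789"))
```

Redaction reduces what can leak into a provider’s feedback and training data, at the cost of some context the model might have used.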

Data Leakage vs. Data Breaches

One way that organizations (such as Samsung) are combating this is by developing in-house LLMs that keep user prompts and interactions on internal servers. The downside is that this relies on:

  • A large enough supply of data to fine-tune an open-source model
  • Enough users and verifiers to implement RLHF within organizational processes

Next Steps

Armed with these insights, I hope you can embark on your AI journey with greater confidence, even in this rapidly changing landscape. Remember the timeless machine learning mantra: “Garbage In = Garbage Out.” Always double-check your work and keep humans in the loop to minimize any potential negative side effects of utilizing AI in your project.

📻 Stay tuned for Part 2 of this series, where we’ll dive into privacy and security issues related to AI. In the meantime, follow Automation Architech for more great content!

🧙‍♂️ We are AI application experts! If you want to collaborate on a project, drop an inquiry here, stop by our website, or shoot us a direct email.

📚 Check out some of our other content:
