The AI Craze Sweeping Silicon Valley Isn’t Just Another Crypto Fad.

Library of Trader
Coinmonks
14 min read · Mar 8, 2023


Is ChatGPT an opportunity or a danger for the industry? — Source: CNBC

It will be the “greatest force for economic empowerment” that society has ever experienced.

It will eliminate our jobs. It will “generate a new form of human consciousness.” We’re all going to die from it.

The new artificial intelligence known as “generative AI,” which can produce original content like essays, fine art, and software code, is the talk of Silicon Valley.

If you’re one of the more than 100 million people who have used ChatGPT or Lensa, the well-known image-creation app, you are already familiar with how the most recent iteration of this technology functions.

The proponents of generative AI claim that tools like ChatGPT, developed by the Microsoft-backed startup OpenAI, are just the tip of the iceberg in terms of the technology’s potential. Many think it’s a once-in-a-lifetime technological innovation that could have an impact on almost every aspect of society and upend industries like law and medicine.

According to Sandhya Venkatachalam, a partner at renowned VC firm Khosla Ventures, which was an early investor in OpenAI, “AI had a ‘wow’ moment” in November with the release of ChatGPT. She compared current generative AI developments to the development of the internet.

“This is definitely on the same scale, in my opinion.”

Silicon Valley has been waiting 20 years for its next real technological breakthrough. The 1980s brought the personal computer, the 1990s the internet, and the 2000s the mobile phone and the universe of apps it supported. Since then, the tech industry has been anticipating the next major discovery (some are still optimistic that it will be Web3 or AR/VR). Generative AI is now widely regarded as a contender.

However, people in Silicon Valley have a tendency to overstate the benefits of new technologies. Is the excitement surrounding generative AI just hype? Possibly. If you’ve followed the rise and fall of cryptocurrency or heard grandiose predictions about how we would all be residing in the metaverse by this point, you may have some concerns.

The answer is that while generative AI has received a lot of exaggerated hype, for many people it is much more grounded in reality than Web3 or the metaverse ever was. The main distinction is that generative AI is already being used by millions of people to write books, produce artwork, and write computer code. According to a recent Morgan Stanley report, ChatGPT is breaking records for how quickly users have adopted it. It only took the app five days to reach 1 million users, while Instagram took 2.5 months and Twitter took two years. With apps like ChatGPT, DALL-E, or Lensa, almost anyone can quickly understand the potential of generative AI technology, despite the fact that it is still in its infancy. Which explains why so many companies, big and small, are rushing to profit from it.

We’ve already witnessed how generative AI is directing the corporate strategy of major tech firms in just the last few months. Both Google and Microsoft, which are fierce rivals, are releasing their own chatbots and incorporating generative AI into essential services like Gmail and Microsoft Word. As a result, it’s possible that in the near future, billions of new users will encounter the technology, both through chatbots and inside the apps that are central to how we communicate with one another every day. Other major tech companies like Meta, Snap, and Instacart are also quickly integrating generative AI into their main apps.

It’s not just the big tech companies. The buzz surrounding generative AI has sparked a new wave of investment into smaller startups at a time when money is tighter than it once was in Silicon Valley: according to Crunchbase News, startup deals in the North American tech sector as a whole fell by 63 percent in the fourth quarter of 2022 compared to the year before.

The strongest indication yet that generative AI is more than just hype is that people from all walks of life, including many who wouldn’t consider themselves tech experts, are using ChatGPT for unexpected purposes. College students are using it to cheat on essay exams. Job seekers are using it to get out of writing the dreaded cover letter. Media outlets like BuzzFeed are using it to create listicles and assist with reporting.

According to Peter Welinder, vice president of product and partnerships at OpenAI, “There used to be this question about, ‘Is this technology ready for building useful products for people?’ What ChatGPT really demonstrated is that people are using it for all kinds of use cases, and people from all walks of life find it useful.”

There are many questions and concerns about the new technology. Generative AI has the potential to perpetuate negative biases, empower con artists, spout false information, destroy jobs, and, some fear, even pose an existential threat to humanity if left unchecked.

Here is what to make of the frantic, anxious chatter surrounding generative AI.

Distinguishing hype from reality

ChatGPT is now a trend — Source: CNN

At a time when the industry needed some excitement, generative AI has sparked a frenzy in tech, generating VC funding, industry events, and hacker houses full of 20-somethings working on their next AI project.

According to a recent report from business research firm CB Insights, investors poured more than $2.6 billion into 110 deals toward generative AI startups in 2022, setting a record high for investment in the sector. Major tech firms have made some of the largest investments in this field: Google invested $300 million in the generative AI startup Anthropic in February, and Microsoft invested $10 billion in OpenAI in January.

James Currier, co-founder and partner at technology venture capital firm NFX, said that one of these technology waves comes along every 14 years. In the past few years, Currier’s firm has invested in eight generative AI startups, and over the past two months he has personally spoken with about 100 generative AI startups. “Everything is going to change a little bit,” he said.

However, despite the rise in overall funding in this field, many generative AI startups are operating on tight budgets, and some have no funding at all. The CB Insights report identified 250 generative AI companies; 33 percent of them have no outside equity funding, and another 51 percent are at Series A or earlier, demonstrating how young many of these businesses are.

The cost of training a single large AI model can run into millions of dollars, which is a significant obstacle for these AI upstarts. According to a recent report by the advanced AI research group EpochAI, the cost of training the kinds of machine learning models that generative AI relies on could grow to as much as $500 million for a single model by 2030, driven by the growth in the volume of internet data.

“We are not 200-billion-parameter model training experts. It’s a royal sport,” said Sridhar Ramaswamy, the CEO of Neeva, a search engine that doesn’t display advertisements and recently debuted an AI version of its product. “We don’t have the kind of money you need.” Ramaswamy said that startups like his can succeed by focusing on niche use cases (in his case, search), but that before building a product, startups must determine, “Is this a fad? Or is it developing distinctive user value?”

None of these difficulties appear to be lessening the enthusiasm for the new AI and its potential. It seems like the excitement of the mobile startup boom of the late aughts has returned with the explosion of generative AI meetups, co-working spaces, and conferences in San Francisco and Silicon Valley in recent months. Numerous AI-related events were held in San Francisco in February, including a hackathon with a generative AI focus, a lunch for women in AI, and a workshop on “Building ChatGPT from scratch”. The Hayes Valley neighborhood in San Francisco has earned the moniker “Cerebral Valley” thanks to a sudden influx of AI-related events and businesses.

Ivan Porollo, a tech entrepreneur who recently relocated to San Francisco and co-founded the Cerebral Valley newsletter and AI community, said he was “very bullish” on the entire AI wave because it feels like it is at the same stage the app store was at its release. “Simply put, it feels unique. It appears that this is a technological generation that will have an impact on our future for the rest of our lives.”

On Valentine’s Day in San Francisco, Jasper, a startup that uses generative AI to create marketing copy, hosted a sold-out conference with over 1,000 attendees. The atmosphere was teeming with hope and excitement. Attendees focused on the stage, largely ignoring the Bay Bridge’s breathtaking waterfront views as they listened intently to executives from some of the top generative startups, including OpenAI, Stability AI, and Anthropic.

Nat Friedman, a former GitHub CEO who is now an investor, said while sitting cross-legged on a stage for an interview, “I think this is going to rewrite civilization. Buckle up.”
Friedman was one of many speakers that day who insisted that, despite their limitations, recent developments in AI are revolutionary.

A lot of the founders I’ve spoken to at these generative AI events have innovative ideas. For example, one platform would allow architects to create designs based on written descriptions of the type of building they want to construct. Another app would send you an email every day with the most popular social media posts based on your interests. However, the majority of their startups are still in the very early stages and can only present a rough demo or just an idea.

The creation of marketing content and other forms of media is currently one of the more advanced use cases for generative AI. One of the best examples of that is Jasper. The two-year-old business uses AI to produce marketing copy for things like blog posts, sales emails, SEO keywords, and advertisements. According to the company, it generated $35 million in revenue in 2021 and had nearly 100,000 paying clients as of December, including companies like Airbnb, IBM, and Harper Collins. The business raised $125 million in funding in November at a $1.5 billion valuation. We don’t know if Jasper is profitable because it didn’t disclose its expenses to Recode.

OpenAI is now being used by some media organizations, like BuzzFeed, to develop personality tests and facilitate brainstorming among staff members. Additionally, Stability AI, an open source generative AI company, says that film studios pay to use its software to automatically generate imagery.

However, generative AI holds the greater promise of altering our world in ways that go beyond just creating advertisements. The technology’s most ardent supporters believe that it will revolutionize professions like medicine and law by diagnosing illnesses and making legal arguments more effectively than humans. Leading academic authorities warn that we are still a long way from that, and some wonder if we will ever get there.

“I’m not convinced that some of the really fundamental issues with these [AI] systems, like their inability to determine whether something is true or false… I’m not sure those issues will be that simple to resolve,” said Melanie Mitchell, a professor of cognitive science and artificial intelligence at the Santa Fe Institute. “I believe that these issues will prove to be more challenging than people anticipate.”

Regulators themselves have some reservations. In a recent blog post, the FTC advised tech firms to “keep your AI claims in check” and “not to overpromise what your algorithm or AI-based tool can deliver.”

“If you think you can get away with baseless claims that your product is AI-enabled, think again,” the post stated, echoing a critique of recent AI buzz that many companies are simply tacking “AI” onto whatever they’re doing just to capitalize on the hype.

The hype surrounding AI is not new. According to a 2019 study by a VC firm, 40% of European “AI startups” weren’t actually using AI in their core businesses. Some detractors are now concerned that the recent hype surrounding generative AI in particular is mostly unfounded. It doesn’t help that some major corporations’ attempts to incorporate AI have failed, such as Microsoft’s Bing AI chatbot’s erratic responses to users or tech publication CNET’s botched attempt to automate financial columns that resulted in widespread plagiarism and the dissemination of false information.

I questioned venture capitalist James Currier about the possibility of overhyping generative artificial intelligence.

He said, “I think this is the sort of cultural issue that people have with Silicon Valley, which is that we like drinking the Kool-Aid. We ought to be swilling the Kool-Aid, getting giddy about things, and working hard on our creative potential, because technology is just waiting for us to catch up with it.”

The drawbacks and risks associated with generative AI

Despite its enormous potential, generative AI has significant drawbacks and serious risks. Broadly, these dangers fall into three groups: spreading false information, promoting harmful or offensive content, and threatening people’s livelihoods or autonomy. And because major tech companies Google and Microsoft are now racing to outdo each other, we are seeing this technology rolled out to the general public while it still has these issues.

First, generative AI is susceptible to factual errors. A lot. BingGPT, Microsoft’s version of ChatGPT, couldn’t tell you where to catch the new Avatar movie when it first came out, despite having a recently updated index of the entire internet (it insisted that Avatar 2 was not yet in theaters). And Bard, a prototype of Google’s upcoming chatbot, gave an incorrect answer in its demo about the first telescope to take pictures of a planet outside our solar system.

Although these systems excel at some tasks, Mitchell noted that they frequently commit strange, almost incomprehensible mistakes that clearly demonstrate their lack of human-like thought processes.

Because generative AI’s development was largely conducted behind closed doors over the past few years, it was challenging to determine just how advanced it was. Long regarded as the leader in the field, Google works with some of the top AI researchers in the world, but its generative AI capabilities were largely hidden from the public, aside from what it published in academic journals.

Everything changed when OpenAI partnered with Microsoft to quickly bring its most recent generative AI technology, ChatGPT, to the masses. Microsoft fanned the flames by building the underlying ChatGPT technology into its own “BingGPT” chatbot, challenging Google’s market dominance and igniting a competitive technology arms race.

Microsoft CEO Satya Nadella told The Verge last month: “I hope that with our innovation, [Google] will definitely want to come out and show that they can dance. And I think that will be a great day, because I want everyone to know that we made them dance.”

Under intense pressure to demonstrate its own generative AI capabilities, Google announced it would soon be releasing its own AI chatbot, Bard. The business claims that because it wants to make sure it is acting responsibly, it has taken longer than some of its rivals to make generative AI technology publicly available.

In a recent interview with Recode, Douglas Eck, Google’s director of research for its AI-focused Brain team, said, “The strategy we’ve chosen is to move relatively slowly in releasing these models. History will show if we’re acting responsibly.” Google has been cautious up to this point for good reason: generative AI has the potential to do more harm than just presenting false information.

As demonstrated by the image-generation app Lensa sexualizing women’s avatars, AI can reflect the racist and sexist biases of the data it is trained on. And by displacing jobs at an unpredictable scale, it can contribute to economic instability at a macro level.

AI can also be abused on purpose. One recent instance: a reporter used a fake recording of his own voice, generated with an AI audio tool, to call his bank and successfully break into his account. Another: during a protracted philosophical conversation, Microsoft’s AI chatbot told New York Times reporter Kevin Roose that it wanted to be alive, declared its love for him, and urged him to leave his wife, an exchange that left Roose “deeply unsettled.”

The concern is that AI may be used to subtly or intentionally influence people’s perceptions of reality and emotions, such as when a scammer uses it to pose as someone else or when the AI behaves in an unexpected way (such as in the case of BingGPT going “unhinged” with its emotionally loaded responses).

Taking this several steps further: the most ardent supporters of generative AI are also concerned that it may one day surpass people in intelligence and pose an existential threat to humanity. A fear of what is known as “AGI,” or artificial general intelligence, the idea that AI will eventually reach a level of general intelligence that matches or exceeds human capabilities, is what led to the creation of OpenAI, which originally started as a nonprofit.

At a recent tech event, when asked about the best and worst case scenarios for AI, Sam Altman, CEO of OpenAI, responded, “The bad case — and I think this is important to say — is, like, lights out for all of us.”

This concept continues to be contested by many eminent scientists. Still, the worry is not new: “The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking warned back in 2014. Although it might seem impossible, the people who are developing this technology find it increasingly plausible, as my colleague Kelsey Piper wrote.

“People have fantasized about these robots that are like the ones in the movies, that can really do everything a human can do and even more, since the beginning of AI,” said Mitchell. “But we lack a set of standards by which we could say, ‘Well, it has accomplished these ten things, and we know it is fully intelligent.’”

We may be a long way from a world where killer AI robots seek vengeance on their human overlords, but the fact that generative AI’s developers are concerned about misuse is another reason we should treat it seriously.

The major players in generative AI, including OpenAI, Google, Microsoft, and Meta, also have internal policies and teams that consider the negative effects of their products. However, detractors claim that the commercial interests of tech companies can conflict with their moral ones. Early in 2021, Google reorganized its ethical AI team amid allegations that it had fired two of its leaders, Timnit Gebru and Margaret Mitchell, for criticizing bias in large language models.

Many people, including some tech companies, have called for external regulators to intervene and set up barriers. Although governments have historically lagged behind technological advancements, some states and cities have already passed laws restricting specific forms of AI, such as facial recognition and policing algorithms. The regulation of generative AI may start to look similar to that.

Because it is tangible, this new type of AI is easier to grasp than other recent technological trends like blockchain or the metaverse, which are highly conceptual. To see what generative AI is capable of, you don’t need a $400 VR headset or a cryptocurrency wallet. All you have to do is open a ChatGPT window or type a prompt into an app like DALL-E to generate art.
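For readers who want to go one step beyond the web apps, here is a minimal sketch of how a developer might call the same underlying models programmatically, assuming the openai Python package as it existed around the time of writing (the pre-1.0 interface). The API key and prompts below are placeholders for illustration, not part of any product described in this article.

```python
# A minimal sketch using the openai Python package (pre-1.0 interface, early 2023).
# The API key and prompts are placeholders for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: substitute your own key

# Text generation with the model behind ChatGPT (gpt-3.5-turbo at the time of writing)
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a two-line poem about Silicon Valley."}],
)
print(chat.choices[0].message.content)

# Image generation, DALL-E style: a text prompt in, an image URL out
image = openai.Image.create(
    prompt="a watercolor painting of the Golden Gate Bridge at sunrise",
    n=1,
    size="512x512",
)
print(image.data[0].url)
```

Either call returns in seconds, which is exactly why the barrier to trying this technology is so much lower than buying a headset or setting up a wallet.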

For better or worse, generative AI has the potential to fundamentally alter how we think about creativity, and the results so far speak for themselves. That’s why I predict it won’t be just a passing trend. If you don’t want to take my word for it, try it out for yourself.
