ChatGPT scores 70% on a sample United States Medical Licensing Exam
Plus: commercial text-to-3D generators have officially arrived
Welcome to The Cusp: cutting-edge AI news (and its implications) explained in simple English.
In this week’s issue:
- Commercial text-to-3D generators have officially arrived (& what it means for the industry)
- ChatGPT scores 70% on a sample USMLE, leading many to wonder what’s next
- Deepfakes improving rapidly in both quality and deployment capability
Let’s dive in.
Commercial text-to-3D generators have officially arrived
Text-to-3D, little more than a pipe dream just two years ago, has now been implemented in a commercial product.
Luma AI, a company that specializes in applying AI to 3D modeling, just released an alpha of Imagine, a product that lets users create 3D assets from text alone.
These aren’t simple renders, either; they’re full-fledged 3D models that can be imported into popular CGI software platforms like Blender or Maya.
For instance, the prompt "spider man" yields a lifelike model of the popular comic book character, and it takes just a few seconds to generate. Bonus: Imagine also features a revolving showcase of users' generations. Check out the quality for yourself in your browser.
How can we take advantage of this?
The results are extremely impressive: Imagine is able to generate high-quality models in minutes that previously might have taken professionals hours (or days) to produce.
Obviously, there are economic ramifications:
- 3D assets, already slowly commodifying as a result of improved software, will become even cheaper to obtain.
- CGI artists can take on more jobs in less time, and serve markets they previously couldn’t access due to cost constraints.
- Businesses will soon be able to create 3D assets on-demand via similar services, with no need to hire professional designers.
Taken together, this points to a few opportunities.
First, forward-thinking 3D artists (i.e., those who embrace progress rather than abhor it) will be able to multiply their leverage. Individual freelancers or consultants can make a quick buck on gig platforms like Fiverr or Upwork, or through asset marketplaces like TurboSquid.
Second, businesses and studios that routinely leverage 3D assets will begin to look into more ambitious projects, which will impact market dynamics.
Games already employ procedural environment design largely to sidestep excessive asset costs (design and animation can currently constitute up to roughly half of a title's total budget).
But what happens to the game marketplace when that drops to <1%?
Expect industry velocity to shift from years to months, and current “highly-anticipated” titles to quickly grow irrelevant (looking at you, Star Citizen).
ChatGPT scores 70% on sample USMLE
The USMLE, or United States Medical Licensing Examination, is a series of standardized tests designed to evaluate the knowledge and skills of doctors in training. It's widely considered one of the most challenging professional licensing exams.
ChatGPT just scored 70% on it.
To be clear: ChatGPT wasn’t given a real, live USMLE. Those typically take up to 7 hours and involve strict monitoring in isolated physical conditions.
Instead, Kenneth Goodman, a staff engineer at Google, fed ChatGPT 119 questions from a USMLE Step 1 sample exam and scored its answers against the provided answer key.
Its phenomenal performance is indicative of just how shaky the foundation of modern medical education is, and of how, in just a few years, LLMs are going to have a massive impact on both medicine and the healthcare industry as a whole.
What does this mean for society?
A strong preclinical education in medicine rests almost entirely on the capability of doctors-in-training to memorize a litany of facts, symptoms, and diagnoses.
Think facts like: if a patient expresses symptom A, B, and C, then the most likely diagnosis is D. Or if a patient has condition X, is approximately Y years old, and has a blood pressure of 120/80, then the most likely treatment is Z.
This makes up the majority of preclinical medical school (usually years 1 and 2). And it turns out to be exactly the kind of task that ChatGPT and other LLMs excel at.
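To make the shape of that task concrete, here's a toy sketch of the pattern-matching that preclinical recall boils down to. The rules and conditions below are purely illustrative (not medical guidance), but the structure — map a bundle of observed symptoms to the closest memorized pattern — is exactly the kind of associative recall LLMs learn at scale:

```python
# Toy illustration: preclinical-style recall as "symptom pattern -> diagnosis" lookup.
# The rule table here is hypothetical and vastly simplified.
RULES = {
    frozenset({"fever", "stiff neck", "photophobia"}): "meningitis",
    frozenset({"chest pain", "shortness of breath", "arm numbness"}): "myocardial infarction",
}

def most_likely_diagnosis(symptoms):
    """Return the diagnosis whose symptom pattern overlaps the input most."""
    best, best_overlap = None, 0
    for pattern, diagnosis in RULES.items():
        overlap = len(pattern & set(symptoms))
        if overlap > best_overlap:
            best, best_overlap = diagnosis, overlap
    return best

print(most_likely_diagnosis({"fever", "stiff neck"}))  # meningitis
```

A real diagnostic model obviously weighs probabilities, patient history, and thousands of conditions, but the point stands: this is lookup-shaped work, and lookup-shaped work is where LLMs already shine.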
In less than a decade, models will probably reach the point where they score significantly higher than most doctors do in consultative contexts.
Theoretically, AI will then be able to deliver far better objective diagnosis and care, far faster, and at a fraction of the price.
But will we begin using them? Medical regulatory boards are among the most conservative and resistant to change. Doubly so when it’s their careers on the line.
General practitioners comprise close to 59% of all physicians in the United States, and their role is precisely the one facing the greatest risk of automation over the near term. I suspect we're running headfirst into a monstrous ethical dilemma.
Another thing to consider is the economics of such a transition. If the supply of high-quality medical diagnoses increases by several orders of magnitude, what will happen to the demand?
Medicine is expensive and prestigious precisely because it’s exceedingly difficult for humans to do well. Long training times and tight regulations add to its coveted nature.
The sociocultural balancing act has led dozens of billion-dollar industries to rest firmly on medicine as their foundation — think insurance, pharmaceutical companies, etc. But when market dynamics get flipped on their head, what do you think will happen to them?
The opportunities in this space are ripe for forward thinkers.
Deepfakes are getting really, really good
Deepfakes are improving at a breakneck pace.
But with generative media exploding in both popularity and utility, it’s worth defining what “deepfakes” are a little more clearly.
In contrast to, say, AI art, deepfakes are computer-generated media that (usually) superimpose a photorealistic target image on top of a source video.
The target is mapped onto the source’s movements using a model that’s been specifically trained to correlate the two.
Results from a few years ago were pretty choppy. Earlier this year, deepfakes of Putin were all the rage — but even these were often off-target, and many viewers could tell the difference.
But over the last few months, novel approaches have significantly improved deepfake quality: better photorealism, more accurate facial and body mapping, and so on.
What does this mean for society?
I’m going to run counter to most of the alarmist media here and say that deepfakes are a net positive.
Yes, they’ll be used to spread disinformation — but reasonable countermeasures are being rapidly developed. And regardless of how much they’re abused, deepfakes will also revolutionize video production, gaming, and media in general.
Most people are quickly growing used to the idea that what they see may or may not be real anyway. Within a few years, video will drop off as the gold standard for court evidence because of how easily it can now be fabricated, meaning deepfakes have an incredibly short window of opportunity to cause global chaos (and we're probably mostly out of that danger zone already).
Here are some other impacts and opportunities:
- Actors and artists are already selling their likenesses to game studios (see: Keanu Reeves in Cyberpunk), but their faces are often manually mapped in a painstaking and expensive process. High-quality deepfakes will make that cost marginal, greatly accelerating this trend and leading to an explosion in branded content.
- Deepfakes will help people with disabilities (think ALS or Parkinson’s Disease) communicate more effectively using avatars that accurately mimic their intended facial expressions.
- CEOs, executives, and other high-profile individuals will use deepfakes to personalize their outreach and increase their influence. First day on the job at Google? How about a personalized training video from Sundar Pichai?
There are undoubtedly going to be issues — like deepfake pornography of actresses, or very slight alterations to political footage that will lie undetected for long periods of time. But, like most disruptive technologies, the benefits will outweigh the costs.
That’s a wrap!
Enjoyed this? Consider sharing with someone you know. And if you’re reading this because someone sent it to you, get the next newsletter by signing up here.
You can also follow me on Twitter if you’d prefer a shorter format.
See you next week.
– Nick