Kids Love AI: And Teachers Should Too
This editorial is 100% inspired by Steven Sinofsky’s essay explaining why using AI in school is not cheating.
Regular readers know I spent two weeks in Europe this summer. My two youngest sons were with me. The youngest, 17, has a medical condition that means he missed a lot of 11th grade. He had summer school and then, while on vacation, had to do two semesters of English using an accredited online course.
We were super disciplined. Each day he completed at least one module (10 questions of the 200 required). At the end of each module, we fed his answers into ChatGPT and asked it to score each answer and justify its assessment.
He’s a good student, so ChatGPT disagreed with him only a few times across the 200 questions. When it did, he stuck with his answer twice and changed it several times. A few times he was right to change; a couple of times he was wrong to; twice, both he and ChatGPT were wrong.
His overall score of around 90% was unaffected by the collaboration. But he sat with me, reading the reasoning behind every question, and spent a lot of time weighing that reasoning before deciding. What emerged was a deeper understanding of the material, precisely because the reasoning was spelled out.
It was like having the teacher sitting right there with us. When appropriately utilized, ChatGPT is an excellent, scalable virtual teacher for each child in a class.
Steven Sinofsky explains how technology is always initially considered to enable cheating before eventually, and usually quickly, becoming an accepted practice. The same is likely to happen with AI.
There is much to read in this week’s selections below pertinent to last week’s newsletter on regulation in Europe and the USA. Benedict Evans writes about Google and search. He explains how monopolies are defeated by innovative new approaches. In the case of search, AI seems to be that innovative new thing:
… one rather deterministic lesson we might draw from all the previous waves of tech monopolies is that once a company has won, and network effects have become self-perpetuating and insurmountable, then you don’t beat that by making the same thing but slightly better, and getting a judge to give you an entry point. You win by making the old thing irrelevant.
More broadly, the EU is following one gaffe with another. It has delayed its findings on Meta’s use of public posts to train AI. Why? The Information tells us:
Meta says it has been told by European regulators to delay using public posts on Facebook and Instagram to train its AI models “not because any law has been violated but because regulators haven’t agreed on how to proceed.”
And Rohan Silva in The Times weighs in:
Given the pivotal role that technology plays in driving economic growth and productivity gains, you’d assume Brussels would be pulling out all the stops to help digital businesses and close the growth gap with the US. Mais non. Europe seems hellbent on going the other way, becoming ever more statist and anti-innovation.
In light of this, I can only applaud Martin Casado of Andreessen Horowitz in this week’s ‘X of the Week.’ He publishes OpenAI’s letter to the California Government:
While we believe the federal government should lead in regulating frontier AI models to account for implications to national security and competitiveness, we recognize there is also a role for states to play. States can develop targeted AI policies to address issues like potential bias in hiring, deepfakes, and help build essential AI infrastructure, such as data centers and power plants, to drive economic growth and job creation. OpenAI is ready to engage with state lawmakers in California and elsewhere in the country who are working to craft this kind of AI-specific legislation and regulation.
He ridicules OpenAI for this openness to regulation, pointing out that it is self-serving:
OpenAI’s letter opposing SB 1047. Wonderful to see them protect California’s AI interests broadly. And doing so as an incumbent who is in position to gain from regulation at the expense of others. Thank you OpenAI!
My critique is that OpenAI reinforces a trend that puts regulators in the role of product managers, deciding which innovations are and are not permitted. Down that road lie only bad outcomes for civilization. Abandoning faith in science and scientists in favor of trusting regulators is an abandonment of reason. Worse, it delays innovation pending regulatory approval, as Meta’s EU adventure shows, and the delay may be a long one. I side with Steven Sinofsky, Martin Casado, and Rohan Silva.
Hat Tip to this week’s creators: @stevesi, @jaredhecht, @benedictevans, @chudson, @Silva, @SylviaVarnham, @geneteare, @ZeffMax, @martin_casado
Contents
- Editorial: Kids Love AI
- Essays of the Week
- Using AI for School is NOT Cheating
- Uber Appreciation
- Competing in search
- A16z And Founders Fund Lead The Way In Defense Venture Capital
- Video of the Week
- Charles Hudson on Venture Strategies at Different Stages
- AI of the Week
- We can’t let EU’s anti-AI rules drag us down too
- CEOs of Meta, Spotify Argue EU Regulation Hampering Open-Source AI
- News Of the Week
- Lucky 13: A Baker’s Dozen Join Unicorn List In July
- The Most-Active US Series A And B Investors Made More Bets In H1 2024
- Startup of the Week
- Meet Black Forest Labs, the startup powering Elon Musk’s unhinged AI image generator
- X of the Week
- Martin Casado from A16Z on OpenAI and SB 1047