There's a discussion around the Stanford Alpaca model and whether its low-cost success (reportedly under $600) shows that LLMs don't have moats. I think Eliezer kicked off the discussion. It might be worth adding to your next post, as it raises interesting questions; it might also explain why these models are now fully embroiled in a corporate grab for territory, and the ring-fencing that goes with it. If a smaller model can use another, larger model to teach itself, in essence transferring the larger model's capabilities, do we enter a period where AIs pilfer from one another, and are therefore forced to mask their designs from each other?
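For concreteness, the "teach itself from a larger model" step is basically: send prompts to the big model, record its answers, and fine-tune the small model on those pairs. Here's a minimal sketch of the data-collection half, with `query_teacher` as a hypothetical stub standing in for a real API call to the larger model (the actual fine-tuning step is omitted):

```python
import json

def query_teacher(prompt: str) -> str:
    """Hypothetical stand-in for a call to a larger 'teacher' model's API
    (Alpaca used completions from a large commercial model this way).
    Here it just returns a canned string so the sketch is self-contained."""
    return f"[teacher response to: {prompt}]"

def build_distillation_dataset(seed_prompts):
    """Collect (instruction, output) pairs from the teacher.
    Fine-tuning a smaller 'student' model on pairs like these is
    the capability transfer being discussed."""
    return [
        {"instruction": p, "output": query_teacher(p)}
        for p in seed_prompts
    ]

dataset = build_distillation_dataset([
    "Explain photosynthesis in one sentence.",
    "Write a haiku about the sea.",
])
print(json.dumps(dataset[0], indent=2))
```

The point is that the "pilfering" requires nothing more privileged than ordinary query access, which is why masking or rate-limiting that access becomes the obvious defensive move.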