OpenAI recently released GPT-3, the third generation of its generative pre-trained transformer, showcasing incredible capabilities across many different applications. After the impressive text generation performance of GPT-2, GPT-3 now demonstrates the ability to perform new tasks after seeing only a few examples, or no examples at all!
All the evidence seems to point to one realization: simply scaling up these models is the key to unlocking transformative AI. However, the path towards the massive scale required for the next-generation GPT-X is hindered by the availability of compute and hardware infrastructure. We believe that photonic computing is uniquely positioned to address the limits of the current hardware stack in scaling to models with trillions of parameters.
Photonic computing can revolutionize the world. Here are five examples to illustrate what the future could look like.
Create a new kind of media company
The discussion about the future of journalism and AI started more than a year ago with GPT-2. The impressive human-like text generation shown by GPT-3 has shone a new light on the subject: a good title and introduction are now enough to obtain an article fit for publication. Photonic computing can help run these models more efficiently; as the hardware is industrialized, the price per query will become affordable enough to give birth to new “fast-food” media companies, where experts in prompting GPT-X complement writers. This means that journalists, writers, and bloggers will be able to focus more on the story they want to tell. They will spend less time thinking about how to write an article and will be able to produce more of them, so publications will either have more content or be able to cut their costs.
Help design new pharmaceuticals
Unlocking the information in protein sequence variation can be seen as a problem analogous to natural language understanding. Just as a word’s meaning can be derived from its context, in biology there is the idea that function and structure are recorded in the statistics of protein sequences selected through evolution. The success of generative pre-training in natural language processing has motivated the exploration of large-scale models trained on massive datasets of protein sequences, with good results. Leveraging the knowledge in such models, it is possible to engineer drugs with a higher probability of success. The expected growth of sequencing in the life sciences guarantees an ever-growing database of protein sequences. The ability of photonic computing to process large amounts of data faster will drive down costs and allow companies to explore more possible drug candidates.
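To make the language analogy concrete, a protein sequence can be split into overlapping k-mers that play the role of “words” for a language model. This is a minimal illustrative sketch, not the pipeline of any specific protein model:

```python
def kmer_tokenize(sequence: str, k: int = 3) -> list[str]:
    """Split a protein sequence (one letter per amino acid) into
    overlapping k-mers, the 'words' a language model over proteins
    could be trained on."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# A fragment of a protein sequence
fragment = "MKTAYIAKQR"
tokens = kmer_tokenize(fragment)
print(tokens[:3])  # ['MKT', 'KTA', 'TAY']
```

From here, the statistics of these tokens across evolutionary datasets are what a generative model learns, just as a text model learns word co-occurrence.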
Transform the programming paradigm
GPT-3 demos have shown that AI can automate common programming tasks, such as mocking up a website or writing a working React component. Menial programming tasks will be reduced to prompting a model rather than having a human write the complete code. So-called GPT-3 developers will be trained to give a model the instructions that yield code in a given programming language. Photonic computing will make this fast and efficient enough to bring the feature directly into programmers’ IDEs, improving their productivity.
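In practice, prompting a model for code often means assembling a few-shot prompt of task/code pairs and letting the model complete the final one. The sketch below only builds the prompt string; the function names are illustrative and the resulting text would be sent to whatever language-model API is used:

```python
def build_code_prompt(description: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt: worked description/code pairs,
    followed by the new task for the model to complete."""
    parts = []
    for desc, code in examples:
        parts.append(f"# Task: {desc}\n{code}\n")
    parts.append(f"# Task: {description}\n")
    return "\n".join(parts)

examples = [
    ("a button that says Hello",
     "const Hello = () => <button>Hello</button>;"),
]
prompt = build_code_prompt("a red button that says Submit", examples)
# `prompt` would then be passed to the model's completion endpoint.
print(prompt)
```

The few-shot examples steer the model toward the desired language and style, which is exactly the skill a “GPT-3 developer” would cultivate.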
Accelerate the evolution of search engines
Search engines are complex adaptive systems that evolve continuously, in a kind of adversarial game with people trying to get their web pages ranked higher, to be the first result users see when they ask the search engine a question. This effort requires large language models, first to answer the query, and then to answer it in different languages. Photonic computing can help industrialize large language models, enabling search engines to answer more complex queries, in more languages, and to spread faster around the world.
Reinvent recommendation systems
Traditional recommendation systems generate suggestions from user feedback: the clicks or ratings a user gives the items they interact with. These new massive language models make it possible to query through a natural language interface, solving several problems in the current recommendation approach: recommendations become context-aware (your mood may differ between today and tomorrow), and it becomes possible to generate suggestions for users with no data (the so-called cold-start problem). The recommender system can also find items from oblique clues, becoming more like a knowledgeable movie critic than a movie theater manager.
Such a recommendation engine is already out there, but scaling this approach to a Netflix-sized user base will likely require rethinking the hardware underlying these models, and photonic computing is a candidate to watch closely. Better recommendation systems improve a company’s ability to reach its target customers and also increase customer engagement: the relevance of the content users see drives their involvement. The impact on revenues is easy to see and could be significant.
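The natural-language interface described above can be pictured as a prompt template: when a user has no history, the request alone carries all the context (the cold-start case). A minimal sketch, with all names and wording purely illustrative:

```python
def build_recommendation_prompt(request: str, history: list[str]) -> str:
    """Turn a natural-language request into a model prompt.
    With an empty history (cold start), the request itself
    supplies all the context the model needs."""
    lines = ["You are a knowledgeable movie critic recommending films."]
    if history:
        lines.append("The user previously enjoyed: " + ", ".join(history))
    lines.append(f"Request: {request}")
    lines.append("Recommendation:")
    return "\n".join(lines)

# Cold start: a brand-new user, context carried entirely by the request
prompt = build_recommendation_prompt(
    "something slow and melancholic for a rainy evening", history=[])
print(prompt)
```

Because the context lives in the prompt rather than in accumulated clickstream data, the same template serves both new and returning users, which is what makes this approach attractive despite its heavier compute cost per query.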
LightOn is a hardware company that develops new optical processors that considerably speed up Machine Learning computation. LightOn’s processors open new horizons in computing and engineering fields that are facing computational limits. Interested in speeding your computations up? Try out our solution on LightOn Cloud! 🌈