OpenAI’s DevDay 2023: How does it reshape the future of AI?

A recap of everything OpenAI announced at DevDay.

Chandler K
4 min read · Nov 8, 2023
[Image by Author and DALL-E]

The last 24 hours have set the stage for a transformative journey into the future of AI technology. OpenAI’s 2023 DevDay has ushered in a new era for ChatGPT, redefining our interactions with AI — from unique advancements in ChatGPT’s abilities to groundbreaking accessibility in GPT development.

GPT-4 Turbo: The New and Improved GPT-4

The highlight was the introduction of GPT-4 Turbo, an enhanced model offering an unprecedented 128K context window, roughly the equivalent of 300 pages of text in a single prompt. The new model also has an updated training cutoff of April 2023, alongside more affordable pricing for developers. As of November 7, 2023, it is available to all ChatGPT Plus members and to developers via the API.
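For developers, the new model is essentially a drop-in change. Below is a minimal sketch using the OpenAI Python SDK (v1.x), assuming the `gpt-4-1106-preview` model identifier announced at DevDay; the `demo` function and prompt are illustrative, not from the keynote:

```python
# A minimal sketch of calling GPT-4 Turbo with the OpenAI Python SDK (v1.x).
# "gpt-4-1106-preview" is the preview model identifier announced at DevDay.

def build_chat_request(prompt: str) -> dict:
    """Assemble the payload for a GPT-4 Turbo chat completion."""
    return {
        "model": "gpt-4-1106-preview",  # the 128K-context GPT-4 Turbo preview
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

def demo() -> str:
    """Live call; requires `pip install openai` and OPENAI_API_KEY set."""
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        **build_chat_request("Summarize DevDay in one sentence."))
    return response.choices[0].message.content
```

For teams already on GPT-4, the only change is the model string; the request shape is identical.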

The interface of ChatGPT has also changed significantly. Users no longer have to start a separate chat for DALL-E, Code Interpreter / Advanced Data Analysis, or Browsing with Bing; all three are now enabled by default. This is more than a convenience change: users can now combine all three tools in ONE chat session.

GPT-3.5 Turbo: Improving Costs and Context

Equally impactful, the updated GPT-3.5 Turbo now supports an extended 16K context window by default. This change comes with the added benefit of 60% lower prices for developers. Fine-tuning is also now available for certain applications.
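For those interested in fine-tuning, the flow follows OpenAI's existing fine-tuning endpoint. A hedged sketch, where `file-abc123` is a hypothetical placeholder for an already-uploaded JSONL training file ID:

```python
# Hedged sketch of starting a GPT-3.5 Turbo fine-tuning job.
# "file-abc123" is a hypothetical placeholder for an uploaded JSONL file ID.

def build_finetune_job(training_file_id: str) -> dict:
    """Parameters for a fine-tuning job on GPT-3.5 Turbo."""
    return {
        "training_file": training_file_id,
        "model": "gpt-3.5-turbo",
    }

def start_finetune_demo() -> str:
    """Live call; requires an uploaded training file and OPENAI_API_KEY."""
    from openai import OpenAI
    client = OpenAI()
    job = client.fine_tuning.jobs.create(**build_finetune_job("file-abc123"))
    return job.id
```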

Assistants API: Bringing AI to Application Development

The newly announced Assistants API is an incredible step for AI capabilities, offering a robust framework for building sophisticated, task-oriented AI assistants. These are equipped to handle an array of specialized tasks, from complex data interactions to conversational customer support, thanks to persistent threads and enhanced contextual understanding.

This new API not only simplifies the integration of AI into applications but also provides a more seamless and efficient way to manage and deploy AI-driven interactions in a variety of settings. The DevDay stream showcases an impressive example in action.
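The flow described above (a persistent thread, plus an assistant that runs against it) might look like the following sketch in the Python SDK. This is based on the DevDay announcement, not production code; the assistant name and question are made up:

```python
# A sketch of the Assistants API flow as announced: create an assistant,
# open a persistent thread, post a message, then run and poll for completion.
import time

def run_assistant_demo() -> list:
    """Live flow; requires `pip install openai` (v1.2+) and OPENAI_API_KEY."""
    from openai import OpenAI
    client = OpenAI()

    assistant = client.beta.assistants.create(
        name="Data Helper",                    # hypothetical assistant name
        instructions="Answer questions with short, precise calculations.",
        tools=[{"type": "code_interpreter"}],  # built-in tool from DevDay
        model="gpt-4-1106-preview",
    )
    thread = client.beta.threads.create()      # persistent conversation state
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user",
        content="What is the average of 3, 7, and 11?",
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant.id)
    while run.status in ("queued", "in_progress"):  # poll until the run settles
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id)
    return list(client.beta.threads.messages.list(thread_id=thread.id))
```

The key design point is that the thread, not your application, holds the conversation history, which is what makes the "persistent threads" mentioned above possible.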

Text-to-Speech Model: Additional Voices

The Text-to-Speech tool now includes a diverse set of six voices, providing users with additional personas to help customize AI interactions. Based on the demos shown, the new TTS can help with storytelling, virtual assistance, or content creation. While the demo was short, the voice sounded natural, nearly indistinguishable from a human.
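Trying the voices from the API is nearly a one-liner. A sketch, assuming the `tts-1` model and the six voice names OpenAI documents (`alloy`, `echo`, `fable`, `onyx`, `nova`, `shimmer`); the output filename is arbitrary:

```python
# Sketch of generating speech with the new TTS endpoint.
# The six documented voices: alloy, echo, fable, onyx, nova, shimmer.

VOICES = {"alloy", "echo", "fable", "onyx", "nova", "shimmer"}

def build_speech_request(text: str, voice: str = "alloy") -> dict:
    """Parameters for a text-to-speech request; validates the voice name."""
    if voice not in VOICES:
        raise ValueError(f"unknown voice: {voice}")
    return {"model": "tts-1", "voice": voice, "input": text}

def speak_demo() -> None:
    """Live call; requires `pip install openai` and OPENAI_API_KEY."""
    from openai import OpenAI
    client = OpenAI()
    speech = client.audio.speech.create(
        **build_speech_request("Hello from DevDay!"))
    speech.stream_to_file("devday.mp3")  # writes an MP3 locally
```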

Copyright Shield: Protecting Users

In a move to build user trust and confidence, OpenAI introduced the Copyright Shield, offering to support and cover legal costs for customers facing copyright infringement claims. While we still don’t know all the details, this aims to provide a safer environment for creators and businesses to utilize ChatGPT.

Visual Capabilities: Enhanced Creation and Understanding

The advancements extended to multimodal capabilities as well, with ChatGPT facilitating visual inputs and DALL·E 3 integration for image generation. Being able to comprehend images is a massive leap toward applications that can analyze images for educational content, create art, or provide accessibility features, effectively broadening ChatGPT’s utility beyond text generation.
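Image understanding is exposed through the same chat endpoint. A sketch, assuming the `gpt-4-vision-preview` model name and a hypothetical image URL; the mixed text-plus-image message shape is the notable part:

```python
# Sketch of sending an image to GPT-4 with vision via the chat endpoint.
# The image URL below is a hypothetical placeholder.

def build_vision_request(question: str, image_url: str) -> dict:
    """A chat request whose user message mixes text and an image."""
    return {
        "model": "gpt-4-vision-preview",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        "max_tokens": 300,
    }

def vision_demo() -> str:
    """Live call; requires `pip install openai` and OPENAI_API_KEY."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(**build_vision_request(
        "What is in this image?", "https://example.com/photo.jpg"))
    return resp.choices[0].message.content
```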

Custom GPTs: Tailoring GPT-4 to Your Needs

Most significantly, OpenAI introduced customizable GPTs, allowing both developers and users with no coding experience to create tailored, specialized GPTs. This lets users instruct AI with greater specificity, resulting in more relevant responses and actions. With developer-defined actions, ChatGPT can now perform functions beyond its original scope, providing a more personalized experience. (We will publish further content on custom GPTs soon!)

These announcements from OpenAI’s DevDay signal a transformation in ChatGPT’s role across various areas, enhancing both its capabilities and ease of use. (For more detailed insights, check the OpenAI blog and DevDay keynote stream).

Below is a simple GPT example that I created.

https://chat.openai.com/g/g-1ykE8oTIK-weather

Keymate’s Next Steps with GPTs

OpenAI’s DevDay has allowed Keymate users to personalize their browsing experience even more:

Browsing with Google:

After the changes introduced at DevDay, Keymate users can now directly integrate Google’s impressive search capabilities into GPTs without needing to code anything. This enables users to switch to Google for a more familiar browsing experience.

Utilizing Keymate Functions in GPTs:

Keymate’s custom actions, including its robust search, link browsing, and long-term memory, can be incorporated within a single GPT, enhancing functionality and the user experience.

Data-Driven GPTs with Keymate:

The Keymate API, optimized for Large Language Models, is designed to fetch real-time, personalized search data, laying the groundwork for GPT applications that require current and accurate web information.

Keymate’s integration with OpenAI’s new GPTs paves the way for users to create powerful, data- and browsing-focused GPT applications for personalized use. Here’s the link to the Keymate.AI GPT. If you’re interested in connecting Keymate to your own GPT, follow the instructions we’ve put together here.

https://chat.openai.com/g/g-veSrMmasJ-keymate-ai-gpt-beta

While this was a lot to cover, future articles will explore each of these updates in greater detail.


Chandler K

Harvard, UPenn, prev NASA, writing about AI, game development, and more.