Seven reasons why technology leaders are paying attention to OpenAI’s DevDay

konrad bachusz
Credera Engineering
5 min read · Nov 14, 2023
Photo by Hatice Baran: https://www.pexels.com/photo/open-laptop-on-desk-16037283/

On 6 November 2023, San Francisco hosted the highly anticipated OpenAI DevDay: an immersive event that brought together innovators, developers, and enthusiasts at the forefront of generative AI. The announcements felt like an iPhone moment for AI as a field, and given the rapid pace of development, it's important to stay up to date.

This blog highlights the major announcements from the event.

1. GPT-4 Turbo model

A new GPT model was made available on the OpenAI platform. It supports the following:

  • Context Length: GPT-4 Turbo supports a context window of up to 128,000 tokens, which is equivalent to around 300 pages of text, enabling it to process much larger amounts of information in a single prompt.
  • Knowledge Cutoff: The model has knowledge of events up to April 2023, ensuring that its responses are up-to-date with recent developments.
  • Performance: It is more capable than the standard GPT-4 model, with optimised performance for better task handling.
  • Cost: GPT-4 Turbo is significantly cheaper than the standard GPT-4 model: input tokens cost a third of the GPT-4 price, and output tokens half.
  • JSON Mode: New JSON mode ensures that the model’s responses are in valid JSON. The response_format API parameter allows users to constrain the model’s output, ensuring the generation of syntactically correct JSON objects. This feature is particularly useful for developers using the Chat Completions API to generate JSON outside of function calling.
  • Reproducibility: A new seed parameter allows for reproducible outputs, providing consistent completions that are particularly useful for debugging and unit testing. In the coming weeks, OpenAI will also expose log probabilities for the most likely output tokens generated by GPT-4 Turbo and GPT-3.5 Turbo.
  • Multimodal Capabilities: GPT-4 Turbo can accept images as inputs, expanding its use to tasks like generating captions, analysing images, and reading documents with figures. Alongside it, DALL·E 3 image generation and text-to-speech models are now available through the API.

Developers with a paid subscription can now experiment with GPT-4 Turbo by passing gpt-4-1106-preview as the model name in the API. OpenAI is on track to launch the stable, production-ready model in the coming weeks.
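
To make this concrete, here is a minimal sketch of a Chat Completions request that combines the preview model, JSON mode, and the new seed parameter. The prompt and seed value are illustrative; the commented client calls assume the official openai Python package (v1+) and a configured API key.

```python
# Sketch: a Chat Completions request using the GPT-4 Turbo preview,
# JSON mode, and a fixed seed for reproducible outputs.
def build_chat_request(prompt: str) -> dict:
    """Assemble the request body for a reproducible JSON-mode completion."""
    return {
        "model": "gpt-4-1106-preview",               # DevDay preview model
        "response_format": {"type": "json_object"},  # new JSON mode
        "seed": 42,                                  # reproducible outputs
        "messages": [
            # JSON mode requires the prompt to mention JSON explicitly.
            {"role": "system",
             "content": "You are a helpful assistant. Reply in JSON."},
            {"role": "user", "content": prompt},
        ],
    }

# With the official client (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# completion = client.chat.completions.create(
#     **build_chat_request("List three fruits."))
# print(completion.choices[0].message.content)
```

Passing the same seed (and otherwise identical parameters) should yield largely consistent completions across runs, which is what makes it useful for debugging and unit tests.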

2. GPTs

OpenAI GPTs

OpenAI has introduced GPTs: custom versions of ChatGPT tailored for specific purposes, whether for daily life, tasks, work, or home. GPTs can be built and shared without any coding, for example a GPT that teaches the rules of a board game, tutors maths, or designs stickers.

ChatGPT Plus and Enterprise users can already try example GPTs such as Canva and Zapier AI Actions, and OpenAI plans to launch a GPT Store later this month to showcase creations by verified builders. Privacy and safety are emphasised: users control their data, and systems are in place to review GPTs against usage policies.

Developers can connect GPTs to the real world through custom actions, and enterprises can deploy internal-only GPTs for specific use cases. OpenAI also stresses community involvement in shaping AI behaviour, and announced updates that give ChatGPT Plus fresher information and a simplified user experience.

Creating one is as easy as starting a chat, where users can provide instructions and extra knowledge, and choose what it can do, like making images, searching the web, or analysing data. Try it out at https://chat.openai.com/create

3. Assistants API

The Assistants API marks a significant step towards helping developers build agent-like experiences within their applications. It powers purpose-built AI assistants that follow specific instructions, draw on additional knowledge, and call models and tools to execute tasks. The beta is available to explore in the Assistants playground, no coding required.
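
The beta flow runs assistant → thread → message → run. As a hedged sketch, assuming the official openai Python client (v1+), the helper below assembles the assistant definition offline; the commented lines show roughly how the client calls fit together with an API key.

```python
# Sketch of the Assistants API beta flow. Only the builder runs locally;
# the client calls below it require an OpenAI API key.
def build_assistant(name: str, instructions: str, tools: list[str]) -> dict:
    """Assemble keyword arguments for client.beta.assistants.create."""
    return {
        "name": name,
        "instructions": instructions,
        "model": "gpt-4-1106-preview",
        "tools": [{"type": t} for t in tools],
    }

# Rough beta flow with the official client:
# from openai import OpenAI
# client = OpenAI()
# assistant = client.beta.assistants.create(**build_assistant(
#     "Data helper", "Answer questions about the user's data.",
#     ["code_interpreter"]))
# thread = client.beta.threads.create()
# client.beta.threads.messages.create(thread_id=thread.id, role="user",
#                                     content="Summarise the attached data.")
# run = client.beta.threads.runs.create(thread_id=thread.id,
#                                       assistant_id=assistant.id)
```

The thread object holds the conversation state for you, so your application no longer has to resend the full message history on every call.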

4. Code Interpreter

Code Interpreter writes and runs Python code in a sandboxed execution environment. It can generate graphs and charts and process files with diverse data and formatting, and it lets your assistants run code iteratively to work through challenging coding and maths problems.
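
Code Interpreter is enabled per assistant as a tool; the model then writes and executes Python in OpenAI's sandbox, not on your machine. A small illustrative sketch (the message wording is my own, not a required format):

```python
# Sketch: enabling Code Interpreter and phrasing a maths question for it.
# Nothing here executes model-generated code locally; the sandbox is
# hosted by OpenAI.
CODE_INTERPRETER_TOOL = {"type": "code_interpreter"}

def maths_question(expression: str) -> dict:
    """Build a user message nudging the assistant to compute via code."""
    return {
        "role": "user",
        "content": (f"Write and run Python code to evaluate {expression}, "
                    "then report the result."),
    }

# Usage sketch: pass tools=[CODE_INTERPRETER_TOOL] when creating the
# assistant, then add maths_question("2**10 + 5") to a thread and start
# a run; the assistant will execute code iteratively until it has an answer.
```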

5. Built-in Retrieval

Built-in retrieval augments an assistant with knowledge from outside OpenAI's models, such as proprietary domain data, product information, or documents provided by your users. This means you don't need to compute and store embeddings for your documents or implement your own chunking and search algorithms; the Assistants API chooses the retrieval technique for you, drawing on OpenAI's experience building knowledge retrieval into ChatGPT.
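
In practice that means uploading a file and attaching it to an assistant that has the retrieval tool enabled. The sketch below assumes the openai v1 client and the beta API shape announced at DevDay; the file name and assistant details are hypothetical.

```python
# Sketch: an assistant definition with built-in retrieval over attached
# files. The API handles chunking, embeddings, and search internally.
def build_retrieval_assistant(file_ids: list[str]) -> dict:
    """Assemble keyword arguments for a retrieval-enabled assistant."""
    return {
        "name": "Product FAQ bot",
        "instructions": "Answer questions using the attached documents.",
        "model": "gpt-4-1106-preview",
        "tools": [{"type": "retrieval"}],
        "file_ids": file_ids,  # files previously uploaded for assistants
    }

# With an API key (file name is illustrative):
# from openai import OpenAI
# client = OpenAI()
# f = client.files.create(file=open("product_faq.pdf", "rb"),
#                         purpose="assistants")
# assistant = client.beta.assistants.create(
#     **build_retrieval_assistant([f.id]))
```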

6. Function calling updates

Function calling lets you describe your app's functions, or those of external APIs, to the model, which then intelligently generates a JSON object containing the arguments needed to call them. With the latest updates, a single message can now trigger multiple actions: a user can say "open the car window and turn off the A/C" in one go, where previously this would have required multiple round trips to the model.
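
A minimal sketch of that car example: two hypothetical local functions are exposed to the model as tools, and a dispatcher executes whatever (possibly parallel) tool calls come back. For brevity the tool calls are simplified dicts rather than the client's response objects, and the function names are my own invention.

```python
import json

# Hypothetical app functions the model is allowed to call.
def open_car_window(side: str) -> str:
    return f"window {side} opened"

def set_ac(on: bool) -> str:
    return "A/C on" if on else "A/C off"

# Tool schemas describing those functions to the model.
TOOLS = [
    {"type": "function", "function": {
        "name": "open_car_window",
        "description": "Open a car window on the given side",
        "parameters": {"type": "object",
                       "properties": {"side": {"type": "string"}},
                       "required": ["side"]}}},
    {"type": "function", "function": {
        "name": "set_ac",
        "description": "Turn the air conditioning on or off",
        "parameters": {"type": "object",
                       "properties": {"on": {"type": "boolean"}},
                       "required": ["on"]}}},
]

DISPATCH = {"open_car_window": open_car_window, "set_ac": set_ac}

def run_tool_calls(tool_calls: list[dict]) -> list[str]:
    """Execute each tool call the model requested, in order."""
    return [DISPATCH[c["name"]](**json.loads(c["arguments"]))
            for c in tool_calls]
```

For "open the car window and turn off the A/C", the updated models can return both tool calls in one response, so `run_tool_calls` handles the whole request in a single turn.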

7. Lower usage prices and increased rate limits

A reduction in prices (per 1,000 tokens) has been introduced to make the platform more accessible to developers. In addition, the tokens per minute limit has been doubled to help with scaling your applications. The current pricing is shown below:

OpenAI November 2023 pricing
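
As a rough illustration of what the reduction means per request, here is a back-of-the-envelope comparison. It assumes the prices announced at DevDay (GPT-4 Turbo at $0.01 per 1K input tokens and $0.03 per 1K output tokens, versus standard GPT-4 at $0.03 and $0.06), consistent with the input-third/output-half ratios above; check the pricing page for current figures.

```python
# Per-1,000-token prices in USD, as announced in November 2023.
GPT4 = {"input": 0.03, "output": 0.06}
GPT4_TURBO = {"input": 0.01, "output": 0.03}

def cost(prices: dict, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the given per-1K-token prices."""
    return (input_tokens / 1000) * prices["input"] \
         + (output_tokens / 1000) * prices["output"]

# A 10K-token prompt with a 1K-token answer:
# cost(GPT4, 10_000, 1_000)        ≈ $0.36
# cost(GPT4_TURBO, 10_000, 1_000)  ≈ $0.13
```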

Summary

  • GPT-4 Turbo: Enhanced version with a larger context window, up-to-date knowledge, optimised performance, lower costs, JSON mode, reproducibility, multimodal capabilities, and a preview available for developers.
  • GPTs Introduction: Users can create custom ChatGPT versions for specific tasks without coding, with a GPT Store launching soon.
  • Assistants API: New API for building AI assistants with specific capabilities in applications, available for beta testing.
  • Code Interpreter: A tool for writing and running Python code with support for graphs, charts, and diverse data processing.
  • Built-in Retrieval: Enhances the assistant with external knowledge without the need for complex search algorithms.
  • Function Calling Updates: Allows for multiple actions in a single message and improved function calling accuracy.
  • Lower Usage Prices and Increased Rate Limits: Reduced prices and higher token rate limits to support developers.

If you are exploring Generative AI within your organisation and would like to hear about Credera’s wealth of experience in implementing Generative AI to increase the productivity of clients, please don’t hesitate to get in touch. You can find more information on our website: Credera AI

Got a question?

Please get in touch to speak to a member of our team.


Senior Data Engineer @ Credera. I’m passionate about all things to do with AI, data and analytics 📈