On Monday at the OpenAI DevDay event, company CEO Sam Altman announced a major update to the company's GPT-4 language model called GPT-4 Turbo, which can process a much larger amount of text than GPT-4 and features a knowledge cutoff of April 2023. He also introduced APIs for DALL-E 3, GPT-4 Vision, and text-to-speech—and launched an "Assistants API" that makes it easier for developers to build assistive AI apps.
OpenAI hosted its first-ever developer event, called DevDay, on November 6 in San Francisco. During the opening keynote, delivered in front of a small audience, Altman showcased the wider impacts of the company's AI technology in the world, including helping people with tech accessibility. He shared some stats, saying that over 2 million developers are building apps using OpenAI's APIs, over 92 percent of Fortune 500 companies are building on its platform, and ChatGPT has over 100 million active weekly users.
At one point, Microsoft CEO Satya Nadella made a surprise appearance on the stage, talking with Altman about the deepening partnership between Microsoft and OpenAI and sharing some general thoughts about the future of the technology, which he thinks will empower people.
GPT-4 gets an upgrade
During the keynote, Altman dropped several major announcements, including "GPTs," which are custom, shareable, user-defined ChatGPT AI roles that we covered in a separate article. He also launched the aforementioned GPT-4 Turbo model, which is perhaps most notable for three properties: context length, more up-to-date knowledge, and price.
Large language models (LLMs) like GPT-4 rely on a context length or "context window" that defines how much text they can process at once. That window is often measured in tokens, which are chunks of words. According to OpenAI, one token corresponds roughly to about four characters of English text, or about three-quarters of a word. Whereas GPT-4 shipped with context windows of 8,000 and 32,000 tokens, GPT-4 Turbo supports a 128,000-token (128K) context window. That means GPT-4 Turbo can consider around 96,000 words in one go, which is longer than many novels. Also, a 128K context length allows much longer conversations without the AI assistant losing its short-term memory of the topic at hand.
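The back-of-the-envelope math above can be sketched in a few lines. This is only a rough-capacity estimate using OpenAI's stated rules of thumb (roughly four characters or three-quarters of a word per English token); actual token counts depend on the tokenizer and the text itself.

```python
# Rough capacity estimate for a 128K-token context window, using
# OpenAI's rules of thumb: ~4 characters or ~0.75 words per token.
# These constants are approximations, not exact tokenizer behavior.

CONTEXT_TOKENS = 128_000
CHARS_PER_TOKEN = 4      # ~4 characters of English text per token
WORDS_PER_TOKEN = 0.75   # ~3/4 of a word per token

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN          # 512,000 characters
approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)     # 96,000 words

print(f"~{approx_chars:,} characters, ~{approx_words:,} words")


def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count (4 chars/token rule)."""
    return max(1, len(text) // CHARS_PER_TOKEN)
```

The 96,000-word figure quoted in the text falls straight out of the 0.75-words-per-token approximation; for real billing or truncation decisions, a proper tokenizer should be used instead of these heuristics.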