OpenAI GPT-3.5-Turbo API: Moving from GPT-3 to GPT-3.5-Turbo

Developers are always on the lookout for innovative tools and technologies that can streamline their workflow and make their development process more efficient. One such technology that has been making waves in the world of AI and NLP is the GPT-3 API. The latest model, GPT-3.5-Turbo, has been fine-tuned as a general-purpose chat bot: it better understands the context behind requests, producing better results. Just like GPT-3, GPT-3.5-Turbo can help you write code as well as text. GPT-3.5-Turbo is also 10x cheaper than OpenAI's existing GPT-3.5 models, now priced at $0.002 per 1K tokens.

This API is a powerful language model designed to help developers create intelligent and human-like chatbots, virtual assistants, and other conversational interfaces. It is an advanced version of the GPT-3 language model, which is known for its impressive natural language processing capabilities. In this blog post, we will go over the changes to the completion API call. Let's get started!

To understand the changes in calling the completion API, I updated the openai-quickstart-node sample from model text-davinci-003 (GPT-3) to GPT-3.5-Turbo.

Note: To run the sample you'll need an API key from OpenAI. You can sign up for an OpenAI account and get an API key here.
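For reference, the original sample called the completion API roughly like this (a sketch of the pre-migration call; generatePrompt is the sample's helper that builds the text prompt):

const completion = await openai.createCompletion({
  model: "text-davinci-003",
  // A single free-form text prompt built by the sample's generatePrompt helper
  prompt: generatePrompt(animal),
  temperature: 0.6,
});
// With createCompletion, the generated text is on choices[0].text
const result = completion.data.choices[0].text;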

Library

The first thing I did after cloning and installing the sample was updating the sample's package.json to use the latest version of the openai package (3.2.0).

"openai": "^3.2.0"

Completion API

The completion API has changed from createCompletion to createChatCompletion.

const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo-0301",
  messages: [
    // The system message helps set the behavior of the assistant. Here it is told to be helpful and keep responses brief.
    { "role": "system", "content": "You are a helpful assistant. Keep responses brief." },
    // The assistant messages help store prior responses. They can also be written by a developer to give examples of desired behavior.
    { "role": "assistant", "content": "Animal: Cat Names: Captain Sharpclaw, Agent Fluffball, The Incredible Feline" },
    // The user messages help instruct the assistant. They can be generated by the end users of an application, or set by a developer as an instruction.
    { "role": "user", "content": "Suggest three names for an animal that is a superhero. Animal: " + capitalizedAnimal },
  ],
  // The temperature (range 0 - 1). The closer to 0, the more deterministic the response; the closer to 1, the more diverse.
  temperature: 0.8,
});


Prompt

The prompt is now an array of message objects, each containing a role and a content field. Briefly, here is what each role does (a sketch of a multi-turn conversation follows the table):

Role | Description
---- | -----------
system | The system message helps set the behavior of the assistant.
assistant | The assistant messages help store prior responses. They can also be written by a developer to give examples of desired behavior.
user | The user messages help instruct the assistant. They can be generated by the end users of an application, or set by a developer as an instruction.
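To make the roles concrete, here is a hypothetical messages array for a follow-up turn. Earlier turns are replayed as user and assistant messages so the model can see the conversation history (the content below is illustrative, not from the sample):

const messages = [
  { "role": "system", "content": "You are a helpful assistant. Keep responses brief." },
  // A previous turn, replayed so the model has the conversation so far
  { "role": "user", "content": "Suggest three names for an animal that is a superhero. Animal: Cat" },
  { "role": "assistant", "content": "Captain Sharpclaw, Agent Fluffball, The Incredible Feline" },
  // The new request from the end user
  { "role": "user", "content": "Which of those names works best for a kitten?" },
];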

Response

The response now returns an array of choices. Each choice contains a message (with a role and content), a finish_reason, and an index.

"choices": [
   {
    "message": {
      "role": "assistant",
      "content": "Captain Sharpclaw, Agent Fluffball, The Incredible Feline"},
    "finish_reason": "stop",
    "index": 0
   }
]

The line in the sample that reads the response was changed to:

completion.data.choices[0].message.content
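Putting it together, here is a rough sketch of the updated call with the response extraction and error handling wired in; the handler shape and error messages are illustrative, and the v3 Node client attaches API error details to error.response:

try {
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-0301",
    messages: messages, // the array of message objects shown above
    temperature: 0.8,
  });
  // The assistant's reply now lives under message.content instead of text
  const result = completion.data.choices[0].message.content;
  res.status(200).json({ result });
} catch (error) {
  if (error.response) {
    // The API returned an error status; log its body for debugging
    console.error(error.response.status, error.response.data);
  } else {
    console.error(`Error with OpenAI API request: ${error.message}`);
  }
  res.status(500).json({ error: { message: "An error occurred during your request." } });
}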

You can see these changes in my fork of the sample repo here: https://github.com/mjfusa/openai-quickstart-node

Check out the GPT-3.5 Chat Playground

The OpenAI Chat Playground has been updated with support for the gpt-3.5-turbo model. You can quickly compose your prompts in the new format here: https://platform.openai.com/playground/?mode=chat

Licensed under CC BY-NC-SA 4.0