Output truncated when calling the OpenAI API

I had some Python code that called the OpenAI API, and during execution I noticed that the output was being truncated.

This is the snippet of Python code I had written to call the OpenAI API:
import openai  # legacy (pre-1.0) openai Python client

response = await openai.ChatCompletion.acreate(
    model=model,
    messages=messages,
    temperature=0.7,
    max_tokens=200,  # caps the length of the generated reply
    n=1,
    stop=None,
)
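A quick way to confirm that the cutoff comes from the token limit (rather than, say, a stop sequence) is to check the finish_reason on the returned choice. This is a small sketch assuming the legacy client's response shape, where a truncated completion is reported as "length":

# 'response' is the object returned by the call above.
finish_reason = response["choices"][0]["finish_reason"]
if finish_reason == "length":
    print("Reply was cut off by max_tokens")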
The truncation of the output was due to the max_tokens parameter in my OpenAI API call. The max_tokens parameter limits the number of tokens (chunks of text that roughly correspond to word pieces and punctuation) the model is allowed to generate in the response.
- The maximum allowed tokens for gpt-3.5-turbo is 4,096.
- The maximum allowed tokens for gpt-4o is 128,000 (its full context window; the number it can generate in a single response is lower).
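To get a feel for how text maps to tokens rather than words, a rough count can be done with the tiktoken library. This is only an illustrative sketch and was not needed for the fix, but it helps when sizing max_tokens; older tiktoken releases may not know gpt-4o, hence the fallback:

import tiktoken

try:
    # Newer tiktoken releases map gpt-4o to its encoding directly.
    enc = tiktoken.encoding_for_model("gpt-4o")
except KeyError:
    # Older releases can fall back to the gpt-3.5/gpt-4 encoding.
    enc = tiktoken.get_encoding("cl100k_base")

text = "The quick brown fox jumps over the lazy dog."
print(len(enc.encode(text)))  # number of tokens, not words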
I was using gpt-4o, so I upped my max_tokens to 4000 and my issue was resolved.
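For reference, the updated call looked roughly like this, with the same parameters as before and only max_tokens raised:

response = await openai.ChatCompletion.acreate(
    model="gpt-4o",
    messages=messages,
    temperature=0.7,
    max_tokens=4000,  # raised from 200 so longer replies are not cut off
    n=1,
    stop=None,
)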
It is also possible to look into the stream=True option if the response is very large, though that was unnecessary in my case.
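For completeness, a streaming variant with the same legacy client would look roughly like this. The chunk layout is as I recall it from the pre-1.0 client (text fragments arrive under the per-choice delta), so treat it as a sketch rather than a drop-in:

# Streaming sketch for the legacy (pre-1.0) client: chunks arrive as
# deltas, and text fragments live under choices[0]["delta"]["content"]
# (the key is absent on some chunks).
response = await openai.ChatCompletion.acreate(
    model="gpt-4o",
    messages=messages,
    temperature=0.7,
    max_tokens=4000,
    stream=True,
)

reply = ""
async for chunk in response:
    delta = chunk["choices"][0]["delta"]
    reply += delta.get("content", "")
print(reply)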