The Problem:
You are working with the OpenAI API to process slide text from a PowerPoint presentation. You have extracted the text from each slide and written a prompt for each one. You want to make asynchronous API calls so that all the slides are processed at the same time.
You have the following code in your async main function:
```
tasks = []
for prompt in prompted_slides_text:
    task = asyncio.create_task(api_manager.generate_answer(prompt))
    tasks.append(task)
results = await asyncio.gather(*tasks)
```
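The fan-out pattern above is sound on its own. Here is a minimal, runnable sketch of it, with a stub coroutine standing in for api_manager.generate_answer so that it works without network access or an API key:

```python
import asyncio

async def generate_answer(prompt):
    # Stub standing in for the real API call; the sleep simulates latency.
    await asyncio.sleep(0.01)
    return f"answer to: {prompt}"

async def main():
    prompted_slides_text = ["slide 1", "slide 2", "slide 3"]
    tasks = []
    for prompt in prompted_slides_text:
        task = asyncio.create_task(generate_answer(prompt))
        tasks.append(task)
    # gather() preserves input order, so results[i] corresponds to slide i.
    results = await asyncio.gather(*tasks)
    return results

print(asyncio.run(main()))
```

Because all three stub calls sleep concurrently, the whole batch finishes in roughly the time of one call rather than three.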
And the following code in your generate_answer function:
```
@staticmethod
async def generate_answer(prompt):
    """
    Send a prompt to the OpenAI API and get the answer.
    :param prompt: the prompt to send.
    :return: the answer.
    """
    completion = await openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return completion.choices[0].message.content
```
However, you are getting the error message:
`object OpenAIObject can't be used in 'await' expression`
The problem is that, in the legacy (pre-1.0) openai SDK, openai.ChatCompletion.create is a synchronous method: it returns an OpenAIObject, not an awaitable, so it cannot be used in an await expression. You need to call the asynchronous variant, acreate, instead.
Here is the corrected code for your generate_answer function:
```
@staticmethod
async def generate_answer(prompt):
    """
    Send a prompt to the OpenAI API and get the answer.
    :param prompt: the prompt to send.
    :return: the answer.
    """
    completion = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return completion.choices[0].message.content
```
The Solutions:
Solution 1: Asynchronous OpenAI API Calls
To resolve the asynchronous API call issue and await the response, make the following changes:
- Instantiate the asynchronous client (available in openai v1.x):
```
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key=api_key)
```
- In the generate_answer function, use the asynchronous method chat.completions.create to send the prompt and await the response:
```
@staticmethod
async def generate_answer(prompt):
    """
    Send a prompt to the OpenAI API and get the answer.
    :param prompt: the prompt to send.
    :return: the answer.
    """
    custom_prompt = [{"role": "user", "content": prompt}]
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=custom_prompt,
        temperature=0.9,
    )
    return response.choices[0].message.content
```
Note that in v1.x the response is an object with attributes, not a dict, so the content is read as response.choices[0].message.content.
These modifications will allow you to make asynchronous API calls effectively, ensuring that all the slides are processed concurrently.
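One detail worth noting when a whole deck is processed concurrently: by default, the first request that raises an exception makes asyncio.gather propagate it and discard the other results. Below is a small sketch of `return_exceptions=True`, which returns failures in place so the remaining slides still come back; it uses a stub coroutine rather than the real client so it runs standalone:

```python
import asyncio

async def flaky_answer(prompt):
    # Stub simulating an API call that fails for one particular slide.
    await asyncio.sleep(0.01)
    if prompt == "slide 2":
        raise RuntimeError("rate limited")
    return f"answer to: {prompt}"

async def main():
    prompts = ["slide 1", "slide 2", "slide 3"]
    tasks = [asyncio.create_task(flaky_answer(p)) for p in prompts]
    # Exceptions are returned as objects in the results list instead of
    # aborting the other in-flight requests.
    return await asyncio.gather(*tasks, return_exceptions=True)

results = asyncio.run(main())
```

After the call, you can check each element with isinstance(result, Exception) and retry or skip the failed slides individually.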
Solution 2: Use `openai.ChatCompletion.acreate` to use the API asynchronously.
The OpenAI API has changed since the original code was written. In the legacy (pre-1.0) SDK, to use the API asynchronously you need the openai.ChatCompletion.acreate method instead of the synchronous openai.ChatCompletion.create.
Here is an updated version of the generate_answer function that uses acreate:
```
@staticmethod
async def generate_answer(prompt):
    """
    Send a prompt to the OpenAI API and get the answer.
    :param prompt: the prompt to send.
    :return: the answer.
    """
    completion = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return completion.choices[0].message.content
```
You can then use this function in your async main function as follows:
```
async def main():
    tasks = []
    for prompt in prompted_slides_text:
        task = asyncio.create_task(api_manager.generate_answer(prompt))
        tasks.append(task)
    results = await asyncio.gather(*tasks)
```
This will make asynchronous API calls for all of the prompts in prompted_slides_text and store the results in the results list.
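For large decks, firing every request at once can hit API rate limits. A common mitigation, sketched here with a stub coroutine and an arbitrary limit of two concurrent requests, is to cap in-flight calls with asyncio.Semaphore:

```python
import asyncio

async def main():
    semaphore = asyncio.Semaphore(2)  # at most 2 requests in flight at a time

    async def generate_answer(prompt):
        async with semaphore:
            # Stub for the real API call; only 2 of these run concurrently.
            await asyncio.sleep(0.01)
            return f"answer to: {prompt}"

    prompts = [f"slide {i}" for i in range(1, 6)]
    return await asyncio.gather(*(generate_answer(p) for p in prompts))

print(asyncio.run(main()))
```

The semaphore still lets every slide be scheduled up front; it only throttles how many awaits enter the API-call section at once.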
Q&A
When working with OpenAI, how do I make async API calls to process multiple slides at once?
Create one task per prompt with asyncio.create_task() and collect the results with await asyncio.gather(*tasks) inside an async main function.
What error might occur when trying to await the response in the generate_answer function?
The error `object OpenAIObject can't be used in 'await' expression` occurs when you await the return value of the synchronous create method instead of calling an asynchronous variant such as acreate.
Video Explanation:
The following video, titled "Langchain Async explained. Make multiple OpenAI chatgpt API calls ...", provides additional insights and in-depth exploration related to the topics discussed in this post.
Learn about how you can use async support in langchain to make multiple parallel OpenAI gpt 3 or gpt-3.5-turbo(chat gpt) API calls at the ...