How to run async methods in langchain? – Langchain

by Ali Hasan

Tags: langchain, llama-cpp-python, py-langchain

Quick Fix: Await the coroutine that aapply returns by using the await keyword. The corrected call is:

res_aa = await chain.aapply(texts)

This change actually runs the coroutine and waits for its result; without await, calling aapply only creates a coroutine object. While the awaited call is pending, the event loop remains free to execute other tasks.

The Problem:

The user is attempting to measure the performance difference between the apply and aapply methods of langchain.chains.LLMChain. However, the aapply method is not working as expected: calling it returns a coroutine object instead of a list of results. The user seeks help identifying and resolving the issue so that aapply runs successfully and its performance can be compared against apply.

The Solutions:

Solution 1: Using await to run async methods

The issue here is that `chain.aapply` returns a coroutine object, which is never awaited in the code. To run the coroutine and get its result, use the `await` keyword. The corrected code should look like this:

from time import time

# Must run inside an async function (or a notebook with top-level await)
start = time()
res_aa = await chain.aapply(texts)
print(res_aa)
print(f"aapply time taken: {time() - start:.2f} seconds")
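The coroutine-versus-result distinction is not specific to LangChain; it applies to any async function. A minimal sketch of the pattern, where the hypothetical `slow_task` stands in for `chain.aapply` (the real method needs a configured LLM):

```python
import asyncio

async def slow_task(texts):
    # Hypothetical stand-in for chain.aapply
    await asyncio.sleep(0.1)
    return [t.upper() for t in texts]

async def main():
    texts = ["hello", "world"]
    # Calling the async function without await only creates a coroutine object
    coro = slow_task(texts)
    print(type(coro).__name__)  # coroutine
    # Awaiting it actually runs the task and yields the list of results
    res = await coro
    print(res)
    return res

res = asyncio.run(main())
```

Printing `res_aa` without awaiting would show something like `<coroutine object ...>`, which is exactly the symptom described in the question.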

Q&A

Why is res_aa not showing the output of the code?

The async chain.aapply() method returns a coroutine; to get its output, the coroutine must be awaited.

How to fix res_aa = chain.aapply(texts)?

Change res_aa = chain.aapply(texts) to res_aa = await chain.aapply(texts).
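In a plain Python script, `await` is a syntax error at module top level, so the call must live inside an async function driven by `asyncio.run` (Jupyter/IPython notebooks allow top-level `await` directly). A sketch of the timing comparison the question describes, with hypothetical `apply_sync` and `aapply_async` stand-ins for the chain methods:

```python
import asyncio
from time import time

def apply_sync(texts):
    # Stand-in for chain.apply: processes texts one after another
    return [t[::-1] for t in texts]

async def aapply_async(texts):
    # Stand-in for chain.aapply: schedules all texts concurrently
    async def one(t):
        await asyncio.sleep(0)
        return t[::-1]
    return await asyncio.gather(*(one(t) for t in texts))

async def main():
    texts = ["abc", "def"]

    start = time()
    res_a = apply_sync(texts)
    print(f"apply time taken: {time() - start:.2f} seconds")

    start = time()
    res_aa = await aapply_async(texts)
    print(f"aapply time taken: {time() - start:.2f} seconds")
    return res_a, res_aa

res_a, res_aa = asyncio.run(main())
```

With real API calls, the concurrent version can finish in roughly the time of the slowest single call rather than the sum of all calls.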

Video Explanation:

The following video, titled "Langchain Async explained. Make multiple OpenAI chatgpt API calls ...", provides additional insights and in-depth exploration related to the topics discussed in this post.


Learn about how you can use async support in langchain to make multiple parallel OpenAI gpt 3 or gpt-3.5-turbo(chat ...