docs/tutorials/qa_chat_history/ #27956
Replies: 7 comments 8 replies
-
It may be a silly question, but please help me:
-
The import code duplicates a line: `vector_store = InMemoryVectorStore(embeddings)` appears twice. I also think a "_" was wrongly added, which will cause an import error.
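For context on what the duplicated line sets up, here is a toy in-memory vector store sketched in plain Python. It only mirrors the concept behind the tutorial's `InMemoryVectorStore` (store embedded texts in memory, rank by cosine similarity); the class and embedding function are illustrative stand-ins, not LangChain's actual implementation, and the store is constructed once, not twice.

```python
import math

def toy_embed(text):
    # Stand-in embedding: character-frequency vector over a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ToyInMemoryVectorStore:
    """Toy analogue of an in-memory vector store, not LangChain's API."""

    def __init__(self, embed):
        self.embed = embed
        self.docs = []  # (text, vector) pairs held in memory

    def add_texts(self, texts):
        for t in texts:
            self.docs.append((t, self.embed(t)))

    def similarity_search(self, query, k=1):
        # Rank stored texts by cosine similarity to the query embedding.
        qv = self.embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

store = ToyInMemoryVectorStore(toy_embed)  # constructed once, not twice
store.add_texts(["cats purr", "dogs bark"])
```

Constructing the store a second time would simply discard the first instance, so the repeated line in the tutorial is harmless but redundant.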
-
Hello!
-
Hi there,
-
Hi
-
The memory saver resends all the previous messages (Human and AI messages) to the LLM so that the LLM has context. Wouldn't it be easier, and consume fewer resources, if we could use the multi-turn chat session from the LLM itself, like this: https://ai.google.dev/gemini-api/docs/text-generation?lang=node#chat?
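The behavior described above can be sketched in a few lines of plain Python: each turn, the entire accumulated history is passed to the model again. The names here (`fake_llm`, `history`, `chat`) are illustrative stand-ins, not LangChain's checkpointer API.

```python
def fake_llm(messages):
    # Stand-in for a chat model call: its reply reflects how many
    # human turns it received, to show it sees the full history.
    return f"ai reply #{sum(1 for m in messages if m['role'] == 'human')}"

history = []  # the "checkpoint": all prior Human and AI messages

def chat(user_text):
    history.append({"role": "human", "content": user_text})
    reply = fake_llm(history)  # the entire history is resent each turn
    history.append({"role": "ai", "content": reply})
    return reply

chat("hello")
chat("follow-up")
```

Server-side chat sessions (as in the linked Gemini API) move this history onto the provider's side, but most stateless chat-completion APIs still require the client to resend the context on every call, which is what the memory saver automates.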
-
docs/tutorials/qa_chat_history/
This guide assumes familiarity with the following concepts:
https://python.langchain.com/docs/tutorials/qa_chat_history/