I have read all the posts and read/watched all the knowledge bank tutorials, but I keep testing and still do not get answers informed by the Q&A paragraphs in my RAG .txt file, even when I ask the exact questions from the file. Can someone post a simple .txt file that works? Mine uses a question followed by a return, then an answer followed by a return, then another return before the next Q/A (see below). Is that acceptable? Or is there an issue with the type of .txt encoding? (I used line ending: PC, encoding: ANSI on a Windows 11 system.) I am at a loss as to why it makes up answers rather than using my knowledge bank answers, even for the exact questions in the txt file.
My txt file looks like:
Tell me about your research work to bring historical characters back to life for the public to converse with. Which people?
We have AI bot systems that allow the public to talk faithfully (as much as possible) to inspiring historical figures like our Picasso and Van Gogh.
Tell me more about your Van Gogh AI bot?
For Van Gogh, we used our specific AI pre-processing of the times of his life, historical events/people, and the 700 letters he wrote in his own words to his brother Theo.
What do you focus on in the lab?
We focus on AI based computationally modelling (and understanding) of human characteristics such as expression, emotion, behavior, empathy and creativity.
I will try, though I am double-checking that a double newline between paragraphs is best (I typically use known delimiters in my RAG Python systems). Also, can you write something in the prompt (character backstory) to push the system to use the knowledge base more? In the RAG system I have programmed, we add a line that specifically tells the LLM to use the inserted RAG chunks to answer the question (a bit of a push).
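For reference, here is a minimal sketch of how I split that kind of Q/A file in my own Python RAG code: blocks separated by a blank line (double newline), with the first line of each block as the question and the rest as the answer. The function name and pairing logic are my own assumptions, not the platform's actual parser.

```python
def split_qa_chunks(text: str) -> list[tuple[str, str]]:
    """Split knowledge-bank text into (question, answer) pairs.

    Blocks are separated by a blank line; within a block the first
    line is the question and any remaining lines form the answer.
    """
    chunks = []
    for block in text.strip().split("\n\n"):
        lines = [line.strip() for line in block.splitlines() if line.strip()]
        if len(lines) >= 2:
            chunks.append((lines[0], " ".join(lines[1:])))
    return chunks


# Example with the format described above:
sample = "What do you focus on in the lab?\nWe focus on AI modelling.\n\nTell me more?\nWe build bots."
pairs = split_qa_chunks(sample)
```

Saving the file as UTF-8 rather than ANSI is also worth trying, since ANSI encoding can introduce characters the ingestion step may not handle.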
So our approach to the final compiled prompt sent to the LLM is (in this order):
1 main prompt (here called Char backstory)
2 Text line saying “Given the following user question:”
3 the user question
4 Text line saying “use the following paragraphs as a guide to best answer it”
5 the RAG chunks (a few paragraphs)
6 A closing sentence of instructions
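In code, that six-part assembly looks roughly like the sketch below. The function name, the default closing sentence, and the double-newline joins are assumptions for illustration; only the section order and the two connective text lines come from the list above.

```python
def build_prompt(backstory: str, question: str, rag_chunks: list[str],
                 closing: str = "Answer faithfully, in character.") -> str:
    """Assemble the compiled prompt in the six-part order described above."""
    parts = [
        backstory,                                                      # 1. main prompt / character backstory
        "Given the following user question:",                           # 2. connective line
        question,                                                       # 3. the user question
        "use the following paragraphs as a guide to best answer it:",   # 4. connective line
        "\n\n".join(rag_chunks),                                        # 5. the retrieved RAG chunks
        closing,                                                        # 6. closing instruction
    ]
    return "\n\n".join(parts)
```

Calling `build_prompt("You are Van Gogh.", "Tell me about Theo?", ["He was my brother...", "We exchanged 700 letters."])` yields the backstory first, then the question framed between the two connective lines, then the chunks and the closing instruction.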
It would be nice if we had that level of control over the final prompt going to the LLM
It is doing way better now after I added the following instruction to the foundational prompt (that is, the character backstory):
“When answering, you will use the RAG generated paragraphs here as close as possible when answering the question.”
Hey, that’s great news! I think I will add that to my character’s backstory too, cheers. I may also add something like “When creating conversation, refer to the RAG generated paragraphs for topics of conversation.” (My app features AI chatbots, basically.)