All embeddings were joined into one
-
I entered 46 questions and answers in FAQ mode using your extension.
Nevertheless, Pinecone shows only 1 vector. When I ask a question, GPT returns: “This model’s maximum context length is 4097 tokens, however you requested 7201 tokens (5701 in your prompt; 1500 for the completion). Please reduce your prompt; or completion length.”
So it appears all 46 Q/A pairs were concatenated into one long vector, and its text overflows the GPT prompt.
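For what it’s worth, a minimal sketch of the behavior I expected — one vector per Q/A pair rather than one vector for all 46. This assumes pairs are separated by blank lines; `embed` and `index` in the trailing comments are hypothetical stand-ins for the embedding call and the Pinecone index client, not the extension’s actual code:

```python
def split_faq(text: str) -> list[str]:
    """Split an FAQ export into one chunk per Q/A pair.

    Assumes pairs are separated by blank lines; each chunk should be
    embedded and upserted as its own vector, keeping every prompt small.
    """
    return [chunk.strip() for chunk in text.split("\n\n") if chunk.strip()]


faq = "Q: What is FAQ mode?\nA: One vector per pair.\n\nQ: Why split?\nA: To fit the context window."
chunks = split_faq(faq)  # 2 chunks, not 1 giant blob

# Each chunk would then be embedded and upserted individually, e.g.:
# for i, chunk in enumerate(chunks):
#     vec = embed(chunk)                                  # hypothetical embedding call
#     index.upsert([(f"faq-{i}", vec, {"text": chunk})])  # hypothetical Pinecone upsert
```

With per-pair vectors, retrieval returns only the few most relevant pairs, so the prompt stays well under the 4097-token limit.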
Viewing 1 replies (of 1 total)
- The topic ‘All embeddings were joined into one’ is closed to new replies.