• Resolved marcfest

    (@marcfest)


    I’m currently using AI Engine as a mentor to guide a user through a 20-step process for constructing a compelling pitch for a product or service. At present, I furnish ChatGPT with the 20 steps as the context. I’m wondering whether it would be more efficient to use embeddings instead. My current setup appears to prompt AI Engine to send the context to OpenAI each time a query is made (every interaction with the user), along with all prior user interactions up to that point. This inflates the token count and cost with every exchange. Is there a way to prevent this? Submitting my 20-step guide context with every query seems unnecessary. Can this redundancy be eliminated?
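    To see why the token count grows, here is a minimal sketch of how a chat completion request is typically assembled: the system context and the full conversation history are resent on every turn. This is not AI Engine's actual internals; all names are illustrative, and token counts are approximated by word counts.

    ```python
    # Sketch: the system context plus the FULL history is resent each turn,
    # so the prompt grows with every exchange. Illustrative only.

    SYSTEM_CONTEXT = "Step 1: ... Step 20: ..."  # stands in for the 20-step guide

    def build_prompt(history, new_question):
        """Assemble the messages sent to the API for one turn."""
        messages = [{"role": "system", "content": SYSTEM_CONTEXT}]
        messages.extend(history)                       # all prior turns, verbatim
        messages.append({"role": "user", "content": new_question})
        return messages

    def approx_tokens(messages):
        # crude proxy: one token per whitespace-separated word
        return sum(len(m["content"].split()) for m in messages)

    history = []
    sizes = []
    for turn in range(3):
        question = f"Question for turn {turn}"
        prompt = build_prompt(history, question)
        sizes.append(approx_tokens(prompt))
        # pretend the model answered; both sides join the history
        history.append({"role": "user", "content": question})
        history.append({"role": "assistant", "content": f"Answer for turn {turn}"})

    print(sizes)  # [10, 18, 26] — strictly increasing, since each turn resends everything
    ```

    The cost per turn rises linearly here and faster in practice, since real answers are much longer than one line.
    
    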

Viewing 2 replies - 1 through 2 (of 2 total)
  • Plugin Support Val Meow

    (@valwa)

    Hey @marcfest!

    The setting labeled “Context” in the AI Engine chatbot dashboard’s main settings is actually the system message. It determines how the assistant behaves: crafted properly, the system message can set the tone and dictate the kind of responses the model gives.

    Using embeddings might not give the desired results, because we can’t guarantee that the chatbot will rely solely on the data provided from that source. It might also make changes to the data.

    To track where your user is in the conversation, you can use our filters. With them, you can modify the parameters, query, or response to suit your specific use case (for example, manually adding information for each of your steps). However, implementing this requires some technical knowledge.

    Hope this helps!

    Thread Starter marcfest

    (@marcfest)

    Thank you!

  • The topic ‘Encoding 20-step method’ is closed to new replies.