Optimizing Response Time with Frontline Copilot for Static Data

Hi everyone,

I’ve been using Frontline Copilot and I’m really impressed with its capabilities. However, I’ve encountered a challenge that I’d like to discuss and hopefully find a solution for.

When I use Frontline Copilot to answer questions based on a static dataset (like a table), I’ve noticed that the AI re-reads the entire dataset every time I run the function. This significantly increases the wait time for responses, which can be quite frustrating.

Given that the data I’m working with is static and doesn’t change, I’m wondering if there’s a way to optimize this process. Ideally, I’d like the AI to remember the data or cache it somehow, so it doesn’t have to reload and process the entire table with each query.
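To make the idea concrete, what I’m hoping for is essentially memoization: load the table once, then answer every later query from the in-memory copy. Here’s a minimal sketch in Python — the `load_table` function and the table contents are hypothetical stand-ins, not Frontline Copilot APIs:

```python
import functools

# Hypothetical loader for a static dataset; today the equivalent
# of this expensive re-read seems to happen on every query.
def load_table(name):
    print(f"loading {name}...")  # the expensive step we want to run once
    return [{"part": "A-1", "qty": 40}, {"part": "B-2", "qty": 15}]

# lru_cache memoizes the result, so the load runs only once per table
# name; subsequent calls return the cached rows immediately.
@functools.lru_cache(maxsize=None)
def get_table(name):
    return load_table(name)

def answer_query(table_name, part):
    rows = get_table(table_name)  # cached after the first call
    return next((r["qty"] for r in rows if r["part"] == part), None)

print(answer_query("inventory", "A-1"))  # loads once, prints 40
print(answer_query("inventory", "B-2"))  # no reload, prints 15
```

Even this simple pattern would cut per-query latency from "re-read the whole table" down to a lookup.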

Has anyone else experienced this challenge? If so, have you found any effective workarounds or solutions to reduce the response time? I appreciate any insights or suggestions you might have!

Thanks in advance for your help.
Kind regards,

Hey @Horesh -

A few thoughts here -

  1. We do caching for documents to limit latency, but we don’t yet do this for arrays; I’ll make sure to write a feature request for it.
  2. More broadly, we’re moving toward approaches that preprocess the data up front, so latency stays low even when datasets are massive (thousands of pages of PDFs, or millions of rows). You can see this first in the Operator Chat Widget: all the processing happens once at the start, and latency for each subsequent call is far lower.
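For anyone curious what "preprocess once, then answer cheaply" looks like in principle, here is a rough sketch of the general pattern — building an index up front so each query no longer scans the whole dataset. This is an illustration of the idea only, not Operator Chat Widget internals:

```python
class StaticTableIndex:
    """Pay the preprocessing cost once; every later query is a cheap lookup."""

    def __init__(self, rows, key):
        # One-time pass over the full dataset (the up-front processing step).
        self._index = {}
        for row in rows:
            self._index.setdefault(row[key], []).append(row)

    def query(self, value):
        # Each call is now a dictionary lookup, independent of dataset size.
        return self._index.get(value, [])

# Hypothetical static table for illustration.
rows = [
    {"part": "A-1", "qty": 40},
    {"part": "B-2", "qty": 15},
    {"part": "A-1", "qty": 5},
]
idx = StaticTableIndex(rows, key="part")
print(idx.query("A-1"))  # returns both A-1 rows without rescanning the table
```

The one-time indexing cost grows with the dataset, but per-query cost does not — which is why this scales to very large static datasets.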


Hey @Pete_Hartnett
Thanks for the reply. A few questions:

  1. Is the document caching automatic? Or do we need to activate it?
  2. What would be the best way to follow feature requests to know when they are planned to be released?

Thanks - Maya