
Configure HAi - Request Summariser

Important

HAi is currently in a closed beta; speak to Customer Success if you want to take part.

Setup

Once enabled, the settings explained in the table below allow you to customize how much data is used and which timeline update types are included when generating the request summary. These settings can be updated by accessing the application settings for Service Manager.

| Setting | Description |
| --- | --- |
| generativeAi.requestSummary.availablePostTypes | List of timeline update types sent to the summariser. This allows you to include or exclude specific update types from the request summary. By default this is set to Authorization,Customer,Email,Escalate,Task,update. |
| generativeAi.limits.activitySteamPosts | Limit on the number of posts returned and passed to the summariser, ordered by most recent activity. By default this is set to 100. |
| generativeAi.limits.activitySteamComments | Limit on the number of comments per post returned and passed to the summariser. By default this is set to 100. |
| generativeAi.limits.activitySteamContentLength | Limit on the maximum content length of a post or comment; anything longer is truncated before being passed to the summariser. By default this is set to 1000. |
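
As a rough illustration of how these settings interact, the sketch below applies the post-type filter and the three limits to a list of timeline posts before they would be handed to the summariser. The function name and data shapes are hypothetical and only mirror the behaviour described above; this is not the actual Service Manager implementation.

```python
# Hypothetical sketch only. The setting names match the table above;
# the data structures and prepare_timeline_for_summary are illustrative.

SETTINGS = {
    "generativeAi.requestSummary.availablePostTypes": [
        "Authorization", "Customer", "Email", "Escalate", "Task", "update",
    ],
    "generativeAi.limits.activitySteamPosts": 100,
    "generativeAi.limits.activitySteamComments": 100,
    "generativeAi.limits.activitySteamContentLength": 1000,
}


def truncate(text, max_length):
    """Trim content that exceeds the configured maximum length."""
    return text[:max_length]


def prepare_timeline_for_summary(posts):
    """Filter, limit, and truncate timeline posts before summarisation.

    `posts` is assumed to be a list of dicts with "type", "content",
    "comments" (list of strings), and "updated_at" keys.
    """
    allowed_types = SETTINGS["generativeAi.requestSummary.availablePostTypes"]
    max_posts = SETTINGS["generativeAi.limits.activitySteamPosts"]
    max_comments = SETTINGS["generativeAi.limits.activitySteamComments"]
    max_length = SETTINGS["generativeAi.limits.activitySteamContentLength"]

    # Only the configured update types are sent to the summariser.
    filtered = [p for p in posts if p["type"] in allowed_types]

    # Most recent activity first, capped at the configured number of posts.
    filtered.sort(key=lambda p: p["updated_at"], reverse=True)
    filtered = filtered[:max_posts]

    prepared = []
    for post in filtered:
        prepared.append({
            "type": post["type"],
            "content": truncate(post["content"], max_length),
            "comments": [
                truncate(c, max_length) for c in post["comments"][:max_comments]
            ],
        })
    return prepared
```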

Limits

These limits exist to minimize the likelihood of a request summary request to our AI services exceeding the maximum number of input tokens allowed, currently 128,000. Very long timelines with lots of large posts can go over this limit. Internally there is a safeguard, so even if you increase the limits, a very large timeline will still be truncated to prevent the AI service returning an error. The maximum number of input tokens has been slowly increasing over the months, and some services allow for much larger inputs; this may be investigated should the limits prove problematic for customers.
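
As a rough, back-of-the-envelope illustration of how the default limits relate to the 128,000-token ceiling, the figures below assume roughly four characters per token, which is only an approximation and varies by model:

```python
# Rough estimate only: assumes ~4 characters per token, which varies by model.
max_posts = 100            # generativeAi.limits.activitySteamPosts
max_comments = 100         # generativeAi.limits.activitySteamComments (per post)
max_content_length = 1000  # generativeAi.limits.activitySteamContentLength
chars_per_token = 4

posts_only_tokens = max_posts * max_content_length // chars_per_token
# 100 * 1000 / 4 = 25,000 tokens -- comfortably under 128,000.

worst_case_tokens = max_posts * (1 + max_comments) * max_content_length // chars_per_token
# 100 * 101 * 1000 / 4 = 2,525,000 tokens -- far over 128,000, which is why
# the internal safeguard still truncates very large timelines.

print(posts_only_tokens, worst_case_tokens)
```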
