LLM and Generative AI Usage Logs API

This API retrieves the LLM and Generative AI usage logs for both the Co-Pilot and Dynamic Conversation features.

Method POST
End Point https://{{host}}/api/1.1/public/bot/{{botId}}/getLLMUsageLogs
Content Type application/json
Authorization auth: {{JWT}}

See How to generate the JWT Token.
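
The sample request below uses an HS256-signed JWT whose payload carries sub and appId claims. As a quick illustration only, the following PyJWT snippet signs such a token; the client ID and secret are placeholders, and the linked guide remains the authoritative reference.

# Illustrative sketch: sign a JWT for the 'auth' header with PyJWT.
# client_id / client_secret are placeholders for your app's credentials;
# the claim names (sub, appId) mirror the sample token used below.
import jwt  # pip install PyJWT

client_id = "cs-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_secret = "your-client-secret"

token = jwt.encode(
    {"sub": "1234567890", "appId": client_id},
    client_secret,
    algorithm="HS256",
)
print(token)  # send this value in the 'auth' request header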

API Scope
  • Bot Builder: Fetch Gen AI and LLM Usage Logs
  • Admin Console: API Scopes > Gen AI and LLM Usage Logs

Note

A single request can return records spanning up to 90 days.

Path Parameters

Parameter Required/Optional Description
host Required The environment URL. For example, https://bots.kore.ai
botId Required Bot ID or Stream ID. You can access it from the General Settings page of the bot.

Sample Request

curl -X POST 'https://{{host}}/api/1.1/public/bot/{{botId}}/getLLMUsageLogs' \
--header 'auth: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwiYXBwSWQiOiJjcy03ZWMxNmFjZS03ZGNmLTU3MjQtYjM5NS1hYTA4YmRmYTAwMzMifQ.n_Es9ZBsiCYzpzsfN4p4I1SlHK05gewJFyqAIngr4Qg' \
--header 'Content-Type: application/json' \
--data '{
    "dateFrom": "2024-03-07",
    "dateTo":"2024-03-09",
    "limit":"1",
    "skip" : "5",
    "isDeveloper": true,
    "channel" : "msteams",
    "featureName" : ["dynamicEntity"],
    "taskId" : "dg-d4924db4-b325-5b4d-ae51-aa5c7be41d4f",
    "channelUserIds":["29:1gmw8z03rvk6njl6k7ohtdh2v9zubxiip7kvu1yiek_qri4grpmd0k_d1yjlpzbj40wk1am9dphqkoiwatzwttw"],
    "userIds" : ["u-40b3eafc-a5aa-55f2-83e8-cf4d0fb1de07"],
    "sort" : {
        "field" : "Time Taken",
        "order" : "desc"
    }
}'
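
If you prefer calling the endpoint from code, the following sketch sends the same request with Python's requests library; host, bot_id, and jwt_token are placeholders for your own environment URL, Bot ID, and token.

# Sketch: the same call as the cURL sample, using the requests library.
import requests

host = "https://bots.kore.ai"                        # your environment URL
bot_id = "st-73bfdb2f-9101-55e6-b4c8-3f568a6ea8e0"   # your Bot ID / Stream ID
jwt_token = "<JWT>"                                  # token generated earlier

url = f"{host}/api/1.1/public/bot/{bot_id}/getLLMUsageLogs"
payload = {
    "dateFrom": "2024-03-07",
    "dateTo": "2024-03-09",
    "limit": "1",
    "skip": "5",
    "isDeveloper": True,
    "channel": "msteams",
    "featureName": ["dynamicEntity"],
    "sort": {"field": "Time Taken", "order": "desc"},
}

response = requests.post(url, json=payload, headers={"auth": jwt_token})
response.raise_for_status()
print(response.json())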

Body Parameters

Parameter Required/Optional Description
dateFrom Required The start date (YYYY-MM-DD) of the period for which records are returned.
dateTo Required The end date (YYYY-MM-DD) of the period for which records are returned.
limit Required The number of records to return per page.
skip Optional The number of records to skip; use with limit to page through results.
channel Optional The channels to be considered for the metrics.
channelUserIds Optional The channel-specific user IDs to be included in the metrics.
isDeveloper Optional Whether to include developer interactions. Set to true or false.
featureName Optional To filter records by feature name, for example, dynamicEntity.
taskId Optional To filter records by task ID.
userIds Optional To filter records by user ID.
sort Optional To sort the results by a field (for example, Time Taken) and order (asc or desc).
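
Because limit controls the page size, result sets larger than one page can be collected by increasing skip on each call. A rough sketch, assuming the endpoint returns a plain JSON array of records as in the sample response below:

# Sketch: page through results with limit/skip (assumes the endpoint
# returns a JSON array of records, as in the sample response below).
import requests

def fetch_all_logs(host, bot_id, jwt_token, date_from, date_to, page_size=50):
    url = f"{host}/api/1.1/public/bot/{bot_id}/getLLMUsageLogs"
    headers = {"auth": jwt_token}
    records, skip = [], 0
    while True:
        body = {
            "dateFrom": date_from,
            "dateTo": date_to,
            "limit": str(page_size),
            "skip": str(skip),
        }
        page = requests.post(url, json=body, headers=headers).json()
        records.extend(page)
        if len(page) < page_size:  # last page reached
            return records
        skip += page_size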

Sample Response

[
    {
        "Prompt Name": "Default",
        "Integration": "OpenAI",
        "start Date": "2024-03-08T06:38:54.008Z",
        "End Date": "2024-03-08T06:38:58.386Z",
        "Time Taken": 4378,
        "statusCode": 200,
        "Bot ID": "st-73bfdb2f-9101-55e6-b4c8-3f568a6ea8e0",
        "User ID": "u-40b3eafc-a5aa-55f2-83e8-cf4d0fb1de07",
        "Feature Name ": "GenAI Node",
        "Model Name": "GPT-4",
        "Channel Name": "msteams",
        "Description": "Order pizza-GenAINode0002",
        "task Id": "dg-200ad1ff-8db8-5219-810e-4ee3800d212c",
        "Status": "Success",
        "Payload Details": {
            "Prompt Name": "Default",
            "Request Payload": {
                "model": "gpt-4",
                "temperature": 0.5,
                "max_tokens": 1500,
                "top_p": 1,
                "frequency_penalty": 0,
                "presence_penalty": 0,
                "messages": [
                    {
                        "role": "system",
                        "content": "You are a virtual assistant representing an enterprise business, and so you have to act professionally at all times. You do not participate or respond to any abusive language or indulge in any conversation that does not represent enterprise business.\nAct like pizza ordering site. For the instructions that the user provides, you have to process the instructions. Here are the rules that you are supposed to follow: \nRule 0:\n Each pizza can only have maximum of 3 toppings \nRule 1:\n List of entities you need to capture from user are number of pizzas, toppings and size.. You need to capture all these entitites. \nRule 2:\n Capture entities from the user and the conversation history. \nRule 3:\n Collect the unfulfilled entities or check if the user wants to update a collected entity. \nRule 4:\n If user has provided the required value for any of the required entities, then do not prompt for it again. \nRule 5:\n Generate appropriate prompt to the end user to collect the information for the unfulfilled entities. \nRule 6:\n In the output return JSON must containing {\"bot\"://next prompt , \"analysis\"://Summarization of the entire conversation with all details.,\"entities\"://array of objects,\"conv_status\"://The status of the conversation. \"ongoing\" or \"ended\".} \nRule 7:\n When returning the result return a json, stringified and formatting removed. \nRule 8:\n Once all the entities details are captured generate conv_status as 'ended'. When the flow is to be continued, generate conv_status as 'ongoing'  \nRule 9:\n Keep the prompts and messages voice friendly. \nRule 10:\n If there are mutiple entities, return entitites in format of array of objects. Where each object is entity key values pairs. \nRule 11:\n Ignore any out of topic user/assistant conversations and always collect ALL the mentioned entities. \n\n "
                    },
                    {
                        "role": "user",
                        "content": "Order pizza"
                    },
                    {
                        "role": "assistant",
                        "content": "{\"bot\":\"Sure, I can help with that. How many pizzas would you like to order?\",\"analysis\":\"The user initiated the pizza ordering process but didn't provide any details.\",\"entities\":[],\"conv_status\":\"ongoing\"}"
                    },
                    {
                        "role": "user",
                        "content": "5"
                    },
                    {
                        "role": "assistant",
                        "content": "{\"bot\":\"Great, 5 pizzas it is. What size would you like these pizzas to be?\",\"analysis\":\"The user has ordered 5 pizzas but has not specified the size or toppings.\",\"entities\":[{\"number_of_pizzas\":\"5\"}],\"conv_status\":\"ongoing\"}"
                    },
                    {
                        "role": "user",
                        "content": "Medium Size"
                    },
                    {
                        "role": "assistant",
                        "content": "{\"bot\":\"Perfect, 5 medium-sized pizzas. Now, could you please specify the toppings for your pizzas? Remember, each pizza can have a maximum of 3 toppings.\",\"analysis\":\"The user has ordered 5 medium-sized pizzas but has not specified the toppings.\",\"entities\":[{\"number_of_pizzas\":\"5\"},{\"size\":\"Medium\"}],\"conv_status\":\"ongoing\"}"
                    },
                    {
                        "role": "user",
                        "content": "okay\n\n{"
                    }
                ]
            },
            "Response Payload": {
                "id": "chatcmpl-90OAYDFFkbjgMxhMxKukwqklT6SWO",
                "object": "chat.completion",
                "created": 1709879934,
                "model": "gpt-4-0613",
                "choices": [
                    {
                        "index": 0,
                        "message": {
                            "role": "assistant",
                            "content": "{\"bot\":\"I'm sorry, but your message seems incomplete. Could you please specify the toppings for your pizzas? Remember, each pizza can have a maximum of 3 toppings.\",\"analysis\":\"The user's response was incomplete. The toppings for the pizzas are still needed.\",\"entities\":[{\"number_of_pizzas\":\"5\"},{\"size\":\"Medium\"}],\"conv_status\":\"ongoing\"}"
                        },
                        "logprobs": null,
                        "finish_reason": "stop"
                    }
                ],
                "usage": {
                    "prompt_tokens": 614,
                    "completion_tokens": 74,
                    "total_tokens": 688
                },
                "system_fingerprint": null
            },
            "Request Tokens": 614,
            "Response Tokens": 74
        }
    }
]
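
Each record carries its token counts under Payload Details (Request Tokens and Response Tokens), so the logs can be rolled up for quick usage reporting. A small sketch, assuming records follow the shape of the sample above:

# Sketch: total request/response tokens per feature from the records above.
# The key names ("Feature Name ", "Payload Details", "Request Tokens",
# "Response Tokens") are taken from the sample response; note the trailing
# space in "Feature Name " as it appears there.
from collections import defaultdict

def summarize_tokens(records):
    totals = defaultdict(lambda: {"request": 0, "response": 0})
    for record in records:
        feature = record.get("Feature Name ", "Unknown").strip()
        payload = record.get("Payload Details", {})
        totals[feature]["request"] += payload.get("Request Tokens", 0)
        totals[feature]["response"] += payload.get("Response Tokens", 0)
    return dict(totals)

# With the single sample record above:
# summarize_tokens(response.json())
# -> {'GenAI Node': {'request': 614, 'response': 74}}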