
Integrating with DDNA Studio Insights

visualizing conversation nodes in DDNA Studio Insights

Overview

Skills can be integrated with DDNA Studio Insights so that project owners can visualize the conversations happening in the projects that use those Skills.

This can be achieved by returning the Conversation Variables in your Skill's execute endpoint response.

These variables are usually sent by the service provider that the Skill uses, whose Corpus Conversations have been annotated at the relevant points of the conversation.

info

Please refer to Annotating Corpuses if you also intend to annotate the Corpus of the service provider that your Skill uses. Otherwise, please reach out to the person in charge of the annotations.

Once annotated, the currentSpeechContext can be extracted from those conversation points and processed by DDNA Studio Insights into Conversation Nodes.

Conversation Variables

In order to get the currentSpeechContext lines to show up, your Skill will need to return Conversation Variables in the execute endpoint response.

You will need to coordinate with the Corpus maintainer of the service provider that your Skill uses to have these Variables set and sent to your Skill.

Your Skill must then return these Variables in its execute endpoint response.

info

We currently use the same naming convention in both our Python and JS SDKs, so these Variables must be sent with exactly the names below.
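As a sketch, the full set of Variables could be typed as follows. The interface name is illustrative; only the property names are prescribed by Insights, and only conv_id is required:

```typescript
// Illustrative interface: the property names are the exact keys expected
// by DDNA Studio Insights; only conv_id is required.
interface ConversationVariables {
  conv_id: string;      // required
  conv_intent?: string;
  conv_type?: string;   // one of the predefined node types described below
  conv_tag?: string;
}

const variables: ConversationVariables = {
  conv_id: "Welcome",
  conv_intent: "Welcome",
  conv_type: "Entry",
  conv_tag: "Skill.MySkill.GreetUser",
};
```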

conv_id (required)

string

This is a required property. Your response must include at least this property for the currentSpeechContext, and subsequently the Conversation Variables, to show up.

A unique ID that identifies this position in the Corpus.

This is strongly recommended to be the same as the name of the Intent to be returned in the response.

If an Intent is not present, consider setting this to be a human-readable string that is unique to this conversation to allow you to distinguish various responses.

Examples

  • When Intent is Welcome, the conv_id should also be labelled as Welcome
  • When Intent is not present, the conv_id should be labelled based on the position of the conversation at that time:
    • Welcoming the user - eg. Welcome
    • Answering the user's question - eg. QuestionA
    • Saying goodbye to the user - eg. Goodbye
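The fallback behaviour above can be sketched as a small helper. resolveConvId is a hypothetical name, not part of any SDK; it assumes the Intent name arrives as an optional string:

```typescript
// Hypothetical helper: use the Intent name when one is present, otherwise
// fall back to a human-readable label for this point in the conversation.
function resolveConvId(intentName: string | undefined, fallbackLabel: string): string {
  return intentName && intentName.length > 0 ? intentName : fallbackLabel;
}

// With an Intent:  resolveConvId("Welcome", "Greeting") returns "Welcome".
// Without one:     resolveConvId(undefined, "QuestionA") returns "QuestionA".
```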

conv_intent

string

Similarly to conv_id, this takes the form of the Intent or a string based on the position of the conversation.

conv_type

string ['Default', 'Entry', 'Terminal', 'Chit-Chat', 'Conversation Management']

This is a predefined classification for a given conversation node type.

  • Default: The default node value.
  • Entry: The very first node in a conversation, usually a greeting.
  • Terminal: A node that marks the conversation as completed. This could be a goodbye, an escalation, or a similar node that does not need a follow-up from the persona. This is useful for identifying conversations that are abandoned mid-path.
  • Chit-Chat: Parts of the conversation that mimic small talk; these usually do not have a clear goal and are intended to keep the user engaged, e.g. "How are you today?"
  • Conversation Management: Nodes that ask the user for clarification. The most common case here is speech-to-text management, e.g. "Could you repeat that?" or "I did not understand."
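Since conv_type only accepts these predefined values, one option is to model them as a union type and validate strings coming from the Corpus before responding. This is a sketch; the type and function names are illustrative:

```typescript
type ConvType =
  | "Default"
  | "Entry"
  | "Terminal"
  | "Chit-Chat"
  | "Conversation Management";

const CONV_TYPES: readonly string[] = [
  "Default",
  "Entry",
  "Terminal",
  "Chit-Chat",
  "Conversation Management",
];

// Narrow an arbitrary string from the Corpus to a valid ConvType.
function isConvType(value: string): value is ConvType {
  return CONV_TYPES.includes(value);
}
```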

conv_tag

string

User-defined, more granular classification for a given node type. This is intended to be used in conjunction with conv_type.

When passing this value into the Skill, it is strongly encouraged to prefix it with Skill.MySkillName. (where MySkillName is your Skill's name).

Examples

  • When the conv_tag sent by the Corpus is GreetUser and the Skill name is My Skill, the conv_tag returned by the Skill should be Skill.MySkill.GreetUser.
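One way to apply the recommended prefix is a small helper. toSkillConvTag is a hypothetical name, and it assumes spaces are simply dropped from the Skill name, as in the example above:

```typescript
// Hypothetical helper: build the recommended conv_tag from the Skill name
// and the tag supplied by the Corpus.
function toSkillConvTag(skillName: string, corpusTag: string): string {
  // "My Skill" -> "MySkill"
  const compactName = skillName.replace(/\s+/g, "");
  return `Skill.${compactName}.${corpusTag}`;
}

// toSkillConvTag("My Skill", "GreetUser") returns "Skill.MySkill.GreetUser".
```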

Execute Endpoint

In order to include these Conversation Variables in the response of your Skill's execute endpoint, you will need to extract the Variables from the service provider's response and add them to your Skill's response.

app.ts
app.post('/execute', async (req: Request, res: Response) => {
  // other code here

  // Make a request to the third-party service
  const fakeNLPService = new FakeNLPService(firstCredentials, secondCredentials);

  // Extract the relevant response data from the third-party service
  const { spokenResponse, intent, convVariables } = await fakeNLPService.send(userInput);

  /**
   * convVariables is an object in this example:
   * const convVariables: Variables = {
   *   conv_id: intent.name, // using the Intent name from the third-party service
   *   conv_intent: intent.name,
   *   conv_type: "Entry",
   *   conv_tag: "Skill.MySkill.GreetUser",
   * };
   */

  // Construct an SM-formatted response body
  const smResponse = {
    intent,
    output: {
      text: spokenResponse,
      variables: { ...convVariables },
    },
    endConversation: true,
  } as ExecuteResponse;

  res.send(smResponse);
});
