NLP Adapter Skill
This type of Skill provides a mapping between the Soul Machines Skill API and a third-party NLP system or chatbot platform. The mapping allows a Soul Machines Digital Person to present content from a third-party service as a natural conversation.
Generally an NLP Adapter Skill requires an API key or other form of credentials to be configured by a project creator in DDNA Studio, and then uses those credentials to connect to a third-party service in order to supply the actual content.
Adapter Skills can be registered in DDNA Studio as any of the following Skill types:
- Base Corpus:
skillType: "BASE_CORPUS"
- Fallback:
skillType: "DEFAULT"
with matchType: "FALLBACK"
It is the responsibility of the conversation engineer authoring the conversational content to ensure that their conversation correctly adheres to the requirements of either a Base Skill or Fallback Skill as needed.
In a hurry? You can download or clone an NLP Adapter Skill template app from GitHub.
These template apps contain the same code that is walked through in this document.
Create a new Web API project
Skills are implemented as HTTP services that send and receive JSON data. You can use whichever web application framework you are most comfortable with. You may even implement your Skill as a serverless function (lambda) if you are comfortable building them.
If you are unsure, we recommend choosing one of the following:
Install SkillSDK
The SkillSDK provides type support for Skill Development.
- NodeJS
- Python
npm i @soulmachines/smskillsdk
pip install smskillsdk
Create Your Skill Definition
For starters, we will create a Skill Definition file to configure the settings for our new Base Skill.
In your project root, create skill-definition.json with the following content:
{
  "name": "My NLP Adapter",
  "summary": "Connects a third-party platform as a conversation provider",
  "description": "",
  "status": "ACTIVE",
  "serviceProvider": "SKILL_API",
  "config": {
    "skillType": "BASE_CORPUS"
  }
}
Create the Execute endpoint
When a user speaks to a Digital Person, that speech is transcribed into text and sent to the Skills system to respond.
The Skills system will send the user input to your Skill's execute endpoint as an HTTP POST request.
In your web app, implement an execute route that receives a POST request and returns a Soul Machines ExecuteResponse object.
For more information about this endpoint, please refer to the API reference - Execute.
- ExpressJS
- Python
app.post('/execute', (req: Request, res: Response) => {
  // Construct SM-formatted response body
  const smResponse: ExecuteResponse = {
    output: {
      text: 'Hello world!',
    },
    endConversation: true,
  };
  res.send(smResponse);
});
@router.post("/execute", status_code=200, response_model=ExecuteResponse, response_model_exclude_unset=True)
async def execute(request: ExecuteRequest) -> ExecuteResponse:
    # Construct SM-formatted response body
    output = Output(
        text="Hello world!",
    )
    response = ExecuteResponse(
        output=output,
        endConversation=True,
    )
    return response
Add the execute endpoint to your skill-definition.json:
{
  "name": "My NLP Adapter",
  "summary": "Connects a third-party platform as a conversation provider",
  "description": "",
  "status": "ACTIVE",
  "serviceProvider": "SKILL_API",
  "endpointExecute": "https://yourname.loca.lt/execute",
  "config": {
    "skillType": "BASE_CORPUS",
    "configMeta": []
  }
}
Test the Execute endpoint
Once the endpoint has been set up, you can now serve your app and test it.
- ExpressJS
- Python
Replace PORT with the port number that you want to serve your app on.
PORT=3000 npm start
python3 app.py run
Once your app is up and running, call the following endpoint and verify that it returns the intended Soul Machines ExecuteResponse object.
Replace PORT with the port that your app is running on.
curl -X POST http://localhost:PORT/execute
Alternatively, you may also use an API platform such as Postman to send a POST request to the endpoint and verify the response.
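In practice, the Skills system sends a JSON ExecuteRequest body with the POST. A minimal test request might look like the following; the body shown is illustrative only, using the text and config fields discussed in this guide, so check the API reference - Execute for the full ExecuteRequest schema.

```shell
# Illustrative only: POST a minimal JSON body to the execute endpoint.
# Replace PORT with the port your app is running on; see the Execute API
# reference for the complete set of ExecuteRequest fields.
curl -X POST "http://localhost:PORT/execute" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello", "config": {}}'
```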
Capture Credentials
An Adapter Skill will need to capture some kind of credentials in order to authenticate and interact with a third-party platform. These can be configured in the skill definition, and will then be presented as form inputs to Studio users who select this Skill when configuring a project.
Under the config.configMeta property, add the fields corresponding to the credentials your Skill needs in order to communicate with the third-party platform.
Using "PASSWORD" instead of "TEXT" for the type masks the value in the Studio UI while the user is entering it.
Note: the name values defined here become the keys of the config object sent to your Skill. When developing in Python, you may prefer snake_case names (for example, first_credentials) by convention.
{
  "name": "My NLP Adapter",
  "summary": "Connects a third-party platform as a conversation provider",
  "description": "",
  "status": "ACTIVE",
  "serviceProvider": "SKILL_API",
  "config": {
    "skillType": "BASE_CORPUS",
    "configMeta": [
      {
        "label": "Third Party Credentials 1",
        "name": "firstCredentials",
        "required": true,
        "type": "TEXT"
      },
      {
        "label": "Third Party Credentials 2",
        "name": "secondCredentials",
        "required": true,
        "type": "PASSWORD"
      }
    ]
  }
}
The configuration provided by a DDNA Studio user is subsequently sent with each request body under its config property and is private to your Skill. In our Skill's execute endpoint, the config can be retrieved via req.body.config.
- ExpressJS
- Python
app.post('/execute', (req: Request, res: Response) => {
  // Get the Soul Machines request object
  const smRequest = req.body as ExecuteRequest;
  // Extract relevant data (eg credentials) from skill config
  const { firstCredentials, secondCredentials } = smRequest.config;
  // Construct SM-formatted response body
  const smResponse: ExecuteResponse = {
    output: {
      text: 'Hello world!',
    },
    endConversation: true,
  };
  res.send(smResponse);
});
@router.post("/execute", status_code=200, response_model=ExecuteResponse, response_model_exclude_unset=True)
async def execute(request: ExecuteRequest) -> ExecuteResponse:
    # Get the skill config from request object
    skill_config = request.config
    # Extract relevant data (eg credentials) from skill config
    credentials = itemgetter("first_credentials", "second_credentials")(skill_config)
    # Construct SM-formatted response body
    output = Output(
        text="Hello world!",
    )
    response = ExecuteResponse(
        output=output,
        endConversation=True,
    )
    return response
Integrate third-party API
The primary purpose of an NLP Adapter Skill is to map requests / responses between SM Skills and a third-party NLP Platform. This part of the implementation will be different for every third-party platform.
The Skill must take the user's input and send it to the third-party platform in the correct format for that platform.
The Skill must then take the response from the third-party platform, and map that back to a Skill ExecuteResponse format.
The example below shows how you might achieve this with a made-up platform called "Fake NLP Service".
- ExpressJS
- Python
app.post('/execute', async (req: Request, res: Response) => {
  // Get the Soul Machines request object
  const smRequest = req.body as ExecuteRequest;
  // Extract relevant data (eg credentials) from skill config
  const { firstCredentials, secondCredentials } = smRequest.config;
  // Get the user's input text
  const userInput = smRequest.text;
  // @TODO: Replace this with a connection to your own NLP Platform
  const fakeNLPService = new FakeNLPService(
    firstCredentials,
    secondCredentials
  );
  const fakeNLPServiceResponse = await fakeNLPService.send(userInput);
  // Extract relevant data from the third-party response
  const spokenResponse = fakeNLPServiceResponse.text;
  // Construct SM-formatted response body
  const smResponse: ExecuteResponse = {
    output: {
      text: spokenResponse,
    },
    endConversation: true,
  };
  res.send(smResponse);
});
@router.post("/execute", status_code=200, response_model=ExecuteResponse, response_model_exclude_unset=True)
async def execute(request: ExecuteRequest) -> ExecuteResponse:
    # Get the skill config from request object
    skill_config = request.config
    # Extract relevant data (eg credentials) from skill config
    credentials = itemgetter("first_credentials", "second_credentials")(skill_config)
    # Get the user's input text
    user_input = request.text
    # @TODO: Replace this with a connection to your own NLP Platform
    fake_nlp_service = FakeNLPService(*credentials)
    fake_nlp_response = fake_nlp_service.send(user_input)
    # Extract relevant data from the third-party response
    spoken_response = fake_nlp_response.text
    # Construct SM-formatted response body
    output = Output(
        text=spoken_response,
    )
    response = ExecuteResponse(
        output=output,
        endConversation=True,
    )
    return response
Once again, test the endpoint locally to verify that it works as intended.
Welcome Intent
Each project in DDNA Studio has the option to enable "My Digital Person should greet me at start". If this is turned on, then a message with the text "Welcome" will be sent on behalf of the user when the Digital Person session begins.
As such, a Skill should ensure that this Welcome message is supported, and that the Skill will respond appropriately. This may mean simply forwarding the "Welcome" text to your NLP Platform, or may require mapping the "Welcome" to a specific event for that platform.
There is no guarantee that the "Welcome" message will be sent to your particular Skill, as it may be disabled on a per-project basis. Alternatively, another Skill with higher priority may have handled the greeting, preventing the message from ever reaching your Skill.
- ExpressJS
- Python
app.post('/execute', async (req: Request, res: Response) => {
  // Get the Soul Machines request object
  const smRequest = req.body as ExecuteRequest;
  // Extract relevant data (eg credentials) from skill config
  const { firstCredentials, secondCredentials } = smRequest.config;
  // Get the user's input text
  const userInput = smRequest.text;
  // @TODO: Replace this with a connection to your own NLP Platform
  const fakeNLPService = new FakeNLPService(
    firstCredentials,
    secondCredentials
  );
  // Differentiate between "Welcome" message and actual user input
  let fakeNLPServiceResponse;
  if (userInput === 'Welcome') {
    // TODO: send a conversation initialization message
    // in the correct format for your NLP Platform.
    // eg. if your "fakeNLPService" expected a "START" message
    // then it might look something like this.
    fakeNLPServiceResponse = await fakeNLPService.send('START');
  } else {
    fakeNLPServiceResponse = await fakeNLPService.send(userInput);
  }
  // Extract relevant data from the third-party response
  const spokenResponse = fakeNLPServiceResponse.text;
  // Construct SM-formatted response body
  const smResponse: ExecuteResponse = {
    output: {
      text: spokenResponse,
    },
    endConversation: true,
  };
  res.send(smResponse);
});
@router.post("/execute", status_code=200, response_model=ExecuteResponse, response_model_exclude_unset=True)
async def execute(request: ExecuteRequest) -> ExecuteResponse:
    # Get the skill config from request object
    skill_config = request.config
    # Extract relevant data (eg credentials) from skill config
    credentials = itemgetter("first_credentials", "second_credentials")(skill_config)
    # Get the user's input text
    user_input = request.text
    # @TODO: Replace this with a connection to your own NLP Platform
    fake_nlp_service = FakeNLPService(*credentials)
    # Differentiate between "Welcome" message and actual user input
    if user_input == "Welcome":
        # TODO: send a conversation initialization message
        # in the correct format for your NLP Platform.
        # eg. if your "fake_nlp_service" expected a "START" message
        # then it might look something like this.
        fake_nlp_response = fake_nlp_service.send("START")
    else:
        fake_nlp_response = fake_nlp_service.send(user_input)
    # Extract relevant data from the third-party response
    spoken_response = fake_nlp_response.text
    # Construct SM-formatted response body
    output = Output(
        text=spoken_response,
    )
    response = ExecuteResponse(
        output=output,
        endConversation=True,
    )
    return response
Unhandled Intents
Most bot frameworks require all intents to be explicitly defined. This means that some user inputs may be unhandled, as the user might say something which does not match with any intent.
In this case, the Skill should respond with "NO_MATCH", indicating that the Skill was unable to respond to the user's input and that other Skills should be given an opportunity to respond.
If the NO_MATCH case is handled in the third-party system (for example, the DP responds with "I'm sorry, I don't understand"), then the Skill is considered a "FALLBACK" Skill: every input results in a DP speech response, no matter what. These Skills must be placed at the bottom of the skill stack, as they never respond with NO_MATCH and therefore never give other Skills an opportunity to respond.
- ExpressJS
- Python
app.post('/execute', async (req: Request, res: Response) => {
  // Get the Soul Machines request object
  const smRequest = req.body as ExecuteRequest;
  // Your existing code here to communicate with the third-party NLP platform
  // ...code
  // Extract relevant data from the third-party response
  const fakeNLPServiceResponse = await fakeNLPService.send(userInput);
  const spokenResponse = fakeNLPServiceResponse.text;
  // Construct base SM-formatted response body
  const smResponse: ExecuteResponse = { endConversation: true };
  // Example of a possible NO_MATCH when no response is returned by third-party
  if (spokenResponse) {
    // Only set text output if there is a response
    smResponse.output = {
      text: spokenResponse,
    };
  } else {
    // Set intent to NO_MATCH
    smResponse.intent = {
      name: 'NO_MATCH',
      confidence: 1, // or as defined by the third-party
    };
  }
  res.send(smResponse);
});
@router.post("/execute", status_code=200, response_model=ExecuteResponse, response_model_exclude_unset=True)
async def execute(request: ExecuteRequest) -> ExecuteResponse:
    # Get the skill config from request object
    skill_config = request.config
    # Extract relevant data (eg credentials) from skill config
    credentials = itemgetter("first_credentials", "second_credentials")(skill_config)
    # Get the user's input text
    user_input = request.text
    # @TODO: Replace this with a connection to your own NLP Platform
    fake_nlp_service = FakeNLPService(*credentials)
    fake_nlp_response = fake_nlp_service.send(user_input)
    # Extract relevant data from the third-party response
    spoken_response = fake_nlp_response.text
    # Construct base SM-formatted response body
    response = ExecuteResponse(
        endConversation=True,
    )
    # Example of a possible NO_MATCH when no response is returned by third-party
    if spoken_response:  # only set text output if there is a response
        response.output = Output(
            text=spoken_response,
        )
    else:  # set intent to NO_MATCH
        response.intent = Intent(
            name="NO_MATCH",
            confidence=1,  # or as defined by the third-party
        )
    return response
Fallback Skill
A "FALLBACK" Skill can be configured by pairing skillType: "DEFAULT" with matchType: "FALLBACK"; see Default Skill Type.
It is strongly recommended to create a new Skill Definition file and register this as a separate Skill.
However, you may choose to reuse your endpoints if you intend to use the same web application as your Base Skill.
{
  "name": "My Fallback Skill",
  "summary": "",
  "description": "",
  "status": "ACTIVE",
  "serviceProvider": "SKILL_API",
  "endpointExecute": "https://yourname.loca.lt/execute",
  "config": {
    "skillType": "DEFAULT",
    "matchType": "FALLBACK"
  }
}
Register skill in studio
Use the skill-definition.json to register your Skill in DDNA Studio via the Manage Skills page.
Then, you will be able to see and select the Skill on the project configuration page. Your Skill must be running when you deploy the project; otherwise the Skill won't be validated and the deploy will fail.
Advanced Concepts
Stateful Sessions
Most chatbots are stateful, meaning you want to start a single session and then continue interacting with that session for the duration of the DP interaction. This is important for multi-step conversations where session context is persisted across turns.
SM treats your Skill as stateless, so maintaining state is the Skill's responsibility. You may use the SM session ID to track state yourself, or store your own session identifier in SM memory for use on later turns.
Using the SM session: Every request includes a property "sessionId" which you can use to determine which end-user session the request came from.
Persisting your own session: Every request includes a property "memory" which includes key/value pairs for the current session. You may use memory to store your own sessionId and then read it on subsequent conversation turns.
It's recommended to store a sessionId as a "private" memory so that it will be accessible only to your own skill.
For more information about states, refer to Manage States.
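As a sketch of the memory approach, the helpers below store and retrieve a third-party session ID from the request's memory entries. The entry shape used here (name/value pairs with a scope field) and the "PRIVATE" scope value are assumptions for illustration; check the SkillSDK models for the exact fields your SDK version uses.

```python
# Illustrative helpers for persisting a third-party session ID in SM memory.
# The memory entry shape ({"name", "value", "scope"}) is an assumption;
# consult the SkillSDK models for the exact fields.

PRIVATE_SCOPE = "PRIVATE"  # assumed scope value for skill-private memory

def read_session_id(memories):
    """Return the stored third-party session ID, or None on the first turn."""
    for entry in memories or []:
        if entry.get("name") == "thirdPartySessionId":
            return entry.get("value")
    return None

def write_session_id(session_id):
    """Build the memory entries to include in the ExecuteResponse."""
    return [{
        "name": "thirdPartySessionId",
        "value": session_id,
        "scope": PRIVATE_SCOPE,  # private: accessible only to this skill
    }]
```

On the first turn, read_session_id returns None, so the Skill would create a new session with the third-party platform and return its ID via write_session_id; on later turns the stored ID is echoed back in the request's memory and can be reused.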
Context Variables
SM has a concept of context variables which may need to be mapped to the third-party system. Every request includes a public-variables property which may be used for sharing context variables.
Additional properties can also be added to the context variables, which will be shared with the UI.
For context variables which should be private to your skill, you should use the 'memory' feature to store and retrieve them. Memory values can be private, so that only your skill can access them.
Content Cards
SM expects content cards to be included in the response's context variables. There is a set of standard content cards which are available in the Widget.
Mapping a third-party "image" card to an SM "image" card will make that card available to show in the SM Widget UI. Note that cards are not shown automatically; they must be triggered by a @Showcards() command at the desired moment in the DP's response.
Third-party card types that do not have an SM equivalent are not supported.
- ExpressJS
- Python
app.post('/execute', async (req: Request, res: Response) => {
  // Get the Soul Machines request object
  const smRequest = req.body as ExecuteRequest;
  // Your existing code here to communicate with the third-party NLP platform
  // ...code
  // Examples of spoken and cards responses returned from the third-party
  const spokenResponse = `Hello! @Showcards(myImageCard) Here is a kitten.`;
  const cardsResponse = {
    myImageCard: {
      type: 'image',
      data: {
        url: 'https://placekitten.com/200/200',
        alt: 'An adorable kitten',
      },
    },
  };
  // Construct SM-formatted response body
  const smResponse = {
    output: {
      text: spokenResponse,
      variables: {
        public: {
          ...cardsResponse,
        },
      },
    },
    endConversation: true,
  } as ExecuteResponse;
  res.send(smResponse);
});
@router.post("/execute", status_code=200, response_model=ExecuteResponse, response_model_exclude_unset=True)
async def execute(request: ExecuteRequest) -> ExecuteResponse:
    # Your existing code to extract relevant data (eg. credentials)
    # ...code
    # Your existing code here to communicate with the third-party NLP platform
    # ...code
    # Examples of spoken and cards responses returned from the third-party
    spoken_response = "Hello! @Showcards(myImageCard) Here is a kitten."
    cards_response = {
        "myImageCard": {
            "type": "image",
            "data": {
                "url": "https://placekitten.com/200/200",
                "alt": "An adorable kitten",
            },
        },
    }
    # Construct SM-formatted response body
    variables = Variables(public=cards_response)
    output = Output(
        text=spoken_response,
        variables=variables,
    )
    response = ExecuteResponse(
        output=output,
        endConversation=True,
    )
    return response
For more information, see Content Cards.