Exploring Bot Composer

What is Bot Framework Composer?

The Bot Framework Composer is an IDE built on top of Microsoft's Bot Framework SDK, with the goal of providing a powerful and intuitive tool for developing and deploying bots at a faster pace. Its flow-based visual designer (GUI) is easy to understand and get started with, yet customization doesn't feel compromised: the source code, whether C# or JavaScript, can be edited and extended directly by opening it in Visual Studio or Visual Studio Code.

While Composer has been available since the end of 2019, 2021 saw the release of its 2.0 version, which brings a good number of new features. These might even convince developers who found previous iterations of Composer's capabilities wanting to give it another chance.

I can confirm first-hand that the earliest versions had to be installed manually, as is typical for preview or beta releases; however, the workflow and intuitiveness remain much the same as today.

Of course, it is still possible to build bots directly with the SDK, without relying on Composer.

Let's go directly to code!!!

After installing the Electron app on Windows 10, we can choose any of the predefined templates, whether C# or Node.js.

For our purposes, we're going to select the Empty Bot template; however, choosing a more comprehensive template can save valuable time if our goal were to build a bot for production use. Going down the list, each template does everything the previous one does, adding capabilities on top - the Enterprise templates even integrate with Office 365 and Active Directory via the Graph API.

Aside from the empty bot, all templates require provisioning some additional Azure resources, either a cognitive language service or a QnA Maker (though for demo and exploration purposes, free tiers are available).


After selecting a template, we will also need to specify the runtime of our Bot, either an App Service or Azure Functions.


Construction Blocks

Before we start building our Bot, we need to understand the basics of the components that make up a Bot.

The main entry point of our Bot is a Dialog. There is always a single main dialog, but we can build any number of additional child dialogs (or choose to put all of our logic inside the main dialog). Each dialog can be viewed as a container of independent functionality. As the complexity of our Bot increases, more and more child dialogs will be added - for the most sophisticated real-world Bots, having hundreds of dialogs is not uncommon.

Each dialog contains one or more handlers, called Triggers. A trigger has a condition and a list of Actions to execute whenever the defined condition is met. A trigger can fire when the Bot recognizes the user's intent, on events in its dialog (lifecycle events), on activities (e.g., user joined, user is typing) and so on. Of course, there are several types of actions we could take: maybe we want to call an external API, set some internal properties/variables of the currently logged-in user, ask follow-up questions, or just respond with a simple text.
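Under the hood, Composer persists each dialog as a .dialog JSON file following the Adaptive Dialog schema. A heavily trimmed sketch of how a dialog with one trigger and one action is laid out (the values here are purely illustrative):

```json
{
  "$kind": "Microsoft.AdaptiveDialog",
  "triggers": [
    {
      "$kind": "Microsoft.OnConversationUpdateActivity",
      "actions": [
        {
          "$kind": "Microsoft.SendActivity",
          "activity": "Hello!"
        }
      ]
    }
  ]
}
```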


Creating our first interactions

It is quite reasonable to expect our Bot to greet us when we first open its chat window. This corresponds to an activity (the user joining) in our main dialog, so let's add a new trigger in the main dialog:


Composer will then ask us to specify the type of this trigger (Activities) and then the exact type of activity we would like to trigger on (Greeting).

At the start, a flow is generated with a loop and an if statement, which contains some default values:


Although it may seem a bit more complicated than expected (after all, we just want to send a "Hello" message), there is not much to do here, just housekeeping. We want to focus on the "Send a reply" card, where we can define the message with which the Bot should greet each user. To make our Bot a bit more exciting, we can add some alternatives, from which the Bot will choose one at random:
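In the underlying .lg file, such a set of alternatives is stored as a template with one line per variation, from which the runtime picks one at random. A sketch (the template name and first and last variations are hypothetical):

```
# SendActivity_Greeting
- Hello and welcome!
- Hi there!!!
- Good to see you!
```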


By default, the only language of our Bot will be English; however, if we navigate to our source files, we can see that under the hood, Composer saves our templates with a language identifier in the file name, such as JPDBlog.en-us.lg. In the settings, we can add new languages, which generates a copy of the .lg files where we can customize the responses accordingly.


Running the Bot on Direct Line Protocol

The Composer application ships with a built-in runtime manager that lets us run our Bot on localhost. To communicate with the Bot, we can opt for the built-in Web Chat as well; however, its reliability as of today might not be as good as running a local instance of the Bot Framework Emulator.


The reason we potentially have to install another application just to run the Bot locally is the communication protocol it uses under the hood: Direct Line (or Direct Line Speech), a standard HTTPS-based protocol. For the sake of technical accuracy, the Emulator actually builds on the JavaScript client for Direct Line, as well as on Web Chat. Web Chat is a fairly customizable way to integrate Bots into existing front-end applications - it even offers a component for React.

After connecting to the Bot, as expected, it greets us:


To validate that the bot changes its greeting randomly, we can restart the conversation and finally see our alternative "Hi there!!!" greeting.


Expressions, functions and language generation

We may want to greet our user in a more formal way, with a "Good morning", "Good afternoon" or "Good evening". To do this, we will need to evaluate an expression to determine the current time of day, and then switch between responses, depending on the current time.

We have access to a set of pre-built functions, of which we will need two: utcNow, to get the current date and time, as well as getTimeOfDay, which expects a timestamp as a parameter and returns 'morning', 'afternoon', 'evening' or 'midnight'. Since we will only need three of these values, we will have to process our response a bit more.

For more complex responses, it is a good idea to extract the logic from the "Send a response" step into the "Bot Responses" section and edit it directly there.


To access variables and dynamic/embedded functions, we will use string interpolation: if we set ${utcNow()} as the value, the Bot will evaluate it first and respond with the output.
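For example, a template line combining the two functions mentioned above could look like this (template name hypothetical):

```
# ShowTimeOfDay
- It is ${getTimeOfDay(utcNow())} right now.
```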

Now we can define an if statement to only respond with our three greetings as such:
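In LG template syntax, such a conditional response can be sketched as follows; the 'midnight' value falls into the ELSE branch, so we stay within our three greetings (template name hypothetical):

```
# TimeOfDayGreeting
- IF: ${getTimeOfDay(utcNow()) == 'morning'}
    - Good morning!
- ELSEIF: ${getTimeOfDay(utcNow()) == 'afternoon'}
    - Good afternoon!
- ELSE:
    - Good evening!
```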


Finally, we just need to make sure that our "Send Response" step returns the value of our new template: just delete the previously coded values and use the Bot icon - all templates in the scope will be selectable:


The editor also has a decent syntax highlighter, which makes it quite easy to write even more complex templates: functions can be combined, parameters can be passed along and so on, as we will see below.


Suggested Action Buttons and cards

Along with a text response, we can send additional elements with our response, most often used to enhance it with suggestions, buttons or follow-up options.

The simplest form is a suggested action. In the user interface, it will show up as a separate clickable bubble that will trigger a different action; in our case, we will set "Tell me a joke!" as the text for the suggested action. In order to respond with a joke, we will need to have our bot recognize and interpret the user's incoming "Tell me a joke!" text - in our main dialog, we can select the Default, Regular Expression or Custom type for its recognizer. For our case, Regular Expression will suffice.
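In LG structured-response syntax, a reply carrying a suggested action can be sketched roughly like this (template name and greeting text illustrative):

```
# GreetingWithSuggestion
[Activity
    Text = Hello and welcome!
    SuggestedActions = Tell me a joke!
]
```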

After this configuration, we can create our new trigger, which this time will be based on the user's intent, where the RegEx pattern must match our hint text "Tell me a joke!".

As for the actions under the new trigger, another "Send response" will be needed, with our joke of choice - this is all that is needed to create and handle the suggestions.


In addition to suggested action buttons, we can also attach cards to our text response: for example a sign-up card, a thumbnail, an audio or a video card. A good use case for cards might be a user searching for products: if the intent is recognized, returning more details with follow-up actions about the product is a better experience for the user than simply returning a link:
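A product result could be rendered as a Hero card via an LG structured template; the product data and template name below are purely illustrative:

```
# ProductCard
[HeroCard
    title = Contoso Travel Mug
    subtitle = Insulated stainless steel, 450 ml
    images = https://example.com/travel-mug.png
    buttons = View details | Add to cart
]
```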



Consuming an external API

In the real world, calling one or more APIs would probably be a fairly early requirement. In our case, when the user asks for the most popular books of the moment, we will call the New York Times Books API to get the top five bestselling juvenile books.

In a new trigger on a new and recognized user intent, we can choose the Send an HTTP request step:


Of note here is the OAuth login option: should the external API require it, that option would provide a login button for the user to identify themselves beforehand.

If we were to call other services in our Azure subscription, authentication could have been done via managed identities or service principals after deploying our Bot.

Since the NYTimes API uses an API key, we don't need to configure anything else, other than making sure we don't commit our API key to version control - if we were going to deploy to production, injecting sensitive data from a Key Vault during the CI/CD pipeline would be an option.

After populating the HTTP method, the URL and the necessary headers, we will need to store the result of the call in a variable to access in the next steps - for this, we can create a new variable in our dialog, dialog.nyt_api_response.
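In the saved dialog, the Send an HTTP request step boils down to a Microsoft.HttpRequest action. A trimmed sketch, with the NYT list name assumed and the API key left as a placeholder:

```json
{
  "$kind": "Microsoft.HttpRequest",
  "method": "GET",
  "url": "https://api.nytimes.com/svc/books/v3/lists/current/young-adult.json?api-key=<your-key>",
  "resultProperty": "dialog.nyt_api_response"
}
```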

Since the HTTP request may have either failed or succeeded, we will put an If block after our Send an HTTP request step. In it, we will check the status code: dialog.nyt_api_response.statusCode == 200. For both branches, true and false, a Send a response block is placed to reply to the user. At this point, our action should look like this:


After validating that everything works as expected, we can start processing the result of the successful call. First, we will only need the title and author from the response, so we will use the built-in select function (split across lines for better readability):
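Assuming the NYT payload ends up under dialog.nyt_api_response.content, the projection could look like this:

```
${select(dialog.nyt_api_response.content.results.books, book, book.title)}
```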

This would be roughly the LINQ equivalent of books.Select(book => book.title).

Since we want to display the authors as well, instead of returning only book.title, we will return a combined string: concat(book.title, ' by ', book.author).

We don't want to overwhelm the user with a long string, so we can wrap the selection in a take function, specifying how many elements to include in our result:
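Wrapping the previous selection in take, with five elements as the limit, could look like this:

```
${take(select(dialog.nyt_api_response.content.results.books, book, concat(book.title, ' by ', book.author)), 5)}
```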

Since take returns an array, we will have to concatenate the elements into a single string using the join function:
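The full pipeline, joined with a comma separator, could then read:

```
${join(take(select(dialog.nyt_api_response.content.results.books, book, concat(book.title, ' by ', book.author)), 5), ', ')}
```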

In the end, this would be roughly equivalent in LINQ to string.Join(", ", books.Take(5).Select(book => $"{book.title} by {book.author}")).
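For readers who picked the Node.js runtime instead, here is a rough TypeScript analogue of the same select/take/join pipeline, using hypothetical sample data shaped like the NYT books array:

```typescript
interface Book {
  title: string;
  author: string;
}

// Hypothetical sample data shaped like the NYT Books API "books" array
const books: Book[] = [
  { title: "Book A", author: "Author A" },
  { title: "Book B", author: "Author B" },
  { title: "Book C", author: "Author C" },
  { title: "Book D", author: "Author D" },
  { title: "Book E", author: "Author E" },
  { title: "Book F", author: "Author F" },
];

// select -> map, take -> slice, join -> join
const summary: string = books
  .slice(0, 5)
  .map((book) => `${book.title} by ${book.author}`)
  .join(", ");

console.log(summary);
```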

Since we already know how to attach a card to our answer, we can also offer the user the Amazon link to the number 1 best seller:

Where cardActionTemplate is defined as a template in Bot Responses:
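A sketch of how cardActionTemplate and its usage could look in LG; the parameter names are hypothetical, and amazon_product_url is assumed to be the field the NYT Books API exposes for the buy link:

```
# cardActionTemplate(title, url)
[CardAction
    type = openUrl
    title = ${title}
    value = ${url}
]

# BestSellerCard
[HeroCard
    title = ${dialog.nyt_api_response.content.results.books[0].title}
    buttons = ${cardActionTemplate('Buy on Amazon', dialog.nyt_api_response.content.results.books[0].amazon_product_url)}
]
```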

Finally, we end with the following answer, with a button that works:


Deployment and integration

We can start the deployment of our Bot in Azure from the Publish option of the Composer. We can create new resources, use existing ones or generate a request that we can forward to someone who has access to provision the resources, in case we have limited roles.

For our simple Bot, we will only need an App Registration, an Azure App Service Web App for hosting, and a Bot Channels Registration. After a few minutes, the resources are deployed and we can confirm our intention to push our Bot source to the cloud.

As of July 2, 2021, the deployment integration within a CI/CD pipeline in Azure DevOps is in Preview, so it is not yet ready for production. However, the pipeline steps would be recognizable to anyone who has ever created a pipeline for .NET web applications (dotnet build, publish and AzureWebApp tasks).

Once the deployment is complete, we can integrate the Bot into, for example, our existing React app, via the Web Chat botframework-webchat npm package. To authenticate with our Bot, we will only need a simple token, which we can generate ourselves directly in the Azure Portal from the Channels menu of the deployed Web App Bot.


Thanks for reading this post, I hope you enjoyed it and learned something useful.
