Download Transcript option in Botframework v4

This is a whistle-stop tour of adding a method to allow the user to download their transcript.

Automatically record the transcript

You need to add the built-in transcript services. In this example I’ll use the in-memory store; you’ll want to evaluate the other stores for production code. Add this to Startup.cs;

 
var memoryTranscriptStore = new MemoryTranscriptStore();
…
options.Middleware.Add(new TranscriptLoggerMiddleware(memoryTranscriptStore));

Next we’ll create a component to expose the ITranscriptStore (memoryTranscriptStore) to the dialog context. We’ll do that via our own middleware;

public class TranscriptProvider : IMiddleware
{
    private readonly ITranscriptStore transcriptStore;

    public TranscriptProvider(ITranscriptStore transcriptStore)
    {
        this.transcriptStore = transcriptStore;
    }

    public async Task OnTurn(ITurnContext context, MiddlewareSet.NextDelegate next)
    {
        context.Services.Add(transcriptStore);
        await next();
    }
}

Now add this to the startup middleware;

options.Middleware.Add(new TranscriptProvider(memoryTranscriptStore));
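
Putting those pieces together, the registration might look something like this inside ConfigureServices (a sketch assuming the v4 preview AddBot&lt;T&gt; pattern and a bot class named MyBot, which isn’t shown in this post):

public void ConfigureServices(IServiceCollection services)
{
    services.AddBot<MyBot>(options =>
    {
        // In-memory store: fine for testing, evaluate the other stores for production.
        var memoryTranscriptStore = new MemoryTranscriptStore();

        // Record every activity into the transcript store.
        options.Middleware.Add(new TranscriptLoggerMiddleware(memoryTranscriptStore));

        // Expose the same store to the turn context via our own middleware.
        options.Middleware.Add(new TranscriptProvider(memoryTranscriptStore));
    });
}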

Download the transcript

The middleware will now capture all the activity traffic to and from the bot and user. We can add a simple mechanism to request the transcript file. In your bot’s OnTurn method we can hardcode a ‘Transcript’ message/command;

if (context.Activity.Text.Equals("Transcript", StringComparison.InvariantCultureIgnoreCase))
{
    var transcriptStore = context.Services.Get<ITranscriptStore>();
    var transcripts = await transcriptStore.GetTranscriptActivities(context.Activity.ChannelId, context.Activity.Conversation.Id);

    var transcriptContents = new StringBuilder();
    foreach (var transcript in transcripts.Items.Where(i => i.Type == ActivityTypes.Message))
    {
        transcriptContents.AppendLine((transcript.From.Name == "Bot" ? "\t\t" : "") + transcript.AsMessageActivity().Text);
    }

    byte[] bytes = StringToBytes(transcriptContents.ToString());

    var contentType = "text/plain";
    var attachment = new Attachment
    {
        Name = "Transcript.txt",
        ContentUrl = $"data:{contentType};base64,{Convert.ToBase64String(bytes)}",
        ContentType = contentType
    };

    var activity = MessageFactory.Attachment(attachment);
    await context.SendActivity(activity);

    return;
}
...
private byte[] StringToBytes(string transcriptToSend)
{
    byte[] bytes = new byte[transcriptToSend.Length * sizeof(char)];
    System.Buffer.BlockCopy(transcriptToSend.ToCharArray(), 0, bytes, 0, bytes.Length);
    return bytes;
}
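
As an aside, if the raw UTF-16 bytes from BlockCopy look odd when the file is opened in an editor, a UTF-8 version of the helper is a simple swap (a sketch, not what the original code does):

private byte[] StringToBytes(string transcriptToSend)
{
    // Encode as UTF-8 so the resulting .txt opens cleanly in most editors.
    return System.Text.Encoding.UTF8.GetBytes(transcriptToSend);
}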

When your user types in ‘Transcript’ they’ll be provided with a download attachment called ‘Transcript.txt’.

Production Ready

The above code is great for early testing but you should probably consider using the Download Activity Type and providing a URL to the full transcript instead. The above code has a nasty weakness in that the transcript must fit inside the maximum reply payload for the bot, ~94K. You could just truncate the body, but I’ll leave that up to you. Note, as of writing the emulator has an issue where it will allow you to click on the download but then it gets into a pickle and launches the Windows Store. If you try this in webchat it works fine.
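
If you do go the truncation route, something along these lines would work (the character cap here is an illustrative figure I’ve picked, not an official limit):

var body = transcriptContents.ToString();

// Illustrative cap; tune this to whatever your channel's payload limit turns out to be.
const int maxTranscriptChars = 90000;
if (body.Length > maxTranscriptChars)
{
    body = body.Substring(0, maxTranscriptChars) + Environment.NewLine + "... transcript truncated ...";
}

byte[] bytes = StringToBytes(body);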


Manipulating waterfall steps Botframework v4

To be honest I’m not even sure that this is strictly supported or that it is even a good idea, but as a point of interest you can manipulate the waterfall steps in v4. E.g. the standard flow is step 1 -> step 2 -> step 3 -> step n. If your code realises that the user should skip a step then it can invoke the ‘next’ function;

async (dc, args, next) =>
{
    if (someCondition)
    {
        // Nothing to ask here, so pass the args straight on to the next waterfall step.
        await next(args);
        return;
    }

    // ...otherwise carry on with this step as normal.
}

That’s pretty easy. The difficult question, and one that I’m not even sure is (or should be) a valid one, is: how do you go back a step? Well, it is possible, but it’s messy and you can’t get back to the initial step (although that is just starting again). You can go back a step by manipulating the dialog state;

// at step n
async (dc, args, next) =>
{
    if (someCondition)
    {
        // Rewind the waterfall's step counter so the dialog resumes at step 1...
        var targetStep = 1;
        dc.ActiveDialog.Step = targetStep - 1;

        // ...then re-issue the prompt that step 1 normally sends (your own helper method).
        await Step1Prompt();
        return;
    }
}

I don’t recommend this approach, it’s ugly, but you know…possible.

ChoicePrompt, how to always call the validator in Botframework v4

BotFramework v4 has a number of helper prompts: TextPrompt, ChoicePrompt, etc. One common mechanism they share is a Validator delegate: when the user enters a value the Validator is invoked and you have an opportunity to check or change the outcome.

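The original ChoicePrompt setup isn’t reproduced here, but the kind of prompt being discussed looks roughly like this (the choice values, and the ChoicePromptOptions/RetryPromptString names, are my assumptions based on the v4 preview SDK):

// Somewhere in a waterfall step: offer a short list where 'quit' happens to be the 3rd choice.
await dc.Prompt("choicePrompt", "How many players?", new ChoicePromptOptions
{
    Choices = new List<Choice>
    {
        new Choice { Value = "1" },
        new Choice { Value = "2" },
        new Choice { Value = "quit" }
    },
    RetryPromptString = "Please pick one of the listed options."
});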

The presence of the RetryPromptString means the ChoicePrompt will automatically retry if the user enters an incorrect value, such as 'frog'. However, what happens if the user enters the value '3'? Unfortunately this is treated as the 3rd choice and 'quit' will be selected. If your UI is really serving up numbers like this, that could be a real problem. Imagine if the list was 2, 4, 6 and you entered '3', or even worse '2'! So I really want to add a Validator delegate, something all prompts support;

this.Dialogs.Add("choicePrompt", new ChoicePrompt(Culture.English, ValidateChoice));

private async Task ValidateChoice(ITurnContext context, ChoiceResult toValidate)
{
    var userMessage = context.Activity.Text;
    if (userMessage == "3")
    {
        // Clearing the status marks the result as invalid so the prompt retries.
        toValidate.Status = null;
        await Task.Delay(1); // no real async work; this just satisfies the async signature
    }
}

Sorted, right? Wrong. Unfortunately there are two problems with this solution: a) the validator is only called when a value from the choices list is selected (really??) and b) the resulting selected value is passed in rather than the original, i.e. ‘quit’ is passed in rather than ‘3’. My solution is to derive a new ChoicePrompt that will always call the available Validator with the original values;

public class ChoicePromptAlwaysVerify : Microsoft.Bot.Builder.Dialogs.ChoicePrompt
{
    private readonly PromptValidatorEx.PromptValidator<ChoiceResult> validator;

    public ChoicePromptAlwaysVerify(string culture, PromptValidatorEx.PromptValidator<ChoiceResult> validator = null) : base(culture, validator)
    {
        this.validator = validator;
    }

    protected override async Task<ChoiceResult> OnRecognize(DialogContext dc, PromptOptions options)
    {
        var recognize = await base.OnRecognize(dc, options);
        if (this.validator != null)
        {
            await this.validator.Invoke(dc.Context, recognize);
        }

        return recognize;
    }
}

The code works by forcing the recognize override to call the validator. The downside is that this code will be called twice when the user makes a good choice (sigh), but it’s a small sacrifice to regain some consistent control over the valid values. It also allows for more specialised messages, since the RetryPromptString is fixed and has no chance to give a contextual response.
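
Swapping it in is just a change to the registration (assuming the same ValidateChoice handler shown earlier):

this.Dialogs.Add("choicePrompt", new ChoicePromptAlwaysVerify(Culture.English, ValidateChoice));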

Creating a reusable TextPrompt in Bot Framework V4

The TextPrompt mechanism in V4 is fine and can implement a variety of validation techniques because it uses a delegate. However, delegates can create a lot of code noise, especially if you have a validation mechanism that you wish to reuse. Consider the following ‘Hello World’ of Waterfall steps;

public MyBot()
{
    dialogs = new DialogSet();
    dialogs.Add("greetings", new WaterfallStep[]
    {
        async (dc, args, next) =>
        {
            // Prompt for the guest's name.
            await dc.Prompt("textPrompt","What is your name?");
        },
        async(dc, args, next) =>
        {
            // args; Value: "<name>", Text: "<name>"
            var userResponse = args["Text"] as string;
            await dc.Context.SendActivity($"Hi {userResponse}!");
            await dc.End();
        }
    });

    // add the prompt, of type TextPrompt
    dialogs.Add("textPrompt", new Microsoft.Bot.Builder.Dialogs.TextPrompt(TextValidation));
}

private async Task TextValidation(ITurnContext context, TextResult toValidate)
{
    if (toValidate.Text.Length < 4)
    {
        toValidate.Status = null;
        await context.SendActivity("Sorry needs to be > 4");
    }
}

The problem is that the TextValidation delegate is ugly to re-use. I.e. I want a nicer way to share a simple length validation. This is my solution;

public class ValidatingTextPrompt : Microsoft.Bot.Builder.Dialogs.TextPrompt
{

    public static ValidatingTextPrompt Create(int minimumLength, string minimumLengthMessage)
    {
        var obj = new ValidatingTextPrompt(async (context, toValidate) =>
            {
                if (toValidate.Text.Length < minimumLength)
                {
                    toValidate.Status = null;
                    await context.SendActivity(minimumLengthMessage);
                }
            }
            );
        return obj;
    }

    public ValidatingTextPrompt(PromptValidatorEx.PromptValidator<TextResult> validator) : base(validator)
    {
    }
}

Then you can swap out the TextPrompt with the more specialized code;

dialogs.Add("textPrompt", ValidatingTextPrompt.Create(5, "I can't remember such a short name, please try again"));

If you have thoughts about a better way then please feel free to comment.

A debug TRACE with a filter

There are a number of options for diagnostic tracing in .NET. Today I really wanted to trace a very specific set of API calls that sent and received JSON. I just wanted the batches of JSON, not the other ‘noise’. I had a quick look around and it looked like I just wanted to write;

Trace.WriteLine(myJson, "CategoryOfJson");

The problem is that this is going to get swamped with all the other trace data. I had a look at trace sources, switches, listeners, etc. In the end I decided to write this filtered listener;

public class JsonTraceListener : TextWriterTraceListener
{
    /// <summary>
    /// The filtered category
    /// </summary>
    public const string Category = "JSON";

    /// <summary>
    /// Initializes a new instance of the <see cref="JsonTraceListener"/> class.
    /// </summary>
    /// <param name="fileName">The name of the file the listener writes to.</param>
    /// <param name="name">The name of the new instance.</param>
    public JsonTraceListener(string fileName, string name)
        : base(fileName, name)
    {
    }

    /// <summary>
    /// WriteLine with a category override will call this too, prefixed with the category text.
    /// </summary>
    /// <param name="message">A message to write.</param>
    public override void WriteLine(string message)
    {
        if (message.StartsWith(Category))
        {
            base.WriteLine(message);
        }
    }

    /// <summary>
    /// Writes a category name and a message to the listener you create when you implement the <see cref="TraceListener"/> class, followed by a line terminator.
    /// </summary>
    /// <param name="message">A message to write.</param>
    /// <param name="category">A category name used to organize the output.</param>
    public override void WriteLine(string message, string category)
    {
        if (category == Category)
        {
            base.WriteLine(message, category);
        }
    }
}

The listener just needs to be added and used;

var listener = new JsonTraceListener(@"C:\Logs\Json.log", "JsonListener");
listener.Filter = new EventTypeFilter(SourceLevels.All);
Trace.Listeners.Add(listener);
Trace.WriteLine("some json here", JsonTraceListener.Category);

// You must close or flush the trace to empty the output buffer.
Trace.Flush();

The code could be easily changed to allow any type of filter condition. I’m sure there is a nicer way to do this, feel free to comment.
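
For instance, here is a sketch of a variant that takes an arbitrary predicate instead of a hard-coded category (the class and delegate are my own, not part of the framework):

public class FilteredTraceListener : TextWriterTraceListener
{
    private readonly Func<string, string, bool> filter;

    public FilteredTraceListener(string fileName, string name, Func<string, string, bool> filter)
        : base(fileName, name)
    {
        this.filter = filter;
    }

    public override void WriteLine(string message, string category)
    {
        // Only write entries that the supplied predicate accepts.
        if (filter(message, category))
        {
            base.WriteLine(message, category);
        }
    }
}

// Usage: keep anything in the "JSON" category or anything that looks like a JSON payload.
var listener = new FilteredTraceListener(@"C:\Logs\Json.log", "JsonListener",
    (message, category) => category == "JSON" || message.TrimStart().StartsWith("{"));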

Deconstructing my issue with Fluent Design

Almost always when I see some form of promotion of Fluent Design I see demonstrations of Acrylic & Reveal and that’s it. The whole of Fluent Design distilled into cloned examples of two controls. This is my attempt to understand why that is. I have no knowledge of how the Fluent team worked or works. This is all guesswork, theory and my opinion. I will try and call out when it really is just me guessing, so here goes;

Looking at the headlines of the Fluent Design documentation you will see this, ‘Communicate using experience from physical world’, which further breaks down to; ‘Be engaging and immersive’, ‘Use Light’, ‘Create a sense of depth’ and ‘Build with the right material’.

Let’s take a look at a couple of these (which actually encompass all of them);

Use of light

Like it or not, this is almost always represented solely by the Reveal highlight/focus. Guess – this comes from the idea of using the physical world, where light helps to draw attention or add clarity to a subject. In the physical world these two things are obviously important in semi-dangerous situations like driving a car. The fuel gauge lights up and you can avoid the oncoming lack of mobility. Knowing the hot-plate/hob is on, etc., etc. However, these examples are about indicators, which are not, in themselves, a Fluent concept. Hence we only really discuss the clarity aspect. A bit like shining a torch on an area to get a better view. This is useful when a physical situation has a problematic constraint (it's dark) or there is a need to stand out (glow sticks, daytime running lights, etc.). Unfortunately the former reason is often used to promote Reveal, and I do not buy it. Reveal suggests that you have a 2D screen UI that includes deliberate constraints on features, so that they only show when a mouse (or similar physical pointer) is near. I believe this is just bad design. Either the features need to be there as primary items or they do not. To make that design dependent on the pointing device is just the final nail in that coffin. However, if we specialise then the design becomes relevant. In a 3D environment the level of noise is exponentially greater than in 2D. The visual scope includes everything in the periphery and in the depth, more so if you have transparent items. In these situations it can be argued that there is a benefit to de-emphasising some areas and highlighting others – pretty much what the human eye already does. Now put into this scenario a HoloLens-style selector-in-space, where trying both to concentrate on the selector and to evaluate the scene for actionable content is quite hard. In this noisy scene, having things indicate that you are close to being able to action them is very helpful. Yay for Reveal.

Depth

Continuing with the subject of 3D, let's also look at 'Create a sense of depth'. At face value this is still very applicable in a 2D environment. "Is that message box blocking the canvas?" – you certainly want that to be obvious. But that isn't what we really mean here (more on this later); what we typically consider with Fluent is Acrylic and Parallax. Let's start with the easy one, Parallax. Yes it has its uses, and yes a lot of the time it is for some added sparkle. However, it is a fashion – who isn't completely sick of the vertical parallax effect used in every other default Bootstrap site these days? (Side note – the self-same effect as seen on the landing page of Fluent.Microsoft.com.) So I'm ok with Parallax, it has a use, however I think it is a pretty tenuous claim to suggest it is creating a sense of depth, at least not in any useful 3D sense. Now, let's talk about Acrylic. Ah, good old acrylic. Guess – let's start with 3D worlds. One of the big advantages of augmented reality or virtual 3D objects in your world is that you may be able to see through them – finally the X-ray glasses that the back of all those kids' comics promised. Wouldn't it be great to have an idea of what's behind the current UI object? You could have 'Vanilla Sky'-style floating UIs everywhere – ok, that's a horrible extreme, but somewhere before that seems like a good idea. It is like a virtual desktop, but on the z rather than the x axis. Having transparent UI helps you keep a grasp of where all these things are…maybe. Using tips from gaming you can also use light to further exaggerate the depth. So transparency and light seem useful design tools for 3D worlds.

Guess Summary

To reiterate my guesses;

Guess – has Reveal come from noisy 3D environments such as HoloLens, but really has little value in the more common 2D space? If so, is the 'mouse is getting nearer' scenario really just a stretch example to allow Reveal to tick the useful-across-devices box? I suspect you can guess my answer to that.

Guess – Acrylic too has more concrete examples in 3D spaces, but there are one or two examples in 2D space: any temporary control nearer the user (e.g. a navigation menu fly-out) or a pinned hovering control (perhaps a floating translator). However, Acrylic has been sold pretty much as pure razzmatazz; it's graphically interesting, a way to show something off. Typically the background of a permanently showing navigation view/menu and the app title bar. I believe these are exactly the wrong places to use Acrylic, at least on a 2D device. I have heard a couple of justifications for using Acrylic in these demos;

  1. You can show off your lovely desktop and personalisation choices
  2. It’s a way of distinguishing between areas.

If this were a court of law I'd present (1) as evidence that there isn't any justification, if that's the best they can do. (2) is also a puzzle to me. For example, you have a navigation pane with a white background next to the main canvas that is also white. So if we conveniently ignore the idea of using whitespace properly, or borders, then we would have a problem. If we then introduce acrylic to the navigation view then we get a blurred white space – not a great deal of help. So then we make it transparent to the host and the lovely white desktop shines through, oh. As a designer we don't want to take the risk that at best we'll get a slightly tinted version, so we introduce colour to the acrylic to ensure our theme continues to work. So now we have a blue-tinted navigation view against our white canvas. So we add a picture behind the navigation view to allow acrylic to do its thing. We then adjust the various strengths in order to get something that doesn't clash with the text. Then we review the design and decide it looks nice the first couple of times we look at it, but then it gets distracting and looks like a poor UI choice. In the end we finish with a nice flat blue navigation view against a nice flat white canvas. "No acrylic here, please move along".

What to do?

In my opinion Reveal is great for 3D worlds and almost pointless everywhere else. In its attempt to be seen as useful across all devices it actually presents more problems. If you have a touch-only 2D device, then a UI design where controls are highlighted for the mouse but 'lost' to the touch user puts us on very thin ice. Acrylic should rarely be used. In a similar vein to Reveal, its real use is in specialised, not generic, scenarios. I believe you have to take the official guidance with a healthy dose of scepticism. E.g. 'The things that surround us in the real world are sensory and invigorating. They bend, stretch, bounce, shatter, and glide. Those material qualities translate to digital environments, making people want to reach out and touch our designs.' – ok, sounds good so far. 'Add material to your UWP app:' – oooh yes, what, please tell us. 'Acrylic' – eh, what, is that it? Aero Glass from Windows Vista? Ok, I'm being harsh, but my point is that acrylic is relied upon in an attempt to show off a concept and it should be seen as just that. It's an advertising tool, not a practical one, and should be relegated to an 'oh by the way…' rather than being the constant flag bearer of Fluent.

Fluent isn’t just about Light, Depth and Material

Fluent Design is more than just Reveal & Acrylic. The problem is, the rest has all been said before. There is nothing wrong with giving the existing set of rules a new home. Guess – the issue I have is that, as a UWP developer, I (and others like me) am typically the audience for UWP-related material. To put it another way, I suspect the demonstrations of Fluent Design have mostly been aimed at folks who have been designing & developing for UWP already, i.e. the people that have followed the evolution, so we want to be shown what's new. That's the flaw. By concentrating on what's new, and often on what looks shiny, Fluent Design has become synonymous with Reveal & Acrylic. That's unfortunate. Recently, for the first time, I saw a demo of a new Windows release that was focused on Enterprise improvements. It didn't have any Acrylic, and it looked great.

So Fluent Design evangelists, advocates, etc., here are my pleas;

  • Adaptive controls are fine, but some environments need specialised controls (keep Reveal in mind). Don't be afraid to admit it. Some controls are better suited than others; no amount of 'adapting' can make an unsuitable design work. Sometimes you just have to swap out the control/design
  • Remember that Fluent Design is all of these other areas too. Sure, we've seen them before, but that's where the practical and applicable solutions live. Don't reduce Fluent Design to a couple of sparkly surfaces, and stop promoting it as such

MS Bot framework / LUIS gotcha

I’ve been training the LUIS service for use with the Microsoft Bot Framework and I committed the cardinal sin of changing two things in my code: adding some ‘clever’ OO stuff and adding a new set of LUIS Intents. When I tested my bot I kept getting ‘The given key was not present in the dictionary’ when instantiating my LUIS Dialog. It turns out that if you have an Intent in LUIS but you have not YET implemented the handler in your LUIS Dialog, then this is the error you get should the user hit that Intent. I suspect there is some way of specifying a default handler but I’ve yet to find it.
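
For reference, the per-intent handlers in a v3 LuisDialog look something like this; the intent name 'BookFlight' is just an example, and (I believe) the 'None'/empty-string intents act as the catch-all for anything you haven't explicitly handled:

[Serializable]
public class MyLuisDialog : LuisDialog<object>
{
    // Every LUIS intent the model can return needs a handler like this,
    // otherwise hitting it throws 'The given key was not present in the dictionary'.
    [LuisIntent("BookFlight")]
    public async Task BookFlight(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Let's book that flight...");
        context.Wait(MessageReceived);
    }

    // 'None' (and, I believe, the empty string) acts as a catch-all handler.
    [LuisIntent("None")]
    [LuisIntent("")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I didn't understand that.");
        context.Wait(MessageReceived);
    }
}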