Convert Vegetation Studio Pro’s Vegetation Mask to Biome Mask

Vegetation Masks (VM) can be a convenient way to allocate different vegetation types to an area. However, they don’t really play nicely with other Vegetation Studio features such as Vegetation Mask Lines (ML). The problem is that whatever vegetation you include in a VM will always override any attempts to use an ML to exclude vegetation. What does that all really mean? If you add a road to a scene then naturally you don’t want a huge tree in the middle of it. So you assign the road an ML and exclude all trees, simple. But if the trees are there because the road overlaps a VM then you’ll still have a forest in the middle of your road.

The answer to this is to use Biome Masks (BM). A BM fulfills a similar role: you can assign it different vegetation (along with lots of other options). An ML will correctly remove any vegetation from a BM, and therefore you can have tree-free roads.

“Oh no, but I have created lots of VMs”
You may have a scene that you’ve painstakingly created using VMs, or perhaps you’ve used the Real World Terrain asset which generates VMs from the underlying texture (nice). If you have, then you can use the following script to convert VMs to BMs.

Instructions:

  1. Create your Biome in Vegetation Studio (Pro), remember the Biome type
  2. Create an empty game object in your scene for every type of Vegetation Mask; e.g. Grasses, Trees. This will be the Biome’s parent object
  3. Assign the script to the parent game object containing your VMs
  4. Select the Biome type from (1)
  5. Enter the “Start With Mask” to find all the VMs whose names start with that text. So if you have VMs called “Grass 12344”, “Grass 54323”, etc., you would enter “Grass”
  6. Assign the associated empty game object from (2), i.e. drag that node into “Biome Parent”
  7. Repeat 3-6 for each type of conversion you want, e.g. another one with a “Start With Mask” of “Tree”
  8. To run a conversion select the “Should Run” checkbox

Once run you should find the converted Biome Masks under the appropriate parent objects. The original Vegetation Masks are still there but have been disabled.

using System.Collections;
using System.Collections.Generic;
using AwesomeTechnologies.VegetationSystem;
using AwesomeTechnologies.VegetationSystem.Biomes;
using UnityEngine;

namespace CBC
{
    [ExecuteInEditMode]
    public class VegToBiomeMask : MonoBehaviour
    {
        [SerializeField]
        Transform biomeParent;

        [SerializeField]
        BiomeType biomeType;

        [SerializeField]
        string startWithMask;

        [SerializeField]
        bool shouldRun = false;

        void Update()
        {
            if (shouldRun)
            {
                var defaultGameObject = new GameObject(); // template object that each Biome Mask is cloned from
                shouldRun = false;
                var childMasks = gameObject.GetComponentsInChildren<VegetationMaskArea>();
                foreach(var childMask in childMasks)
                {
                    if (childMask.enabled && childMask.name.StartsWith(startWithMask))
                    {
                        var nodes = childMask.Nodes;
                        var biomeMaskGameObject = Instantiate(defaultGameObject, biomeParent.transform);
                        biomeMaskGameObject.transform.position = new Vector3(childMask.transform.position.x, childMask.transform.position.y, childMask.transform.position.z);
                        biomeMaskGameObject.name = "B" + childMask.name;
                        var biomeMaskArea = biomeMaskGameObject.AddComponent<BiomeMaskArea>();
                        biomeMaskArea.ClearNodes();
                        foreach (var node in nodes)
                        {
                            biomeMaskArea.AddNode(new Vector3(node.Position.x, node.Position.y, node.Position.z));
                        }

                        // for some strange reason you have to reassign the positions again otherwise they all have an incorrect offset??
                        for (int x = 0; x < nodes.Count; x++)
                        {
                            var node = nodes[x];
                            var bNode = biomeMaskArea.Nodes[x];
                            bNode.Position = new Vector3(node.Position.x, node.Position.y, node.Position.z);
                        }

                        biomeMaskArea.BiomeType = biomeType;
                        childMask.enabled = false;
                    }
                }
            }
        }
    }
}

Overcoming namespace clashes when upgrading to Bot Framework 4.3

V4.3 comes with some nice additional support that I was eager to use. However, there is a problem. v4.3 (Microsoft.Bot.Builder.Azure) uses the latest variant of the Azure Storage library whereas Microsoft.AspNetCore.All, via Microsoft.AspNetCore.DataProtection.AzureStorage (2.2.0), uses the older variant. This can cause problems if your own code wishes to use one of the clashing types. E.g. if you add


CloudStorageAccount blah = new CloudStorageAccount(null, false);

Then you’ll get an error like The type X exists in both Y and Z, e.g.


error CS0433: The type 'CloudStorageAccount' exists in both 'Microsoft.Azure.Storage.Common, Version=9.4.2.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' and 'Microsoft.WindowsAzure.Storage, Version=9.3.2.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'

The only solution I’ve found is to use the rather obscure extern alias feature, plus some trickery I grabbed from SO – using extern alias.

Step 1 – Create (or ensure there is) an xml file in the root of your project called Directory.Build.targets (don’t put .xml as the extension)

Step 2 – populate with;

  
<Project>
  <Target Name="AddPackageAliases" BeforeTargets="ResolveReferences" Outputs="%(PackageReference.Identity)">
    <PropertyGroup>
      <AliasPackageReference>@(PackageReference->'%(Identity)')</AliasPackageReference>
      <AliasName>@(PackageReference->'%(Alias)')</AliasName>
    </PropertyGroup>

    <ItemGroup>
      <ReferencePath Condition="'%(FileName)'=='$(AliasPackageReference)'">
        <Aliases>$(AliasName)</Aliases>
      </ReferencePath>
    </ItemGroup>
  </Target>
</Project>
  

Step 3 – edit the project. Unload your bot project, edit it and find the reference you wish to alias, then add an Alias attribute. E.g. to add the alias AzureCommon;

<ItemGroup>
    <PackageReference Include="Microsoft.ApplicationInsights.DependencyCollector" Version="2.9.1" />
    <PackageReference Include="Microsoft.ApplicationInsights.TraceListener" Version="2.9.1" />
    <PackageReference Include="Microsoft.AspNetCore" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.All" />
    <PackageReference Include="Microsoft.Azure.KeyVault.Core" Version="3.0.3" />
    <PackageReference Include="Microsoft.Azure.Storage.Common" Version="9.4.2" Alias="AzureCommon"/>

Save and reload the project

Step 4 (optional) – That should be enough to provide the separation for the compiler. But if you want to use the aliased version then add the extern alias where you wish to use the clashing type, e.g.;


extern alias AzureCommon;
using AzureStore = AzureCommon;

...

AzureCommon.Microsoft.WindowsAzure.Storage.CloudStorageAccount blah =
new AzureCommon.Microsoft.WindowsAzure.Storage.CloudStorageAccount(null, false);

Step 5 – celebrate you’ve avoided this hiccup 🙂

Using the streamlined On handlers in Bot Framework v4.3

Bot Framework v4.3 has introduced a series of ‘On’ activity handlers to make your code more modular and easier to understand. Once you’ve updated your project references to 4.3 you need to change your main bot to use the new activity handler class;

public class MayBotBot : IBot
To
 public class MayBotBot : ActivityHandler

You’ll then be able to use override to discover the options, e.g.


protected virtual Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
{
    return Task.CompletedTask;
}

The other nice new feature that might slide under the radar is that when the framework looks for a dialog Id it will now search up the dialog stack to find one. I.e. you could, if you wanted to, AddDialog all your possible dialogs in the root dialog and remove every other AddDialog from everywhere else. Don’t do that, but in theory you could. The advantage here is that you can declare your common dialogs once and not have to keep adding them everywhere else.
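
For example, here’s a rough sketch of how I read that behaviour, assuming a ComponentDialog based bot (the dialog classes and the “SharedConfirm” prompt Id below are made up for illustration). The prompt is registered once on the root dialog and the child dialog begins it by Id without adding it itself, relying on the lookup walking up through the parent containers;

using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

public class RootDialog : ComponentDialog
{
    public RootDialog() : base(nameof(RootDialog))
    {
        // Common dialogs declared once, here on the root
        AddDialog(new ConfirmPrompt("SharedConfirm"));
        AddDialog(new ChildDialog());

        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            (step, ct) => step.BeginDialogAsync(nameof(ChildDialog), null, ct),
            (step, ct) => step.EndDialogAsync(step.Result, ct),
        }));

        InitialDialogId = nameof(WaterfallDialog);
    }
}

public class ChildDialog : ComponentDialog
{
    public ChildDialog() : base(nameof(ChildDialog))
    {
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            // "SharedConfirm" is NOT added to this dialog; the Id should be found
            // by searching up through the parent dialog containers
            (step, ct) => step.PromptAsync("SharedConfirm",
                new PromptOptions { Prompt = MessageFactory.Text("Continue?") }, ct),
            (step, ct) => step.EndDialogAsync(step.Result, ct),
        }));

        InitialDialogId = nameof(WaterfallDialog);
    }
}

If the Id genuinely can’t be found anywhere up the chain you’ll still get a failure, so the shared dialogs do need to live somewhere on the path to the root.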

Enjoy the chocolatey goodness of 4.3.

A debug TRACE with a filter

There are a number of options for diagnostic tracing in .net. Today I really wanted to trace a very specific set of API calls that sent and received JSON. I just wanted the batches of JSON, not other ‘noise’. I had a quick look around and it looked like I just wanted to write;

Trace.WriteLine(myJson, "CategoryOfJson");

The problem is that this is going to get swamped with all the other data. I had a look at sources, switches, listeners, etc. In the end I decided to write this filtered listener;

using System.Diagnostics;

public class JsonTraceListener : TextWriterTraceListener
{
    /// <summary>
    /// The filtered category
    /// </summary>
    public const string Category = "JSON";

    /// <summary>
    /// Initializes a new instance of the <see cref="JsonTraceListener"/> class.
    /// </summary>
    /// <param name="fileName">The name of the file the listener writes to.</param>
    /// <param name="name">The name of the new instance.</param>
    public JsonTraceListener(string fileName, string name)
        : base(fileName, name)
    {
    }

    /// <summary>
    /// WriteLine with a category override will call this too, prefixed with the category text.
    /// </summary>
    /// <param name="message">A message to write.</param>
    public override void WriteLine(string message)
    {
        if (message.StartsWith(Category))
        {
            base.WriteLine(message);
        }
    }

    /// <summary>
    /// Writes a category name and a message to the listener, followed by a line terminator.
    /// </summary>
    /// <param name="message">A message to write.</param>
    /// <param name="category">A category name used to organize the output.</param>
    public override void WriteLine(string message, string category)
    {
        if (category == Category)
        {
            base.WriteLine(message, category);
        }
    }
}

The listener just needs to be added and used;

var listener = new JsonTraceListener(@"C:\Logs\Json.log", "JsonListener");
listener.Filter = new EventTypeFilter(SourceLevels.All);
Trace.Listeners.Add(listener);
Trace.WriteLine("some json here", JsonTraceListener.Category);

// You must close or flush the trace to empty the output buffer.
Trace.Flush();

The code could be easily changed to allow any type of filter condition. I’m sure there is a nicer way to do this, feel free to comment.
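
If I wanted to generalise it, one option (just a sketch of my own, not something from the listener above) would be to pass the filter condition in as a predicate rather than hard-coding the category check;

using System;
using System.Diagnostics;

public class FilteredTraceListener : TextWriterTraceListener
{
    // Receives (message, category) and returns true if the entry should be written
    private readonly Func<string, string, bool> predicate;

    public FilteredTraceListener(string fileName, string name, Func<string, string, bool> predicate)
        : base(fileName, name)
    {
        this.predicate = predicate;
    }

    public override void WriteLine(string message)
    {
        if (predicate(message, null))
        {
            base.WriteLine(message);
        }
    }

    public override void WriteLine(string message, string category)
    {
        if (predicate(message, category))
        {
            base.WriteLine(message, category);
        }
    }
}

Usage would then look something like;

var listener = new FilteredTraceListener(@"C:\Logs\Json.log", "JsonListener",
    (msg, cat) => cat == "JSON" || msg.StartsWith("JSON"));

As with the original, the two-argument WriteLine in the base class routes back through the single-argument overload with the category prefixed, hence the StartsWith clause in the predicate.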

Deconstructing my issue with Fluent Design

Almost always when I see some form of promotion of Fluent Design I see demonstrations of Acrylic & Reveal and that’s it. The whole of Fluent Design distilled into cloned examples of two controls. This is my attempt to understand why that is. I have no knowledge of how the Fluent team worked or works. This is all guesswork, theory and my opinion. I will try to call out when it really is just me guessing, so here goes;

Looking at the headlines of the Fluent Design documentation you will see this, ‘Communicate using experience from physical world’, which further breaks down to; ‘Be engaging and immersive’, ‘Use Light’, ‘Create a sense of depth’ and ‘Build with the right material’.

Let’s take a look at a couple of these (which actually encompass all of them);

Use of light

Like it or not, this is almost always represented solely by the Reveal highlight/focus. Guess – this comes from the idea of using the physical world, where light helps to draw attention or add clarity to a subject. In a physical world these two things are obviously important in semi-dangerous situations like driving a car. The fuel gauge lights up and you can avoid the oncoming lack of mobility. Knowing the hot-plate/hob is on, etc., etc. However, these examples are about indicators, which are not, in themselves, a Fluent concept. Hence why we only really discuss the clarity aspect. A bit like shining a torch on an area to get a better view. This is useful when a physical situation has a problematic constraint (it’s dark) or there is a need to stand out (glow sticks, day running lights, etc.). Unfortunately the former reason is often used to promote Reveal and I do not buy it. Reveal suggests that you have a 2D screen UI that deliberately constrains features so that they only show when a mouse (or similar physical pointer) is near. I believe this is just bad design. Either the features need to be there as primary items or they do not. To make that design dependent on the pointing device is just the final nail in that coffin. However, if we specialise then the design becomes relevant. In a 3D environment the level of noise is exponentially greater than in 2D. The visual scope includes everything in the periphery and in depth, more so if you have transparent items. In these situations it can be argued that there is a benefit to de-emphasising some areas and highlighting others – pretty much what the human eye already does. Now put into this scenario a HoloLens-style selector-in-space, where trying to both concentrate on the selector and evaluate the scene for actionable content is quite hard. In this noisy scene, having things indicate that you are close to being able to action them is very helpful. Yay for Reveal.

Depth

Continuing with the subject of 3D, let’s also look at ‘Create a sense of depth’. At face value this is still very applicable in a 2D environment. “Is that message box blocking the canvas?” – you certainly want that to be obvious. But that isn’t what we really mean here (more on this later); what we typically consider with Fluent is Acrylic and Parallax. Let’s start with the easy one, Parallax. Yes it has its uses, and yes a lot of the time it is for some added sparkle. However, it is a fashion – who isn’t completely sick of the vertical parallax effect used in every other default Bootstrap site these days? (Side note – the self-same effect as seen on the landing page of Fluent.Microsoft.com.) So I’m ok with Parallax, it has a use, however I think it is a pretty tenuous claim to suggest it creates a sense of depth, at least not in any useful 3D sense of depth. Now, let’s talk about Acrylic. Ah, good old acrylic. Guess – let’s start with 3D worlds. One of the big advantages of augmented reality or virtual 3D objects in your world is that you may be able to see through them – finally the X-ray glasses that the back of all those kids’ comics promised. Wouldn’t it be great to have an idea of what’s behind this current UI object? You could have ‘Vanilla Sky’ style floating UIs everywhere – ok, that’s a horrible extreme but somewhere before that seems like a good idea. It is like a virtual desktop but in the z rather than the x axis. Having transparent UI helps you keep a grasp of where all these things are…maybe. Using tips from gaming you can also use light to further exaggerate the depth. So transparency and light seem useful design tools for 3D worlds.

Guess Summary

To reiterate my guesses;

Guess – has Reveal come from noisy 3D environments such as HoloLens, and does it really have little value in the more common 2D space? If so, is the ‘mouse is getting nearer’ scenario really just a stretch example to allow Reveal to tick the box of useful-across-devices? I suspect you can guess my answer to that.

Guess – Acrylic too has more concrete examples in 3D spaces, but there are one or two examples in 2D space: any temporary control nearer the user (e.g. a navigation menu fly-out) or a pinned hovering control (perhaps a floating translator). However, Acrylic has been sold pretty much as pure razzmatazz – it’s a graphically interesting way to show something off. Typically the background of a permanently showing navigation view/menu and the app title bar. I believe these are exactly the wrong places to use Acrylic, at least on a 2D device. I have heard a couple of justifications for using Acrylic in these demos;

  1. You can show off your lovely desktop and personalisation choices
  2. It’s a way of distinguishing between areas.

If this were a court of law I’d present (1) as evidence that there isn’t any justification, if that’s the best they can do. (2) is also a puzzle to me. For example, you have a navigation pane with a white background next to the main canvas that is also white. So if we conveniently ignore the idea of using whitespace properly, or borders, then we would have a problem. If we then introduce acrylic to the navigation view we get a blurred white space – not a great deal of help. So then we make it transparent to the host and the lovely white desktop shines through, oh. So as a designer we don’t want to take the risk that at best we’ll get a slightly tinted version, so we introduce colour to the acrylic to ensure our theme continues to work. So now we have a blue-tinted navigation view against our white canvas. So we add a picture behind the navigation view to allow acrylic to do its thing. We then adjust the various strengths in order to get something that doesn’t clash with the text. Then we review the design and decide it looks nice the first couple of times we look at it but then it gets distracting and looks like a poor UI choice. In the end we finish with a nice flat blue navigation view against a nice flat white canvas. “No acrylic here, please move along”.

What to do?

In my opinion Reveal is great for 3D worlds, and almost pointless everywhere else. In its attempt to be seen as useful across all devices it actually presents more problems. If you have a touch-only 2D device, then a UI design that says controls should be highlighted for the mouse but ‘lost’ to the touch user puts us on very thin ice. Acrylic should rarely be used. In a similar vein to Reveal, its real use is for specialised, not generic, scenarios. I believe you have to take the official guidance with a healthy dose of scepticism. E.g. ‘The things that surround us in the real world are sensory and invigorating. They bend, stretch, bounce, shatter, and glide. Those material qualities translate to digital environments, making people want to reach out and touch our designs.’ Ok, sounds good so far. ‘Add material to your UWP app:’, oooh yes, what, please tell us. ‘Acrylic’, eh, what, is that it? Aero Glass from Windows Vista? Ok, I’m being harsh, but my point is that acrylic is relied upon in an attempt to show off a concept and it should be seen as just that. It’s an advertising tool, not a practical one, and should be relegated to an ‘oh by the way…’ rather than being the constant flag bearer of Fluent.

Fluent isn’t just about Light, Depth and Material

Fluent Design is more than just Reveal & Acrylic. The problem is, the rest has all been said before. There is nothing wrong with giving the existing set of rules a new home. Guess – the issue I have is that, as UWP developers, people like me are typically the audience for UWP-related material. To put it another way, I suspect the demonstrations of Fluent Design have been mostly aimed at folks that have been designing & developing UWP already, i.e. the people that have followed the evolution, so we want to be shown what’s new. That’s the flaw. By concentrating on what’s new, and often on what looks shiny, Fluent Design has become synonymous with Reveal & Acrylic. That’s unfortunate. Recently, for the first time, I saw a demo on a new Windows release that was focused on Enterprise improvements. It didn’t have any Acrylic, and it looked great.

So Fluent Design evangelists, advocates, etc., here are my pleas;

  • Adaptive controls are fine, but some environments need specialised controls (keep Reveal in mind). Don’t be afraid to admit it. Some controls are better suited than others; no amount of ‘adapting’ can make an unsuitable design work. Sometimes you just have to swap out the control/design
  • Remember that Fluent Design is all of these other areas. Sure, we’ve seen them before, but that’s where the practical and applicable solutions live. Don’t reduce Fluent Design to a couple of sparkly surfaces and stop promoting it as such

MS Bot framework / LUIS gotcha

I’ve been training the LUIS service for use with the Microsoft Bot Framework and I committed the cardinal sin of changing two things in my code at once: adding some ‘clever’ OO stuff and adding a new set of LUIS Intents. When I tested my Bot I kept getting ‘The given key was not present in the dictionary’ when instantiating my LUIS Dialog. Turns out that if you have an Intent in LUIS but you have not YET implemented the handler in your LUIS Dialog then this is the error you get should the user hit that Intent. I suspect there is some way of specifying a default handler but I’ve yet to find it.
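
For what it’s worth, here’s a rough sketch of what I believe a catch-all handler looks like in a v3-style LuisDialog – the empty intent name is treated as the fallback – though I haven’t verified it against my own model and the model Id/key below are placeholders;

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("<model id>", "<subscription key>")] // placeholders, not real values
[Serializable]
public class MyLuisDialog : LuisDialog<object>
{
    // The empty intent name should act as the fallback when no other handler matches,
    // which ought to avoid the 'given key was not present' error for unimplemented Intents
    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync($"Sorry, I didn't understand: {result.Query}");
        context.Wait(MessageReceived);
    }
}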

Hive Offline again?

One of the joys of the British Gas Hive system is the scheduling of heating & water. However, my install has a particular tendency to just stop working. This is a problem because you don’t get any real warning apart from a freezing cold shower. Since this has just happened to me for the 3rd time in a year (twice in as many weeks) I thought I’d write down my guide to getting it online again. This is just for me, you should obviously phone the Hive helpline and NOT follow these instructions.

  1. First off, check the status of the service; https://status.hivehome.com/
  2. Look at the Hive Hub controller (probably next to your boiler). If it’s got a big red light on then ensure no other functions are on, i.e. no green lights on heating or water. Turn the hub off, wait a second or two, turn it back on again. For me this is a fuse-style switch.
  3. Now go to your graphical thermostat and press the knob in. If it says “no signal” then take it near to the Hub controller (see 2). Take a battery out of the thermostat, wait a second or two, put the battery back in. Press the centre knob again; you should now have a signal.
  4. Log on to your Hive App or the online web site. If the heating & water are still saying offline then go to the Hive Hub next to your router. Turn that off, wait a second or two and back on again. Wait for the green light to settle down. Once it has then you just have to wait about 15 mins. If you are using the App you may have to also kill that off and restart it (not just move away from it, you need to kill it). Hopefully it will be back online. If not, you’ll have to call the Hive help line.

Hive Keeper for Windows 10/Windows Phone

Since I became concerned about not realizing when Hive had gone offline I decided to write a little helper. Providing the machine you install it on is running, it might help you detect when Hive has gone offline. You can install it from the Microsoft Store.

MS Load Test – System.IO.IOException Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host

Today I was tearing my hair out trying to figure out why my Web Performance Tests were not playing back correctly. Each run stubbornly displayed, ‘System.IO.IOException Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host’. Turns out that the WebTestRequest wasn’t being issued with the correct TLS protocol version. My fix for now is to place the following in the WebTest constructor;

 
public WebTestCoded()
{
    this.PreAuthenticate = true;
    this.Proxy = "default";

    // Force TLS 1.2 so the scripted https requests can negotiate with the server
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
}

Forcing Specflow to generate files using a specific version

I recently ran into an annoying problem with the lovely Specflow. The latest Visual Studio Extension for Specflow was version 1.x. That shouldn’t be very important as each individual project installs all the juicy code via Nuget packages, in my example version 2.x. In theory you should be able to create a new feature file and the 1.x Extension should use the 2.x code generator. This is especially important as 2.x now uses a later version of NUnit whereas 1.x still uses older, unsupported attributes. To put it another way, if you want to use the latest version of NUnit then you need to be using the latest version of the Specflow code generator. The problem is that every now and again the 1.x Extension becomes confused and reverts back to using the 1.x generator. This causes the build to fail, and worse still you typically have to restart Visual Studio before it magically decides to use the 2.x generator…ARRGH!

Fortunately not all is lost. When the Extension starts it looks into the App.Config for various settings, one of which is the path to the generator it should use. So be explicit;

<specFlow>
    <!-- For additional details on SpecFlow configuration options see http://go.specflow.org/doc-config -->
    <generator
       allowDebugGeneratedFiles="false"
       allowRowTests="true"
       generateAsyncTests="false"
       path="<your path>\packages\SpecFlow.2.2.1\tools" />
  </specFlow>