21 February 2017

A generic toggle component for HoloLens apps

Intro

The following scenario is one I have seen a lot of times – the user taps on a UI element, and then it and/or a couple of other elements need to fade out, disappear, whatever. I suppose every developer has felt the itch that occurs when you build essentially the same thing for the second time, and you feel a third and a fourth time coming up. Time to spin up a new reusable component. Meet Toggler and its friend, Togglable.

The Toggler

This is a simple script that you can attach to any object that will function as a toggle – a ‘button’ if you like. It’s so simple and concise I just write the whole thing in one go:

using System;
using System.Collections.Generic;
using HoloToolkit.Unity.InputModule;
using UnityEngine;

namespace HoloToolkitExtensions
{
    public class Toggler : MonoBehaviour, IInputClickHandler
    {
        private AudioSource _selectSound;

        public List<Togglable> Toggles = new List<Togglable>();

        public virtual void Start()
        {
            _selectSound = GetComponent<AudioSource>();
        }

        public virtual void OnInputClicked(InputClickedEventData eventData)
        {
            foreach (var toggle in Toggles)
            {
                toggle.Toggle();
            }
            if (_selectSound != null)
            {
                _selectSound.Play();
            }
        }
    }
}

This thing has a list of Togglable objects. When it's clicked, it calls the Toggle method on all Togglable objects in the list, and optionally plays a feedback sound to confirm the toggle has been clicked.

The Togglable

This is almost embarrassingly simple.

using UnityEngine;

namespace HoloToolkitExtensions
{
    public abstract class Togglable : MonoBehaviour
    {
        public abstract void Toggle();
    }
}

and in itself completely uninteresting. What is interesting, though, is that you can use this base class to implement behaviours that actually do something useful (which is the point of base classes, usually. D'oh). I will give a few examples.

A toggleable that ‘just disappears’

Also not very complicated, although there's a bit more to it than you would think:

namespace HoloToolkitExtensions
{
    public class ActiveTogglable : Togglable
    {
        public bool IsActive = true;
        public virtual void Start()
        {
            gameObject.SetActive(IsActive);
        }

        public override void Toggle()
        {
            IsActive = !IsActive;
            gameObject.SetActive(IsActive);
        }

        public virtual void Update()
        {
            // This code makes sure the logic still works if someone
            // sets the IsActive field directly
            if (IsActive != gameObject.activeSelf)
            {
                gameObject.SetActive(IsActive);
            }
        }
    }
}

So when Toggle is called, SetActive is called with either true or false, and that will make the game object it's attached to wink in and out of existence.

A toggleable that fades in or out

This is a bit more work, but with the use of LeanTween animating opacity is pretty easy:

using UnityEngine;

namespace HoloToolkitExtensions
{
    public class FadeTogglable : Togglable
    {
        public bool IsActive = true;
        public float RunningTime = 1.5f;
        private bool _isBusy = false;
        private Material _gameObjectMaterial;

        public virtual void Start()
        {
            Animate(0.0f);
            _gameObjectMaterial = gameObject.GetComponent<Renderer>().material;
        }

        public override void Toggle()
        {
            IsActive = !IsActive;
            Animate(RunningTime);
        }

        public virtual void Update()
        {

            // This code makes sure the logic still works if someone
            // sets the IsActive field directly
            if (_isBusy)
            {
                return;
            }
            if (IsActive != (_gameObjectMaterial.color.a == 1.0f))
            {
                Animate(RunningTime);
            }
        }

        private void Animate(float timeSpan)
        {
            _isBusy = true;
            LeanTween.alpha(gameObject, 
                IsActive ? 1f : 0f, timeSpan).setOnComplete(() => _isBusy = false);
        }
    }
}

Initially it animates to the initial state in 0 seconds (i.e. instantly), and when the Toggle is called it animates in the normal running time from totally opaque to transparent – or the other way around.

There is a little caveat here – the object that needs to fade in or out needs to use a material that actually supports transparency. So, for instance:

image

So what is the point of all this?

I have created a little sample application to demonstrate the point. There is one 'button' – a rotating blue sphere with red ellipses on it – and four elements that need to be toggled when the button is clicked: two cubes that simply need to wink out, and two capsules that need to fade in and out:

image

You drag the ActiveTogglable on both cubes, and the FadeTogglable on both capsules. In fact, I did it a little bit differently: I made a prefab of both the cube and the capsule and dragged two instances of each into the scene. Force of habit. But in the end it does not matter. What does matter is that, once you have dragged a Toggler script on top of the sphere, you can simply connect the Toggler and the Togglables in the Unity editor, like this:

image

Which makes it pretty darn powerful and reusable, I'd say – and extendable, since nothing keeps you from implementing your own Togglables. See the sketch below for an example.
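
For instance, here is a hedged sketch (not part of the original sample) of a custom Togglable that simply enables or disables another behaviour – say, a rotation script – when toggled. The class name BehaviourTogglable and its Target field are illustrative only:

using UnityEngine;

namespace HoloToolkitExtensions
{
    // Hypothetical example: flips the 'enabled' state of whatever behaviour
    // you drag onto the Target field in the Unity editor
    public class BehaviourTogglable : Togglable
    {
        public MonoBehaviour Target;

        public override void Toggle()
        {
            if (Target != null)
            {
                Target.enabled = !Target.enabled;
            }
        }
    }
}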

The result in action looks like this:

Why not an interface instead of a superclass?

Yeah, that's what I thought too. But just try it – components that can be dragged on top of each other need to be just that: components. So everything you drag needs to be a component at minimum, but you want the concrete classes to be behaviours. So you have to use a base class that's a behaviour too. Welcome to the wondrous world of Unity, where nothing is what it seems – or what you think it is supposed to be ;)

Concluding remarks and some thoughts about 3D interfaces

Remember how Apple designed skeuomorphic user interfaces that, for instance, required you to take a book out of a bookshelf? For young people, who may never have held many physical books, that's about as absurd as the floppy disk icon for 'save' – which is still widely used. But it worked in the real world, so we took it to the digital 2D world, even when it no longer made sense. Microsoft took the lead with what was then called 'Metro' for 'digital native' flat design. Now buttons no longer mimic 3D physical controls (radio buttons and heaven knows what).

We are now in the 2007 of 3D UI design. No one has any idea how to implement true 3D 'user interfaces', and there is no standard at all. So we tend to fall back on what worked – 2D design elements, or 3D design elements that resemble physical objects, like 3D 'light switch buttons' attached to some 'wall'. Guilty as charged – my HoloLens app for Schiphol has a 2D 'help screen', complete with button.

With my little rotating globe I am trying to find a way towards '3D digital native design', although I am not a designer at all. But I am convinced the future is somewhere in that direction. We need a 'digital design language' for Mixed Reality. Maybe it's rotating globes. Maybe it's something else. But I am sure as hell about what it's not – and that is floating 2D or 3D buttons, or 'devices' resembling physical machinery.

Code, as per my trademark, can be found here.

11 February 2017

A behaviour for dynamically loading and applying image textures in HoloLens apps

Intro

After two nearly code-less posts it's time for something more code-heavy, although it's still outside my ordinary mode of operation: it's fairly short and there's not much code. So rest assured, not the code equivalent of "War and Peace" as usual ;)

For both a customer app and one of my own projects I needed to be able to download images from an external source to use as textures on a Plane (a flat object with essentially only width and height). Now that's not that hard – the Unity scripting reference has a clear example of how to do that. But for my own project I needed to be able to change the image (that is, reload an image on a Plane that had already loaded a texture before), and I also had to make sure the image was not distorted by width/height ratio differences between the Plane and the image. That required a radically different approach.

Enough talk: code!

The behaviour itself is rather small and simple, even if I say so myself. It starts as follows:

using UnityEngine;

public class DynamicTextureDownloader : MonoBehaviour
{
    public string ImageUrl;
    public bool ResizePlane;

    private WWW _imageLoader = null;
    private string _previousImageUrl = null;
    private bool _appliedToTexture = false;

    private Vector3 _originalScale;

    void Start()
    {
        _originalScale = transform.localScale;
    }

    void Update()
    {
        CheckLoadImage();
    }
}

ImageUrl is a property you can set either from code or from the editor, and it points to the location of the desired image on the web. ResizePlane (default false) determines whether or not you want the Plane to resize to fit the width/height ratio of the image. You may not always want that, as the center of the Plane stays in place – for instance, if the Plane's top is aligned with something else and the resizing makes the Plane's height decrease, that may ruin your experience.

The first three private fields are status variables; the last one is the original scale of the Plane before we started messing with it. We need to retain that, as we can't trust the scale once we start messing with it. I have seen the Plane become smaller and smaller when I alternated between portrait and landscape pictures.

The crux is the CheckLoadImage method:

private void CheckLoadImage()
{
    // No image requested
    if (string.IsNullOrEmpty(ImageUrl))
    {
        return;
    }

    // New image set - reset status vars and start loading new image
    if (_previousImageUrl != ImageUrl)
    {
        _previousImageUrl = ImageUrl;
        _appliedToTexture = false;
        _imageLoader = new WWW(ImageUrl);
    }

    if (_imageLoader.isDone && !_appliedToTexture)
    {
        // Apparently an image was loading and is now done. Get the texture and apply
        _appliedToTexture = true;
        Destroy(GetComponent<Renderer>().material.mainTexture);
        GetComponent<Renderer>().material.mainTexture = _imageLoader.texture;
        Destroy(_imageLoader.texture);

        if (ResizePlane)
        {
            DoResizePlane();
        };
    }
}

This might seem mightily odd if you are a .NET developer, but that's because of the nature of Unity. Keep in mind this method is called from Update, so it's called 60 times per second. The flow is simple:

  • If ImageUrl is null or empty, just forget it
  • If an ImageUrl is set and it is a new one, reset the two status variables and make a new WWW object. You can see this as a kind of WebClient. Key to know is that it's async, and its isDone property only becomes true once the download has finished. So while it's downloading, the next part is skipped
  • If, however, the WWW object is done, we will need to apply it to the texture, but only if we did not do so before. So then we actually apply it.

So after the image is applied, the first if clause is false, because we have an ImageUrl. The second one is false, because the last loaded URL is equal to the current one. And finally, the last if clause is false because the texture has been applied. So although it's called 60 times a second, it essentially does nothing – until you change the ImageUrl.

An important note – you see that I first destroy the existing Renderer's texture, then load the WWW's texture into the renderer, and then destroy the WWW's texture again. If you are using a lot of these objects in one project and have them change image regularly, Unity's garbage collection process cannot keep up, and on a real device (i.e. a HoloLens) you will run out of memory soon. The nasty thing is that this won't happen quickly in the editor or an emulator. This is why you always need to test on a real device. And this is also why I had to update this post later ;)

Resizing in correct width/height ratio

Finally the resizing – that's not very hard, it turns out. As long as you keep in mind that the Plane's 'natural posture' is 'flat on the ground', so what you tend to think of as X is indeed X, but what you tend to think of as Y is in fact Z in the 3D world.

private void DoResizePlane()
{
    // Keep the longest edge at the same length
    if (_imageLoader.texture.width < _imageLoader.texture.height)
    {
        transform.localScale = new Vector3(
            _originalScale.z * _imageLoader.texture.width / _imageLoader.texture.height,
            _originalScale.y, _originalScale.z);
    }
    else
    {
        transform.localScale = new Vector3(
            _originalScale.x, _originalScale.y,
            _originalScale.x * _imageLoader.texture.height / _imageLoader.texture.width);
    }
}

It also turns out a loaded texture handily comes with its own size attributes, which makes it pretty easy to do the resize.

Sample app

I made a really trivial HoloLens app that shows two (initially empty) Planes floating in the air. I have given them different width/height ratios on purpose (in fact they mirror each other):

image

I have dragged the behaviour on both of them. One will show my blog's logo (that's a landscape picture) and one comes from an Azure Blob container and shows a portrait-oriented picture of… well, see for yourself. If you deploy this app – or just hit the play button in Unity – they will initially show this:

image

If you air tap on one of the pictures you get this:

image

In the second picture, the pictures are a lot larger as they fit 'better' into the Plane. If you click a picture again, they will swap back. The only thing that actually changes is the value of ImageUrl – see the sketch below.
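
As an aside, here is a hedged sketch of how such a swap could be wired up – the class name, field names and the idea of handling the tap on the Plane itself are my own assumptions for illustration, not the actual sample code (and depending on your HoloToolkit version the handler parameter may be InputEventData instead of InputClickedEventData):

using HoloToolkit.Unity.InputModule;
using UnityEngine;

// Hypothetical helper: air tapping the Plane flips ImageUrl between two pictures
public class ImageSwapper : MonoBehaviour, IInputClickHandler
{
    // Illustrative fields - set these to two image URLs in the editor
    public string FirstUrl;
    public string SecondUrl;

    private DynamicTextureDownloader _downloader;

    void Start()
    {
        _downloader = GetComponent<DynamicTextureDownloader>();
        _downloader.ImageUrl = FirstUrl;
    }

    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Changing ImageUrl is all it takes - CheckLoadImage picks it up in Update
        _downloader.ImageUrl =
            _downloader.ImageUrl == FirstUrl ? SecondUrl : FirstUrl;
    }
}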

Bonus brownie points and eternal fame, by the way, for the first one who correctly tells me who the person in the picture is, and at what occasion this picture was taken :D.

Some concluding remarks

If you value your sanity, don't mess with the rotation of the Planes themselves. Just pack them into an empty game object and rotate that, so the local coordinate system is still 'flat' as far as the Planes are concerned. I have seen all kinds of weird effects when you start messing with Plane orientation and location. Not sure why that is – probably things I don't quite understand yet – but you have been warned.

This is (or will be) part of something bigger, but I wanted to share the lessons learned separately, to prevent them from getting lost in some bigger picture. In the meantime, you can view the demo project with source here.

03 February 2017

Using a HoloLens scanned room inside your HoloLens app

HoloLens can interact with reality – that's why it's a mixed reality device, after all. The problem is that the reality you need is not always available. For example, if the app needs to run in a room or facility at a (client) location you only have limited access to, and you have to locate stuff relative to places in that room. Now you can of course use the simulation in the device portal and capture the room.

image

You can save the room into an XEF file and upload that to (another) HoloLens. That works fine at runtime, but in Unity that doesn't help you much with getting a feeling for the space, and moreover, it messes with your HoloLens' spatial mapping. I don't like to use it on a real live HoloLens.

There is another option though, in the 3D view tab:

image

If you click "Update", you will see a rendering of the space the HoloLens has recognized, belonging to the 'space' it finds itself in – basically, the whole mesh. In this case, the ground floor and first floor of my house (I never felt like taking the HoloLens to the 2nd floor). If you click "Save" it will offer to save a SpatialMapping.obj. That's the simple Wavefront Object format. And this is something you actually can use in Unity.

Only it looks rather crappy, even if you know what you are looking at. This is the side of my house, with at the bottom left the living room (the rectangular thing is the large cupboard), on top of that the master bedroom* with the slanted roof, and if you look carefully, you can see the stairs coming up from the hallway a little right of center, at the bottom of the house.

image

What is also pretty confusing is the fact that meshes have only one side. This has the peculiar effect that in a lot of places you can look into the house from outside, but not out of the house from within. Anyway. This mesh is way too complex (the file is over 40 MB) and messy.

image

Fortunately – there's Meshlab. And it's free too. Thank heavens, because after you have bought a HoloLens you are probably quite out of money ;)

Meshlab has quite some tools to make your mesh a bit smoother. Usually, when you look at a piece of mesh, like for instance the master bedroom, it looks kinda spiky – see left. But that changes after choosing Filters/Remeshing, Simplification and Reconstruction/Simplification: Quadric Edge Collapse Decimation:

imageimage

My house starts to look a lot less like the lair of the Shrike – it's more like an undiscovered Antonio Gaudi building now. Hunt down the material used (in the materials subfolder), set it to transparent and play with color and transparency. I thought this somewhat transparent brown worked pretty well. Although there's definitely still a lot of 'noise', it now definitely looks like my house – good enough for me to know where things are, or ought to be.

Using this Hologram of a space, you can position Kinect-scanned objects or 3D models relative to each other based upon their true relative positions, without actually being in the room. Then, when you go back to the real room, all you have to do is make sure the actual room coincides with the scanned room model – add world anchors to the models inside the room, and then get rid of the room Hologram. Thus, you can use this virtual room as a kind of 'staging area', which I successfully did for a client location to which physical access is very limited indeed.

You might notice a few odd things – there are two holes in the floor of the living room; that is where the black leather couch and the black swivel chair are. As I've noticed before, black doesn't work well with the HoloLens spatial mapping. I also find the rectangular area that seems to float about a meter from the left side of the house fascinating. That's actually a large mirror that hangs on the bedroom wall, but the HoloLens spatial mapping apparently sees it as an area lying further back. Very interesting. So this not only gives you a view of my house, but also of a few HoloLens quirks.

The project shown above, with both models (the full and the simplified one) in it, can be found here.

* I love using the phrase "master bedroom" in relation to our house – as it conjures up images of a very large room like those found in a typical USA suburban family house. I can assure you neither our house nor our bedroom does justice to that image. This is a Dutch house. But it is made out of concrete, unlike most houses in the USA, and will probably last way beyond my lifespan.

31 January 2017

Scanning physical objects with an Xbox One Kinect to use as Holograms in HoloLens

Intro

image

There are several demos out there that show obviously scanned models of people or physical objects used in HoloLens applications. I think it was fellow MVP and Dresdener 3D genius René Schulte who first used his bust in a demo, or at least went public with it. Unfortunately I have not been able to find very much about the actual process used to do the scanning and get to this result, so I have been trying to kludge together a procedure to make a full color 3D scan of a physical object – myself – and show it in a HoloLens.

Shopping list

For this you will need the following hardware:

And the following software:

Setting up the hardware

The Kinect adapter, when you remove it from the box, seems to be quite an intricate contraption of two boxes and three wires, one of them permanently connected to one of the boxes. That box – I'll call it box #1 – is about the size of a pack of cigarettes, and is the power supply. There is a wire with a mains plug that needs to be connected to this box. The other wire – the one permanently attached to box #1 – has a round plug on the other side. That plug goes into box #2 (the one without a fixed wire), which is about the size of an overly thick Mars bar. Now we have one wire unaccounted for – one with a weird squarish plug on one side and a USB 3.0 plug on the other. The squarish part also goes into box #2, next to the round plug coming from the power supply. Now there's only one hole unaccounted for – on the other side of box #2 – and that's where you need to connect the actual Kinect 2.

Plug in the mains cable connected to box #1, and connect the USB plug coming from box #2 to your computer's USB 3.0 port. This will start the installation of a couple of drivers, and I seem to recall it also automatically installed the 3D Scan app that you can also find in the store here.

Downloading additional software

If the 3D Scan app did not install automatically, you will need to install it from the Windows Store. Also, download and install CloudCompare. Finally, download the shader. That's a zip file – we will need it later in the process, so store it in your downloads folder and unblock it.

The actual scanning

The 3D Scan app is actually rather straightforward. You can set a few options, and I noticed that you have to fiddle a lot with the settings to get the effect you want. A higher scan resolution gives a lower depth, for instance. Also, Kinect sometimes just loses track of the object it tries to track. What you do have to make sure of is that there is plenty of light, and prevent shadows – because although the actual object tracking is done via Kinect's magic sensors, the overlaying of color uses the camera, and that just needs light. And move slowly. Very slowly.

I scanned myself sitting on a rotating chair, with a 120-second setting, rotating very, very slowly. For this I used the "Kinect Sensor - stationary" setting. When it's done, it will open the 3D Builder app. That usually complains about invalid geometries that need to be fixed. Click the popup, prepare to wait for quite a while, and then finally the 3D object will be ready for the next step.

Converting to a format usable for Unity

The annoying thing is that although 3D Builder can save the scanned object as a Wavefront Object (obj) file – which is readable by Unity – it will strip any color from it. So we need to take an in-between step. To that effect, don't save the file in OBJ but in PLY format. Then start CloudCompare, and open the PLY file you just created in it. Hit File/Save as and save the file as an FBX file. That gives you a number of options – I usually just use FBX binary.

image

Importing in Unity and showing the colors

When you import this into Unity and drag the scanned object onto the canvas, you will soon notice a few things about me:

  • I’m rather large – like statue-of-Roman-emperor-with-overly-inflated-ego large
  • I seem to hang at a random place at a random angle in the sky, even if position and rotation are 0,0,0
  • I am … rather pale.

Making me look less like a Roman statue requires the special shader. Using my standard folder structure, I added a folder "Shaders" and copied the contents of the UnityVC zip file into it. Net result:

image 

Then in your assets folder, go to the folder where you have imported the scanned model into. In my case that’s App/Models:

image

In there you will find a Material called ColorMaterial:

image

Select that Material, go into its properties in the Inspector over on the right, and select either "Standard (Vertex Color)" or "Standard Specular (Vertex Color)".

image

And boom. There I am, in full color and glory, looking like a zombie.

image

Now, to make me appear in front of the HoloLens view and not seem like some giant, ancient, balding and grey god of… whatever, descending from Heaven, I used the following settings on the scanned Hologram:

image

And then you get a more or less life-sized floating ghost/zombie/Borg-like appearance of me:

image

I had the dubious pleasure of walking around myself, noticing the back part was missing.

Lessons learned

  • First of all – as already mentioned in passing – make sure there is enough light, and prevent shadows. This is harder than it sounds. You will notice scanned Holograms tend to look rather pale when created with insufficient light.
  • The handheld setting is way harder to use than the stationary setting. Consider placing objects on a rotary platform rather than moving around it with the Kinect
  • Move or rotate the object you want to scan slowly (or move the Kinect slowly)
  • You will have to fiddle a lot with settings before you get the result you want
  • Larger objects (like humans) are way easier to scan than small objects
  • Make anything you don't want in the scan as black and non-reflective as possible

Concluding remarks

Unity will complain regularly about an error in the shader in the editor, but it still seems to work fine. Having no clue about shaders and how to write them yet, I tend to ignore it. The resulting project, although containing no code written by me at all, can be found here.

25 January 2017

Manipulating Holograms (move, scale, rotate) by gestures

Intro

This tutorial will show you how to manipulate (move, rotate and scale) Holograms by using gestures. It uses speech commands to change between moving, rotating and scaling. It will build upon my previous blog post, re-using the collision detection. The app we are going to create will work like this (turn on sound to hear the voice commands):

Setting the stage

Once again, we start off with a rather standard setup…
clip_image001[4]
only now we have a cube and a sphere. Neither of their settings is particularly spectacular; this will initially show the sphere a bit to the left, and a rectangular box a bit to the right
clip_image002[9]clip_image003[4]
 
 
 
 
If we actually want these Holograms to do something, we need to code some stuff:
  • A behaviour to select and activate Holograms – TapToSelect
  • A behaviour that acts on speech commands, so the next behaviour knows what to do – this is SpeechCommandExecuter
  • A behaviour that responds to tap-hold-and-move gesture and does the actual moving, rotating and scaling – this is SpatialManipulator
  • Something to wire the whole thing together, a kind of app state manager – AppStateManager and its base class, very originally called BaseAppStateManager.

Selecting a hologram

The fun thing with the new HoloToolkit is that we don't need to do much anymore to listen to or track hand movements and speech. It's all done by the InputManager. We only need to implement the right interfaces to be called with whatever we want to know. So when we want an object to receive a tap, we only need to add a behaviour that implements IInputClickHandler:
public class TapToSelect : MonoBehaviour, IInputClickHandler
{
    public virtual void OnInputClicked(InputEventData eventData)
    {
        if (BaseAppStateManager.IsInitialized)
        {
            // If not already selected - select, otherwise, deselect
            if (BaseAppStateManager.Instance.SelectedGameObject != gameObject)
            {
                BaseAppStateManager.Instance.SelectedGameObject = gameObject;
            }
            else
            {
                BaseAppStateManager.Instance.SelectedGameObject = null;
            }
            var audioSource = GetAudioSource(gameObject);
            if (audioSource != null)
            {
                audioSource.Play();
            }
        }
        else
        {
            Debug.Log("No BaseAppStateManager found or initialized");
        }
    }
}
This basically just hands the object to the App State Manager – or rather its base class, more about that later – and optionally plays a sound, if the (omitted) GetAudioSource method can find an AudioSource in either the object or its parent. In this app it can, and it plays a by now very recognizable 'ping' confirmation sound.
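For reference, a minimal sketch of what that omitted helper might look like – an assumption on my part; the real version is in the downloadable code:
private AudioSource GetAudioSource(GameObject obj)
{
    // Try the object itself first, then fall back to its parent
    var audioSource = obj.GetComponent<AudioSource>();
    if (audioSource == null && obj.transform.parent != null)
    {
        audioSource = obj.transform.parent.gameObject.GetComponent<AudioSource>();
    }
    return audioSource;
}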

Speech command the new way – listen to me, just hear me out*

Using speech commands has very much changed with the new HoloToolkit. There are actually two ways to go about it:
  • Using a SpeechInputSource and implementing an ISpeechHandler 'somewhere'. This is rather straightforward and very much analogous to how the IInputClickHandler works. The disadvantage is that you have to define your keywords twice – both in the SpeechInputSource and in the ISpeechHandler implementation
  • Using a KeywordManager to define your keywords and map them to some object’s method(s).
This sample uses the latter method. It's a bit of an odd workflow to get it working, but once that's clear, it's rather elegant. It's also more testable, as the interpretation of keywords is separated from the execution. We are implementing that execution in the SpeechCommandExecuter. Its public methods are Move, Rotate, Scale, Done, Faster and Slower, which pretty much map to the available speech commands. And if you look in the code, you will see that internally they just call private methods, which in turn try to find the selected object's SpatialManipulator and call methods there.
private void TryChangeMode(ManipulationMode mode)
{
    var manipulator = GetSpatialManipulator();
    if (manipulator == null)
    {
        return;
    }

    if (manipulator.Mode != mode)
    {
        manipulator.Mode = mode;
        TryPlaySound();
    }
}

private void TryChangeSpeed(bool faster)
{
    var manipulator = GetSpatialManipulator();
    if (manipulator == null)
    {
        return;
    }

    if (manipulator.Mode == ManipulationMode.None)
    {
        return;
    }

    if (faster)
    {
        manipulator.Faster();
    }
    else
    {
        manipulator.Slower();

    }
    TryPlaySound();
}

private SpatialManipulator GetSpatialManipulator()
{
    var lastSelectedObject = AppStateManager.Instance.SelectedGameObject;
    if (lastSelectedObject == null)
    {
        Debug.Log("No selected element found");
        return null;
    }
    var manipulator = lastSelectedObject.GetComponent<SpatialManipulator>();
    if (manipulator == null)
    {
        manipulator = lastSelectedObject.GetComponentInChildren<SpatialManipulator>();
    }

    if (manipulator == null)
    {
        Debug.Log("No manipulator component found");
    }
    return manipulator;
}
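The public entry points themselves are parameterless one-liners; a sketch of what they presumably look like – the exact bodies are an assumption, but they match the description above:
// Parameterless public methods for the KeywordManager to bind to (sketch)
public void Move()   { TryChangeMode(ManipulationMode.Move); }
public void Rotate() { TryChangeMode(ManipulationMode.Rotate); }
public void Scale()  { TryChangeMode(ManipulationMode.Scale); }
public void Done()   { TryChangeMode(ManipulationMode.None); }
public void Faster() { TryChangeSpeed(true); }
public void Slower() { TryChangeSpeed(false); }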
So why this odd arrangement? That's because the KeywordManager needs an object with parameterless methods to call on keyword recognition. So, we add this SpeechCommandExecuter and a KeywordManager (from the HoloToolkit) to the Managers object, and then we are going to make this work. The easiest way to get going is:
  • Expand "Keywords and Responses"
  • Change "Size" initially to one
  • Type "move object" into Keyword
  • Click + under "Response"
  • Drag the "Managers" object into the now visible "None" field. This is best explained by an image:
clip_image002[11]
And then you have to select which method of which object you want to map this speech command to. To do this, click the dropdown menu next to "Runtime Only", which will initially say "No Function". From the dropdown, first select the object you want (SpeechCommandExecuter) and then the method you want (Move).
clip_image004
Unfortunately, all objects in the game object are displayed, as are all public methods and properties of every object you select – plus those of its parent classes. You sometimes really have to hunt them down. It's a bit confusing at first, but once you have done it a couple of times you will get the hang of it. It might feel like an odd way of programming if you are used to the formal declarative approach of things in XAML, but that's the way it is.
Then change the size to 6 and add all the other keyword/method combinations. By using this method you only have to drag the Managers object once, as the Unity editor will copy all values of the first entry to the new ones.

Spatial Manipulation

This is the behaviour that does most of the work. And it’s surprisingly simple. The most important (new) part is like this:
using UnityEngine;
using HoloToolkit.Unity.InputModule;

namespace LocalJoost.HoloToolkitExtensions
{
    public class SpatialManipulator : MonoBehaviour
    {
        public float MoveSpeed = 0.1f;

        public float RotateSpeed = 6f;

        public float ScaleSpeed = 0.2f;

        public ManipulationMode Mode { get; set; }


        public void Manipulate(Vector3 manipulationData)
        {
            switch (Mode)
            {
                case ManipulationMode.Move:
                    Move(manipulationData);
                    break;
                case ManipulationMode.Rotate:
                    Rotate(manipulationData);
                    break;
                case ManipulationMode.Scale:
                    Scale(manipulationData);
                    break;
            }
        }

        void Move(Vector3 manipulationData)
        {
            var delta = manipulationData * MoveSpeed;
            if (CollisonDetector.CheckIfCanMoveBy(delta))
            {
                transform.localPosition += delta;
            }
        }

        void Rotate(Vector3 manipulationData)
        {
            transform.RotateAround(transform.position, Camera.main.transform.up, 
                -manipulationData.x * RotateSpeed);
            transform.RotateAround(transform.position, Camera.main.transform.forward, 
                manipulationData.y * RotateSpeed);
            transform.RotateAround(transform.position, Camera.main.transform.right, 
                manipulationData.z * RotateSpeed);
        }

        void Scale(Vector3 manipulationData)
        {
            transform.localScale *= 1.0f - (manipulationData.z * ScaleSpeed);
        }
    }
}
The manipulation mode can be either Move, Rotate, Scale – or None, in which case this behaviour does nothing at all. So, when ‘something’ supplies a Vector3 to the Manipulate method, it will either move, rotate or scale the object.
  • In move mode, when you move your hand, the object will follow the direction. So, if you pull towards you, it will come toward you. Move up, and it will move up. Elementary.
  • Scale is even simpler. Pull toward you, and the object will grow; push away from you, and it will shrink.
  • Rotate is a bit tricky. Push away from you, and the object will rotate around the horizontal axis – that is, an axis running through your view from left to right. Effectively, the top of the object moves away from you and the bottom toward you. Move your hand from left to right, or right to left, and the object will rotate around an axis running from the top to the bottom of your view. Last and most tricky – and least intuitive: move your hand from top to bottom and the object will rotate clockwise over the z axis – that is, the axis 'coming out of your eyes'.
There are two more methods – Faster and Slower, which are called via the SpeechCommandExecuter as you have seen. Their function is not very spectacular: they either multiply the speed value of the currently active manipulation mode by two, or divide it by two. So by saying "go faster" you make the speed at which your Hologram moves, rotates or scales twice as high, depending on what you are doing; "go slower" does the exact opposite. A sketch of what they might look like follows below.
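A minimal sketch of what Faster and Slower could look like, based purely on that description – the AdjustSpeed helper name is my own, not necessarily what the actual code uses:
public void Faster()
{
    AdjustSpeed(2.0f);
}

public void Slower()
{
    AdjustSpeed(0.5f);
}

// Hypothetical helper: scales the speed of whatever manipulation mode is active
private void AdjustSpeed(float factor)
{
    switch (Mode)
    {
        case ManipulationMode.Move:
            MoveSpeed *= factor;
            break;
        case ManipulationMode.Rotate:
            RotateSpeed *= factor;
            break;
        case ManipulationMode.Scale:
            ScaleSpeed *= factor;
            break;
    }
}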
It’s not quite rocket science as you can see. Notice the re-use of the collision detector I introduced in my previous blog post.

App state – the missing piece

The only thing missing now is how it's all stitched together. That's actually done using two classes – a BaseAppStateManager and a descendant, AppStateManager.
The BaseAppStateManager doesn't do much special. Its main feats are having a property for a selected object and notifying the rest of the world when it gets one. That's not even used in this sample app, but I consider it useful for other purposes, so I left it in. It also calls a virtual method when the selected object changes.
using System;
using HoloToolkit.Unity;
using UnityEngine;

namespace LocalJoost.HoloToolkitExtensions
{
    public class BaseAppStateManager : Singleton<BaseAppStateManager>
    {
        private GameObject _selectedGameObject;

        public GameObject SelectedGameObject
        {
            get { return _selectedGameObject; }
            set
            {
                if (_selectedGameObject != value)
                {
                    ResetDeselectedObject(_selectedGameObject);
                    _selectedGameObject = value;
                    if (SelectedObjectChanged != null)
                    {
                        SelectedObjectChanged(this, 
                        new GameObjectEventArgs(_selectedGameObject));
                    }
                }
            }
        }

        protected virtual void ResetDeselectedObject(GameObject oldGameObject)
        {
        }

        public event EventHandler<GameObjectEventArgs> SelectedObjectChanged;
    }
}
There is also a class GameObjectEventArgs, but that's too trivial to show here. Note, by the way, that I stick to C# 4.0 concepts, as that is what Unity is currently limited to.
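For completeness, a sketch of what that trivial class presumably looks like – an assumption, but the constructor call above shows it only needs to carry the GameObject:
using System;
using UnityEngine;

namespace LocalJoost.HoloToolkitExtensions
{
    public class GameObjectEventArgs : EventArgs
    {
        public GameObject GameObject { get; private set; }

        public GameObjectEventArgs(GameObject gameObject)
        {
            GameObject = gameObject;
        }
    }
}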
The actual AppStateManager glues the whole thing together:
public class AppStateManager : BaseAppStateManager, IManipulationHandler
{
    void Start()
    {
        InputManager.Instance.AddGlobalListener(gameObject);
    }

    public static new AppStateManager Instance
    {
        get { return (AppStateManager)BaseAppStateManager.Instance; }
    }

    protected override void ResetDeselectedObject(GameObject oldGameObject)
    {
        var manipulator = GetManipulator(oldGameObject);
        if (manipulator != null)
        {
            manipulator.Mode = ManipulationMode.None;
        }
    }

    public void OnManipulationUpdated(ManipulationEventData eventData)
    {
        if (SelectedGameObject != null)
        {
            var manipulator = GetManipulator(SelectedGameObject);
            if (manipulator != null)
            {
                manipulator.Manipulate(eventData.CumulativeDelta);
            }
        }
    }

    protected SpatialManipulator GetManipulator(GameObject obj)
    {
        if (obj == null)
        {
            return null;
        }
        var manipulator = obj.GetComponent<SpatialManipulator>() ??
            obj.GetComponentInChildren<SpatialManipulator>();
        return manipulator;
    }
}
It implements IManipulationHandler, which means our almighty InputManager will call its OnManipulationUpdated whenever it detects a hand doing a tap-and-hold gesture (thumb and index finger pressed together) while moving. It will pass that data to the SpatialManipulator of the selected object – that is, if the selected object has one. It also makes sure the currently active object gets deactivated once you select a new one. Note that IManipulationHandler requires you to implement three more methods, omitted here, as they are not used in this app.
There is an important line in the Start method that registers this object as a global input handler, so its OnManipulationUpdated always gets called. Normally this only gets called when the object is selected – that is, when your gaze strikes it. That makes it very hard to move, as your gaze will most likely fall off the object as you move it. This approach has the advantage that you can manipulate objects even when you are not exactly looking at them.

Wiring it all together in Unity

image

This is actually really simple now that we have all the components. Just go to the Cube in your hierarchy and add these three components:

And don't forget to drag the InputManager and the Cube itself onto the Stabilizer and the Collision Detector fields, as I explained in my previous blog post. Repeat for the Sphere object. Build your app, and you should get the result I show in the video.

Concluding remarks

It's fairly easy to wire together something to move stuff around using gestures. There are a few limitations as to the intuitiveness of the rotation gesture, and you might also notice that while moving the object around uses collision detection, rotating and scaling do not. I leave those as an 'exercise for the reader' ;). But I do hope this takes you forward as a HoloLens developer.
Full code can be found here


*bonus points if you actually immediately recognized this phrase

18 January 2017

Dragging holograms with gaze and tapping them in place on a surface

Intro

Simply put – what I was trying to create is the effect you get in the – by now good old – Holograms app: when you pull a hologram out of a menu, it 'sticks to your gaze' and follows it. You can air tap and then it stays hanging in the air where you left it, but you can also put it on a floor, on a table, or next to a wall. You can't push it through a surface. That is, most of the time ;). So, like this:

In the video, you can see it follows the gaze cursor floating through the air till it hits a wall to the left and then stops, then goes down till it hits the bed and then stops, then up again till I finally place it on the floor.

A new year, a new toolkit

As happens often at the bleeding edge of technology, things tend to change pretty fast. This is also the case in HoloLens country. I have taken the plunge to Unity 5.5 and the new HoloToolkit, which has a few breaking changes. Things have gotten way simpler since the previous iteration. Also, I would like to point out that for this tutorial I am using the latest patch release, which at the time of this writing is 5.5.0p3, released December 22, 2016.

Setting up the initial project

This is best illustrated by a picture. If you have set up the project, we basically only need this. Both Managers and HologramCollection are simply empty game objects meant to group stuff together; they don't have any other specific function here. Drag and drop the four blue prefabs in the indicated places, then set some properties for the Cube:

imageimage

The Cube is the thing that will be moved. Now it’s time for ‘some’ code.

The main actors

There are two scripts that play the leading role, with a few supporting roles.

  • MoveByGaze
  • InitialPlaceByTap

The first one makes an object move, the second one actually ends that move. Apropos, the actual moving is done by our old friend iTween, whose usefulness and application was already described in part 5 of the AMS HoloATC series. So, you will need to include it in the project to prevent all kinds of nasty errors. Anyway, let's get to the star of the show, MoveByGaze.

Moving with gaze

It starts like this:

using UnityEngine;
using HoloToolkit.Unity.InputModule;
using HoloToolkit.Unity.SpatialMapping;

namespace LocalJoost.HoloToolkitExtensions
{
    public class MoveByGaze : MonoBehaviour
    {
        public float MaxDistance = 2f;
        public bool IsActive = true;
        public float DistanceTrigger = 0.2f;
        public BaseRayStabilizer Stabilizer = null;
        public BaseSpatialMappingCollisionDetector CollisonDetector;

        private float _startTime;
        private float _delay = 0.5f;
        private bool _isJustEnabled;
        private Vector3 _lastMoveToLocation;
        private bool _isBusy;

        private SpatialMappingManager MappingManager
        {
            get { return SpatialMappingManager.Instance; }
        }

        void OnEnable()
        {
            _isJustEnabled = true;
        }

        void Start()
        {
            _startTime = Time.time + _delay;
            _isJustEnabled = true;
            if (CollisonDetector == null)
            {
                CollisonDetector = 
                  gameObject.AddComponent<DefaultMappingCollisionDetector>();
            }
        }
    }
}

Up above are the settings:

  • MaxDistance is the maximum distance from your head at which the behaviour will try to place the object on a surface. Further than that, and it will just float in the air.
  • IsActive determines whether the behaviour is active (duh)
  • DistanceTrigger is the distance your gaze has to be from the object you are moving before it actually starts to move. It kind of trails your gaze. This prevents the object from moving in a very nervous way.
  • Stabilizer is the stabilizer made, used and maintained by the InputManager. You will have to drag the InputManager from your scene onto this field to use the stabilizer. It's not mandatory, but highly recommended.
  • CollisionDetector is a class we will see later – it basically makes sure the object that you are dragging is not pushed through any surfaces. You need to add a collision detector to the game object that you are dragging along – or maybe to a game object that is part of it. That collision detector then needs to be dragged onto this field of the MoveByGaze behaviour. This is not mandatory. If you don't add one, the object you attach the MoveByGaze to will simply follow your gaze and move right through any object. That's the work of the DefaultMappingCollisionDetector, which is essentially a null pattern implementation – see the sketch after this list.
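
A sketch of what that base class and its null-pattern default could look like – an assumption based on the two methods discussed further down, not the verbatim code from the repository (in the actual project these would live in separate files):

using UnityEngine;

namespace LocalJoost.HoloToolkitExtensions
{
    public abstract class BaseSpatialMappingCollisionDetector : MonoBehaviour
    {
        public abstract bool CheckIfCanMoveBy(Vector3 delta);
        public abstract Vector3 GetMaxDelta(Vector3 delta);
    }

    // 'Null object' implementation: never blocks anything, so the dragged
    // object simply follows the gaze unhindered
    public class DefaultMappingCollisionDetector : BaseSpatialMappingCollisionDetector
    {
        public override bool CheckIfCanMoveBy(Vector3 delta)
        {
            return true;
        }

        public override Vector3 GetMaxDelta(Vector3 delta)
        {
            return delta;
        }
    }
}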

Anyway, in the Update method all the work is done:

void Update()
{
    if (!IsActive || _isBusy || _startTime > Time.time)
        return;
    _isBusy = true;

    var newPos = GetPostionInLookingDirection();
    if ((newPos - _lastMoveToLocation).magnitude > DistanceTrigger || _isJustEnabled)
    {
        _isJustEnabled = false;
        var maxDelta = CollisonDetector.GetMaxDelta(newPos - transform.position);
        if (maxDelta != Vector3.zero)
        {
            newPos = transform.position + maxDelta;
            iTween.MoveTo(gameObject,
                iTween.Hash("position", newPos, "time", 2.0f * maxDelta.magnitude,
                    "easetype", iTween.EaseType.easeInOutSine, "islocal", false,
                    "oncomplete", "MovingDone", "oncompletetarget", gameObject));
            _lastMoveToLocation = newPos;
        }
        else
        {
            _isBusy = false;
        }
    }
    else
    {
        _isBusy = false;
    }
}

private void MovingDone()
{
    _isBusy = false;
}

We only do anything at all if the behaviour is active, not busy, and the first half second is over. And the first thing we do is tell the world we are busy indeed. This method, like all Updates, is called 60 times a second, and we want to keep things a bit controlled here. Race conditions are annoying.

Then we get a position in the direction the user is looking, and if that exceeds the distance trigger – or this is the first time we are getting here – we start off finding how far ahead along this gaze we can place the actual object, using the CollisionDetector. If that is possible – that is, if the CollisionDetector does not find any obstacles – we actually move the object using iTween. It is important to note that whenever the move is not possible, _isBusy immediately gets set to false. Also note that the smaller the distance, the faster the move. This is to make sure the final tweaks of setting the object in the right place don't take a long time. Otherwise, _isBusy is only reset after a successful move.

Then the final pieces of this behaviour:

private Vector3 GetPostionInLookingDirection()
{
    RaycastHit hitInfo;

    var headReady = Stabilizer != null
        ? Stabilizer.StableRay
        : new Ray(Camera.main.transform.position, Camera.main.transform.forward);

    if (MappingManager != null &&
        Physics.Raycast(headReady, out hitInfo, MaxDistance, MappingManager.LayerMask))
    {
        return hitInfo.point;
    }

    return CalculatePositionDeadAhead(MaxDistance);
}

private Vector3 CalculatePositionDeadAhead(float distance)
{
    return Stabilizer != null
        ? Stabilizer.StableRay.origin +
          Stabilizer.StableRay.direction.normalized * distance
        : Camera.main.transform.position +
          Camera.main.transform.forward.normalized * distance;
}

GetPostionInLookingDirection first tries to get the direction in which you are looking. It tries to use the Stabilizer's StableRay for that. The Stabilizer is a component of the InputManager that stabilizes your view – the cursor uses it as well. This prevents the cursor from wobbling too much when you don't keep your head perfectly still (which most people don't – this includes me). The stabilizer takes an average over 60 samples of movement, and that makes for a much less nervous-looking experience. If you don't have a stabilizer defined, it just takes your actual looking direction – from the camera's position along the camera's forward direction.

Then it tries to see if the resulting ray hits a wall or a floor – but no further than MaxDistance away. If it sees a hit, it returns that point; if it does not, it gives a point in the air, MaxDistance away, along an invisible ray coming out of your eyes. That's what CalculatePositionDeadAhead does – again trying to use the Stabilizer first to find the direction.

Detect collisions

Okay, so what is this famous collision detector that prevents stuff from being pushed through walls and floors, using the spatial perception that makes the HoloLens such a unique device? It’s actually very simple, although it took me a while to actually get it this simple.

using UnityEngine;

namespace LocalJoost.HoloToolkitExtensions
{
    public class SpatialMappingCollisionDetector : BaseSpatialMappingCollisionDetector
    {
        public float MinDistance = 0.0f;

        private Rigidbody _rigidbody;

        void Start()
        {
            _rigidbody = GetComponent<Rigidbody>() ?? gameObject.AddComponent<Rigidbody>();
            _rigidbody.isKinematic = true;
            _rigidbody.useGravity = false;
        }

        public override bool CheckIfCanMoveBy(Vector3 delta)
        {
            RaycastHit hitInfo;
            // Sweeptest wisdom from 
            //http://answers.unity3d.com/questions/499013/cubecasting.html
            return !_rigidbody.SweepTest(delta, out hitInfo, delta.magnitude);
        }

        public override Vector3 GetMaxDelta(Vector3 delta)
        {
            RaycastHit hitInfo;
            if(!_rigidbody.SweepTest(delta, out hitInfo, delta.magnitude))
            {
                return KeepDistance(delta, hitInfo.point);
            }

            delta *= (hitInfo.distance / delta.magnitude);
            for (var i = 0; i <= 9; i += 3)
            {
                var dTest = delta / (i + 1);
                if (!_rigidbody.SweepTest(dTest, out hitInfo, dTest.magnitude))
                {
                    return KeepDistance(dTest, hitInfo.point);
                }
            }
            return Vector3.zero;
        }

        private  Vector3 KeepDistance(Vector3 delta, Vector3 hitPoint)
        {
            var distanceVector = hitPoint - transform.position;
            return delta - (distanceVector.normalized * MinDistance);
        }
    }
}

This behaviour first tries to find a Rigidbody, and failing that, adds one. We need this to check for the presence of anything 'in the way'. But – this is important – we set 'isKinematic' to true and 'useGravity' to false, or else our object will come under control of the Unity physics engine and drop to the floor. In this case, we want to control the movement of the object ourselves.

So, this class has two public methods (its abstract base class demands that). One, CheckIfCanMoveBy (which we don't use now), just says whether you can move your object in the intended direction over the intended distance without hitting anything. The other essentially does the same, but if it finds something in the way, it also tries to find a distance over which you can move in the desired direction. For this, we use the SweepTest method of Rigidbody. Essentially you give it a vector and a distance along that vector, and it has an out parameter that gives you info about a hit, should any occur. If a hit does occur, it tries again at 1/4th, 1/7th and 1/10th of the initially found distance. Failing everything, it returns a zero vector. By using this rough approach, an object moves quickly in a few steps until it can move no more.

And then it also moves the object back over a distance you can set from the editor. This keeps the object just a little above the floor or away from the wall, should that be desired. That's what KeepDistance is for.

The whole point of having a base class BaseSpatialMappingCollisionDetector, by the way, is a) enabling a null pattern implementation, as provided by DefaultMappingCollisionDetector, and b) making it possible to write different collision detectors for different needs. A bit of architectural consideration within the sometimes-bewildering universe of Unity development.

Making it stop – InitialPlaceByTap

Making the MoveByGaze stop is very simple – set the IsActive field to false. Now we only need something to actually make that happen. With the new HoloToolkit, this is actually very very simple:

using UnityEngine;
using HoloToolkit.Unity.InputModule;

namespace LocalJoost.HoloToolkitExtensions
{
    public class InitialPlaceByTap : MonoBehaviour, IInputClickHandler
    {
        protected AudioSource Sound;
        protected MoveByGaze GazeMover;

        void Start()
        {
            Sound = GetComponent<AudioSource>();
            GazeMover = GetComponent<MoveByGaze>();

            InputManager.Instance.PushFallbackInputHandler(gameObject);
        }

        public void OnInputClicked(InputEventData eventData)
        {
            if (!GazeMover.IsActive)
            {
                return;
            }

            if (Sound != null)
            {
                Sound.Play();
            }

            GazeMover.IsActive = false;
        }
    }
}

By implementing IInputClickHandler, the InputManager will send an event to this object when you air tap while it is selected by gaze. But by pushing it as a fallback handler, it will also get this event when it's not selected. The event processing is pretty simple – if the GazeMover in this object is active, it's de-activated. Also, if an AudioSource is detected, its sound is played. I very much recommend this kind of audio feedback.

Wiring it all together

On your cube, you drag the MoveByGaze, SpatialMappingCollisionDetector, and InitialPlaceByTap scripts. Then you drag the cube itself again on the CollisionDetector field of MoveByGaze, and the InputManager on the Stabilizer field. Unity itself will select the right component.

image

So, in this case I could also have used GetComponent<SpatialMappingCollisionDetector> instead of a field you need to drag something onto. But this way is more flexible – in the app I did not want to use the whole object's collider, but only that of a child object. Note I have set the MinDistance of the SpatialMappingCollisionDetector to 1 cm – it will keep an extra centimeter of distance from the wall or the floor.

Concluding remarks

So this is how you can more or less replicate part of the behavior of the Holograms App, by moving around objects with your gaze and placing them on surfaces using air tap. The unique capabilities of the HoloLens allow us to place objects next to or on top of physical objects, and the new HoloToolkit makes using those capabilities pretty easy.

Full code, as per my MVP ‘trademark’, can be found here

28 December 2016

Unity 5.5/HoloLens: AudioPluginMsHRTF.dll has failed the AppContainerCheck check

All right, so I was finally past a couple of deadlines with paid HoloLens apps and was ready to take the plunge to update my app AMS HoloATC to Unity 5.5 and the newest HoloToolkit. After wrestling my way past some interesting breaking changes (more about that later), I was finally at the point where everything worked again as it did before – and started adding new features. Today was the day I thought it was ready enough to justify a new submission to the Store. I had tested everything thoroughly, so what could possibly go wrong?

image


Why, if it ain't my dear old friend the WACK. The 'binary analyzer' has found a file AudioPluginMsHRTF.dll that does not pass the AppContainerCheck, and the WACK gives me a whole bunch of suggestions that don't mean anything to me. Apparently I have to add a couple of options when I link the app, but I have not seen a linker since I wrote my last C program, which was – I think – before 2000. Now I know the WACK has some issues, so I tried submitting it anyway – but no dice. The Windows Store spat it right back at me, with the same error. So now what?

Fortunately I had dealt with Spatial Sound before, and recognized the "Ms HRTF Spatializer" as something I had just selected in Unity. After numerous (time consuming) failures trying to juggle the WACK, I took a desperate measure. I went back to a machine that had a previous installation of Unity on it – 5.4 – and checked AudioPluginMsHRTF.dll there. I found it in
C:\Program Files\Unity HoloLens 5.4.0f3-HTP\Editor\Data\VR\Unity\WindowsStoreApps. It reports its size as 2931200 bytes.
For Unity 5.5, in
C:\Program Files\Unity550f3\Editor\Data\UnityExtensions\Unity\VR\WindowsStoreApps\uap_x86, there's a file of only 13312 bytes – 13 KB. Yet, when you deploy an app with this 13 KB dll to a HoloLens, Spatial Sound works. You can clearly hear the difference – simple stereo versus spatial sound.

So I went and looked around to see where this dll is taken from when the Visual Studio project is built, and found it comes from the Plugins directory in the project directory of the generated solution. In my case that's C:\Projects\AMS_HoloATC\AMS HoloATC\App\AMS HoloATC\Plugins\X86. I replaced the 13 KB AudioPluginMsHRTF.dll with the one I had taken from the 5.4 installation. That fixed the WACK. Unfortunately, it also made the app crash whenever I used Spatial Sound. Nice try, no cigar. But it confirmed my suspicion that the 5.5 version of this dll is indeed the thing that makes the WACK protest.

image

So what I did, in the end, was just get rid of 'real' Spatial Sound. I went to Edit/Project Settings in Unity and set "Spatializer Plugin" to "None", disabling the Ms HRTF Spatializer. I re-created the Visual Studio solution from complete scratch, and the references to AudioPluginMsHRTF.dll and AudioPluginMsHRTF.pdb were gone. And so was the WACK error.

But of course, now my app has no real spatial sound anymore – it's reduced to 'normal' Unity stereo panning. But a running app is better than one that does not run. I have filed a bug at Unity with a repro and contacted some people within Microsoft. I will keep you posted about any progress. In the meantime, this should keep you going forward.

For those who want to try out themselves, I have made this zipped up solution including all the files I built, and you can easily check the resulting app does not pass the WACK.

Update 30-dec-2016 – the issue has been confirmed as a bug by Unity and will be addressed.

Update 31-jan-2017 – the issue has been fixed – in Unity 5.5.1p2 or higher.