18 January 2017

Manipulating Holograms (move, scale, rotate) by gestures

Intro

This tutorial will show you how to manipulate (move, rotate and scale) Holograms by using gestures. It uses speech commands to change between moving, rotating and scaling. It will build upon my previous blog post, re-using the collision detection. The app we are going to create will work like this (turn on sound to hear the voice commands):

Setting the stage

Once again, we start off with a rather standard setup…


only now we have a cube and a sphere. Neither of their settings is particularly spectacular: the sphere initially appears a bit to the left, and a rectangular box a bit to the right.


If we actually want these Holograms to do something, we need to write some code:

  • A behaviour to select and activate Holograms – TapToSelect
  • A behaviour that acts on speech commands, so the next behaviour knows what to do – this is SpeechCommandExecuter
  • A behaviour that responds to tap-hold-and-move gesture and does the actual moving, rotating and scaling – this is SpatialManipulator
  • Something to wire the whole thing together, a kind of app state manager – AppStateManager and its base class, very originally called BaseAppStateManager.

Selecting a hologram

The fun thing with the new HoloToolkit is that we don’t need to do much anymore to listen to or track hand movements and speech. It’s all done by the InputManager. We only need to implement the right interfaces to be called with whatever we want to know. So when we want an object to receive a tap, we only need to add a behaviour that implements IInputClickHandler:

using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class TapToSelect : MonoBehaviour, IInputClickHandler
{
    public virtual void OnInputClicked(InputEventData eventData)
    {
        if (BaseAppStateManager.IsInitialized)
        {
            // If not already selected - select, otherwise, deselect
            if (BaseAppStateManager.Instance.SelectedGameObject != gameObject)
            {
                BaseAppStateManager.Instance.SelectedGameObject = gameObject;
            }
            else
            {
                BaseAppStateManager.Instance.SelectedGameObject = null;
            }
            var audioSource = GetAudioSource(gameObject);
            if (audioSource != null)
            {
                audioSource.Play();
            }
        }
        else
        {
            Debug.Log("No BaseAppStateManager found or initialized");
        }
    }
}

This basically just hands the object to the App State Manager – or rather to its base class, more about that later – and optionally plays a sound, if the omitted GetAudioSource method can find an AudioSource in either the object or its parent. In the app it can, so it plays a by now very recognizable ‘ping’ confirmation sound.
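
The GetAudioSource helper is omitted from the listing above; a minimal sketch of what it presumably does – look for an AudioSource on the object itself first, then on its parent – could be:

private AudioSource GetAudioSource(GameObject obj)
{
    // Check the tapped object itself first...
    var audioSource = obj.GetComponent<AudioSource>();
    if (audioSource == null && obj.transform.parent != null)
    {
        // ...and fall back to its parent
        audioSource = obj.transform.parent.gameObject.GetComponent<AudioSource>();
    }
    return audioSource;
}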

Speech command the new way – listen to me, just hear me out*

Using speech commands has very much changed with the new HoloToolkit. There are actually two ways to go about it:

  • Using a SpeechInputSource and implementing an ISpeechHandler ‘somewhere’. This is rather straightforward and is very much an analogue of how the IInputClickHandler works. The disadvantage is that you have to define your keywords twice – both in the SpeechInputSource and in the ISpeechHandler implementation.
  • Using a KeywordManager to define your keywords and map them to some object’s method(s).

This sample uses the latter method. It’s a bit of an odd workflow to get it working, but once that’s clear, it’s rather elegant. It’s also more testable, as the interpretation of keywords is separated from their execution. We implement that execution in the SpeechCommandExecuter. Its public methods are Move, Rotate, Scale, Done, Faster and Slower, which pretty much map to the available speech commands. And if you look in the code, you will see that internally they just call private methods, which in turn try to find the selected object’s SpatialManipulator and call methods on it.

private void TryChangeMode(ManipulationMode mode)
{
    var manipulator = GetSpatialManipulator();
    if (manipulator == null)
    {
        return;
    }

    if (manipulator.Mode != mode)
    {
        manipulator.Mode = mode;
        TryPlaySound();
    }
}

private void TryChangeSpeed(bool faster)
{
    var manipulator = GetSpatialManipulator();
    if (manipulator == null)
    {
        return;
    }

    if (manipulator.Mode == ManipulationMode.None)
    {
        return;
    }

    if (faster)
    {
        manipulator.Faster();
    }
    else
    {
        manipulator.Slower();
    }
    TryPlaySound();
}

private SpatialManipulator GetSpatialManipulator()
{
    var lastSelectedObject = AppStateManager.Instance.SelectedGameObject;
    if (lastSelectedObject == null)
    {
        Debug.Log("No selected element found");
        return null;
    }
    var manipulator = lastSelectedObject.GetComponent<SpatialManipulator>();
    if (manipulator == null)
    {
        manipulator = lastSelectedObject.GetComponentInChildren<SpatialManipulator>();
    }

    if (manipulator == null)
    {
        Debug.Log("No manipulator component found");
    }
    return manipulator;
}
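
The public methods themselves are not shown above, but going by the description they are presumably just thin, parameterless wrappers around these private helpers – something like:

// Presumed public entry points, wired to the KeywordManager (sketch)
public void Move()   { TryChangeMode(ManipulationMode.Move); }
public void Rotate() { TryChangeMode(ManipulationMode.Rotate); }
public void Scale()  { TryChangeMode(ManipulationMode.Scale); }
public void Done()   { TryChangeMode(ManipulationMode.None); }
public void Faster() { TryChangeSpeed(true); }
public void Slower() { TryChangeSpeed(false); }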

So why this odd arrangement? That’s because the KeywordManager needs an object with parameterless methods to call on keyword recognition. So we add this SpeechCommandExecuter and a KeywordManager (from the HoloToolkit) to the Managers object, and then we are going to wire it up. The easiest way to get going is:

  • Expand “Keywords and Responses”,
  • Set “Size” initially to one,
  • Type “move object” into the keyword field,
  • Click + under “Response”,
  • Drag the “Managers” object onto the now visible “None” field. This is best explained by an image:


Then you have to select which method of which object you want to map this speech command to. To do this, click the dropdown next to “Runtime Only”, which will initially say “No Function”. From the dropdown, first select the object you want (SpeechCommandExecuter) and then the method you want (Move).


Unfortunately, all components on the game object are displayed, as are all public methods and properties of every object you select – plus those of its parent classes. You sometimes really have to hunt them down. It’s a bit confusing at first, but once you have done it a couple of times you will get the hang of it. It might feel like an odd way of programming if you are used to the formal declarative approach of XAML, but that’s the way it is.

Then change the size to 6 and add all the other keyword/method combinations. By using this method you only have to drag the Managers object once, as the Unity editor copies all values of the first entry to the new ones.
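
The exact keywords are in the sample project; going by the method names, the mapping is presumably along these lines (only “move object”, “go faster” and “go slower” are mentioned explicitly in this post, the rest is an educated guess):

  • “move object” – SpeechCommandExecuter.Move
  • “rotate object” – SpeechCommandExecuter.Rotate
  • “scale object” – SpeechCommandExecuter.Scale
  • “done” – SpeechCommandExecuter.Done
  • “go faster” – SpeechCommandExecuter.Faster
  • “go slower” – SpeechCommandExecuter.Slower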

Spatial Manipulation

This is the behaviour that does most of the work, and it’s surprisingly simple. The most important (new) part looks like this:

using UnityEngine;
using HoloToolkit.Unity.InputModule;

namespace LocalJoost.HoloToolkitExtensions
{
    public class SpatialManipulator : MonoBehaviour
    {
        public float MoveSpeed = 0.1f;

        public float RotateSpeed = 6f;

        public float ScaleSpeed = 0.2f;

        public ManipulationMode Mode { get; set; }


        public void Manipulate(Vector3 manipulationData)
        {
            switch (Mode)
            {
                case ManipulationMode.Move:
                    Move(manipulationData);
                    break;
                case ManipulationMode.Rotate:
                    Rotate(manipulationData);
                    break;
                case ManipulationMode.Scale:
                    Scale(manipulationData);
                    break;
            }
        }

        void Move(Vector3 manipulationData)
        {
            var delta = manipulationData * MoveSpeed;
            if (CollisonDetector.CheckIfCanMoveBy(delta))
            {
                transform.localPosition += delta;
            }
        }

        void Rotate(Vector3 manipulationData)
        {
            transform.RotateAround(transform.position, Camera.main.transform.up, 
                -manipulationData.x * RotateSpeed);
            transform.RotateAround(transform.position, Camera.main.transform.forward, 
                manipulationData.y * RotateSpeed);
            transform.RotateAround(transform.position, Camera.main.transform.right, 
                manipulationData.z * RotateSpeed);
        }

        void Scale(Vector3 manipulationData)
        {
            transform.localScale *= 1.0f - (manipulationData.z * ScaleSpeed);
        }
    }
}

The manipulation mode can be either Move, Rotate, Scale – or None, in which case this behaviour does nothing at all. So, when ‘something’ supplies a Vector3 to the Manipulate method, it will either move, rotate or scale the object.
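
The ManipulationMode enum itself is not shown in this post; it is presumably nothing more than:

namespace LocalJoost.HoloToolkitExtensions
{
    // The four states a SpatialManipulator can be in
    public enum ManipulationMode
    {
        None,
        Move,
        Rotate,
        Scale
    }
}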

  • In move mode, when you move your hand, the object will follow the direction. So, if you pull it towards you, it will come toward you. Move up, and it will move up. Elementary.
  • Scale is even simpler. Pull toward you and the object will grow; push away from you and it will shrink.
  • Rotate is a bit tricky. Push away from you, and the object will rotate around the horizontal axis – that is, an axis running through your view from left to right. Effectively, the top of the object will move away from you and the bottom toward you. Move your hand from left to right, or right to left, and the object will rotate around an axis running from the top to the bottom of your view. Last and most tricky – and least intuitive: move your hand from top to bottom and the object will rotate clockwise over the z axis – that is, the axis ‘coming out of your eyes’.

There are two more methods – Faster and Slower, which are called via the SpeechCommandExecuter as you have seen. Their function is not very spectacular: they either multiply the speed value of the currently active manipulation mode by two, or divide it by two. So by saying “go faster” you make the speed at which your Hologram moves, rotates or scales twice as fast, depending on what you are doing. “Go slower” does the exact opposite.
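
Faster and Slower are not included in the listing above; given that description, a sketch of what they presumably look like (ApplySpeedFactor is a hypothetical helper name):

public void Faster()
{
    ApplySpeedFactor(2.0f);
}

public void Slower()
{
    ApplySpeedFactor(0.5f);
}

// Hypothetical helper: scales the speed value of the currently active mode
private void ApplySpeedFactor(float factor)
{
    switch (Mode)
    {
        case ManipulationMode.Move:
            MoveSpeed *= factor;
            break;
        case ManipulationMode.Rotate:
            RotateSpeed *= factor;
            break;
        case ManipulationMode.Scale:
            ScaleSpeed *= factor;
            break;
    }
}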


It’s not quite rocket science as you can see. Notice the re-use of the collision detector I introduced in my previous blog post.

App state – the missing piece

The only thing missing now is how it’s all stitched together. That’s actually done using two classes – a BaseAppStateManager and a descendant, AppStateManager.

The BaseAppStateManager doesn’t do much special. Its main feats are holding a property for the selected object and notifying the rest of the world when a new one is set. That event is not even used in this sample app, but I consider it useful for other purposes, so I left it in. It also calls a virtual method when the selected object changes.

using System;
using HoloToolkit.Unity;
using UnityEngine;

namespace LocalJoost.HoloToolkitExtensions
{
    public class BaseAppStateManager : Singleton<BaseAppStateManager>
    {
        private GameObject _selectedGameObject;

        public GameObject SelectedGameObject
        {
            get { return _selectedGameObject; }
            set
            {
                if (_selectedGameObject != value)
                {
                    ResetDeselectedObject(_selectedGameObject);
                    _selectedGameObject = value;
                    if (SelectedObjectChanged != null)
                    {
                        SelectedObjectChanged(this, 
                        new GameObjectEventArgs(_selectedGameObject));
                    }
                }
            }
        }

        protected virtual void ResetDeselectedObject(GameObject oldGameObject)
        {
        }

        public event EventHandler<GameObjectEventArgs> SelectedObjectChanged;
    }
}

There is also a class GameObjectEventArgs, but that’s almost too trivial to show here. Note, by the way, that I stick to C# 4.0 concepts, as that is what Unity is currently limited to.
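
For completeness, a sketch of what GameObjectEventArgs presumably looks like:

using System;
using UnityEngine;

namespace LocalJoost.HoloToolkitExtensions
{
    public class GameObjectEventArgs : EventArgs
    {
        public GameObjectEventArgs(GameObject gameObject)
        {
            GameObject = gameObject;
        }

        // The newly selected object (null when deselected)
        public GameObject GameObject { get; private set; }
    }
}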

The actual AppStateManager glues the whole thing together:

using UnityEngine;
using HoloToolkit.Unity.InputModule;

public class AppStateManager : BaseAppStateManager, IManipulationHandler
{
    void Start()
    {
        InputManager.Instance.AddGlobalListener(gameObject);
    }

    public static new AppStateManager Instance
    {
        get { return (AppStateManager)BaseAppStateManager.Instance; }
    }

    protected override void ResetDeselectedObject(GameObject oldGameObject)
    {
        var manipulator = GetManipulator(oldGameObject);
        if (manipulator != null)
        {
            manipulator.Mode = ManipulationMode.None;
        }
    }

    public void OnManipulationUpdated(ManipulationEventData eventData)
    {
        if (SelectedGameObject != null)
        {
            var manipulator = GetManipulator(SelectedGameObject);
            if (manipulator != null)
            {
                manipulator.Manipulate(eventData.CumulativeDelta);
            }
        }
    }

    protected SpatialManipulator GetManipulator(GameObject obj)
    {
        if (obj == null)
        {
            return null;
        }
        var manipulator = obj.GetComponent<SpatialManipulator>() ??
            obj.GetComponentInChildren<SpatialManipulator>();
        return manipulator;
    }
}

It implements IManipulationHandler, which means the almighty InputManager will call its OnManipulationUpdated whenever it detects a hand moving while doing a tap-and-hold gesture (thumb and index finger pressed together). It will pass that data to the SpatialManipulator in the selected object – that is, if the selected object has one. It also makes sure the currently active object gets deactivated once you select a new one. Note that IManipulationHandler requires you to implement three more methods, omitted here, as they are not used in this app.
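
For reference, the omitted members are the other three gesture callbacks of the interface – if I recall the HoloToolkit interface correctly, they can simply be left empty here:

public void OnManipulationStarted(ManipulationEventData eventData)
{
    // Not used in this app
}

public void OnManipulationCompleted(ManipulationEventData eventData)
{
    // Not used in this app
}

public void OnManipulationCanceled(ManipulationEventData eventData)
{
    // Not used in this app
}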

There is an important line in the Start method that registers this object as a global input handler, so its OnManipulationUpdated always gets called. Normally, this only gets called when the object is selected – that is, when your gaze strikes it. That would make it very hard to move, as your gaze will most likely fall off the object as you move it. This approach has the advantage that you can manipulate objects even when you are not exactly looking at them.

Wiring it all together in Unity

This is actually really simple now that we have all the components. Just go to the Cube in your hierarchy and add these three components:

And don’t forget to drag the InputManager and the Cube itself onto the Stabilizer and Collision Detector fields, as I explained in my previous blog post. Repeat for the Sphere object. Build your app, and you should get the result I show in the video.

Concluding remarks

It’s fairly easy to wire together something that moves stuff around using gestures. There are a few limitations to the intuitiveness of the rotation gesture, and you might also notice that while moving the object around uses collision detection, rotating and scaling do not. I leave those as ‘exercises for the reader’ ;). But I do hope this takes you forward as a HoloLens developer.

Full code can be found here.

*bonus points if you actually immediately recognized this phrase

Dragging holograms with gaze and tapping them in place on a surface

Intro

Simply put, what I was trying to create is the effect you get in the – by now good old – Holograms app: when you pull a hologram out of a menu, it ‘sticks to your gaze’ and follows it. You can air tap and it stays hanging in the air where you left it, but you can also put it on a floor, on a table, or next to a wall. You can’t push it through a surface. That is, most of the time ;). So, like this:

In the video, you can see it follows the gaze cursor floating through the air till it hits a wall to the left and then stops, then goes down till it hits the bed and then stops, then up again till I finally place it on the floor.

A new year, a new toolkit

As happens often on the bleeding edge of technology, things tend to change pretty fast. This is also the case in HoloLens country. I have taken the plunge to Unity 5.5 and the new HoloToolkit, which has a few breaking changes. Things have gotten way simpler since the previous iteration. I would also like to point out that for this tutorial I am using the latest patch release, which at the time of this writing is 5.5.0p3, released December 22, 2016.

Setting up the initial project

This is best illustrated by a picture. Once you have set up the project, this is basically all we need. Both Managers and HologramCollection are simply empty game objects meant to group stuff together; they don’t have any other specific function here. Drag and drop the four blue prefabs into the indicated places, then set some properties for the cube.


The Cube is the thing that will be moved. Now it’s time for ‘some’ code.

The main actors

There are two scripts that play the leading role, with a few supporting roles.

  • MoveByGaze
  • InitialPlaceByTap

The first one makes an object move, the second one actually ends the moving. Apropos, the actual moving is done by our old friend iTween, whose usefulness and application I already described in part 5 of my AMS HoloATC series. So you will need to include it in the project to prevent all kinds of nasty errors. Anyway, let’s get to the star of the show, MoveByGaze.

Moving with gaze

It starts like this:

using UnityEngine;
using HoloToolkit.Unity.InputModule;
using HoloToolkit.Unity.SpatialMapping;

namespace LocalJoost.HoloToolkitExtensions
{
    public class MoveByGaze : MonoBehaviour
    {
        public float MaxDistance = 2f;
        public bool IsActive = true;
        public float DistanceTrigger = 0.2f;
        public BaseRayStabilizer Stabilizer = null;
        public BaseSpatialMappingCollisionDetector CollisonDetector;

        private float _startTime;
        private float _delay = 0.5f;
        private bool _isJustEnabled;
        private Vector3 _lastMoveToLocation;
        private bool _isBusy;

        private SpatialMappingManager MappingManager
        {
            get { return SpatialMappingManager.Instance; }
        }

        void OnEnable()
        {
            _isJustEnabled = true;
        }

        void Start()
        {
            _startTime = Time.time + _delay;
            _isJustEnabled = true;
            if (CollisonDetector == null)
            {
                CollisonDetector = 
                  gameObject.AddComponent<DefaultMappingCollisionDetector>();
            }
        }
    }
}

Up above are the settings:

  • MaxDistance is the maximum distance from your head at which the behaviour will try to place the object on a surface. Further than that, and it will just float in the air.
  • IsActive determines whether the behaviour is active (duh).
  • DistanceTrigger is the distance your gaze has to be from the object you are moving before it actually starts to move. It kind of trails your gaze. This prevents the object from moving in a very nervous way.
  • Stabilizer is the stabilizer made, used and maintained by the InputManager. You will have to drag the InputManager from your scene onto this field to use the stabilizer. It’s not mandatory, but highly recommended.
  • CollisionDetector is a class we will see later – it basically makes sure the object you are dragging is not pushed through any surfaces. You will need to add a collision detector to the game object you are dragging along – or maybe to a child object of it. That collision detector then needs to be dragged onto this field of the MoveByGaze behaviour. This is not mandatory: if you don’t add one, the object you attach the MoveByGaze to will simply follow your gaze and move right through any object. That’s the work of the DefaultMappingCollisionDetector, which is essentially a null object implementation.

Anyway, in the Update method all the work is done:

void Update()
{
    if (!IsActive || _isBusy || _startTime > Time.time)
        return;
    _isBusy = true;

    var newPos = GetPostionInLookingDirection();
    if ((newPos - _lastMoveToLocation).magnitude > DistanceTrigger || _isJustEnabled)
    {
        _isJustEnabled = false;
        var maxDelta = CollisonDetector.GetMaxDelta(newPos - transform.position);
        if (maxDelta != Vector3.zero)
        {
            newPos = transform.position + maxDelta;
            iTween.MoveTo(gameObject,
                iTween.Hash("position", newPos, "time", 2.0f * maxDelta.magnitude,
                    "easetype", iTween.EaseType.easeInOutSine, "islocal", false,
                    "oncomplete", "MovingDone", "oncompletetarget", gameObject));
            _lastMoveToLocation = newPos;
        }
        else
        {
            _isBusy = false;
        }
    }
    else
    {
        _isBusy = false;
    }
}

private void MovingDone()
{
    _isBusy = false;
}

Only if the behaviour is active, not busy, and the first half second is over do we do anything at all. And the first thing we do is tell the world we are busy indeed. This method, like all Updates, is called 60 times a second, and we want to keep things a bit controlled here. Race conditions are annoying.

Then we get a position in the direction the user is looking, and if that exceeds the distance trigger – or this is the first time we get here – we set off finding how far ahead along this gaze we can actually place the object, using the CollisionDetector. If that is possible – that is, if the CollisionDetector does not find any obstacles – we actually move the object using iTween. It is important to note that whenever the move is not possible, _isBusy immediately gets set to false. Also note that the move time is proportional to the distance, so the smaller the distance, the quicker the move completes. This makes sure the final tweaks of setting the object in the right place don’t take a long time. Otherwise, _isBusy is only reset after a successful move.

Then the final pieces of this behaviour:

private Vector3 GetPostionInLookingDirection()
{
    RaycastHit hitInfo;

    var headReady = Stabilizer != null
        ? Stabilizer.StableRay
        : new Ray(Camera.main.transform.position, Camera.main.transform.forward);

    if (MappingManager != null &&
        Physics.Raycast(headReady, out hitInfo, MaxDistance, MappingManager.LayerMask))
    {
        return hitInfo.point;
    }

    return CalculatePositionDeadAhead(MaxDistance);
}

private Vector3 CalculatePositionDeadAhead(float distance)
{
    return Stabilizer != null
        ? Stabilizer.StableRay.origin +
          Stabilizer.StableRay.direction.normalized * distance
        : Camera.main.transform.position +
          Camera.main.transform.forward.normalized * distance;
}

GetPostionInLookingDirection first tries to get the direction in which you are looking. It tries to use the Stabilizer’s StableRay for that. The Stabilizer is a component of the InputManager that stabilizes your view – the cursor uses it as well. This prevents the cursor from wobbling too much when you don’t keep your head perfectly still (which most people don’t – this includes me). The stabilizer takes an average over 60 movement samples, which makes for a much less nervous-looking experience. If you don’t have a stabilizer defined, it just takes your actual looking direction – a ray from the camera’s position along its forward direction.

Then it tries to see if the resulting ray hits a wall or a floor – but no further than MaxDistance away. If it finds a hit, it returns that point; if it does not, it gives a point in the air MaxDistance away along an invisible ray coming out of your eyes. That’s what CalculatePositionDeadAhead does – again trying to use the Stabilizer first to find the direction.

Detect collisions

Okay, so what is this famous collision detector that prevents stuff from being pushed through walls and floors, using the spatial perception that makes the HoloLens such a unique device? It’s actually very simple, although it took me a while to actually get it this simple.

using UnityEngine;

namespace LocalJoost.HoloToolkitExtensions
{
    public class SpatialMappingCollisionDetector : BaseSpatialMappingCollisionDetector
    {
        public float MinDistance = 0.0f;

        private Rigidbody _rigidbody;

        void Start()
        {
            _rigidbody = GetComponent<Rigidbody>() ?? gameObject.AddComponent<Rigidbody>();
            _rigidbody.isKinematic = true;
            _rigidbody.useGravity = false;
        }

        public override bool CheckIfCanMoveBy(Vector3 delta)
        {
            RaycastHit hitInfo;
            // Sweeptest wisdom from 
            //http://answers.unity3d.com/questions/499013/cubecasting.html
            return !_rigidbody.SweepTest(delta, out hitInfo, delta.magnitude);
        }

        public override Vector3 GetMaxDelta(Vector3 delta)
        {
            RaycastHit hitInfo;
            if (!_rigidbody.SweepTest(delta, out hitInfo, delta.magnitude))
            {
                return KeepDistance(delta, hitInfo.point);
            }

            delta *= (hitInfo.distance / delta.magnitude);
            for (var i = 0; i <= 9; i += 3)
            {
                var dTest = delta / (i + 1);
                if (!_rigidbody.SweepTest(dTest, out hitInfo, dTest.magnitude))
                {
                    return KeepDistance(dTest, hitInfo.point);
                }
            }
            return Vector3.zero;
        }

        private Vector3 KeepDistance(Vector3 delta, Vector3 hitPoint)
        {
            var distanceVector = hitPoint - transform.position;
            return delta - (distanceVector.normalized * MinDistance);
        }
    }
}

This behaviour first tries to find a Rigidbody and, failing that, adds one. We need this to check for the presence of anything ‘in the way’. But – this is important – we set ‘isKinematic’ to true and ‘useGravity’ to false, or else our object will come under control of the Unity physics engine and drop to the floor. In this case, we want to control the movement of the object ourselves.

So, this class has two public methods (its abstract base class demands that). One, CheckIfCanMoveBy (which we don’t use now), just tells you whether you can move your object in the intended direction over the intended distance without hitting anything. The other essentially does the same, but if it finds something in the way, it also tries to find a distance over which you can move in the desired direction. For this, we use the SweepTest method of Rigidbody. Essentially you give it a vector and a distance along that vector, and it has an out parameter that gives you info about a hit, should one occur. If a hit does occur, it tries again at 1/4th, 1/7th and 1/10th of the initially found distance. Failing everything, it returns a zero vector. With this rough approach, an object moves quickly in a few steps until it can move no more.

And then it also moves the object back over a distance you can set from the editor. This keeps the object just a little away from the floor or the wall, should that be desired. That’s what KeepDistance is for.

The whole point of having a base class BaseSpatialMappingCollisionDetector, by the way, is a) enabling a null object implementation, as implemented by DefaultMappingCollisionDetector, and b) making it possible to build different collision detectors for different needs. A bit of architectural consideration within the sometimes-bewildering universe of Unity development.
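
Neither of these two classes is shown in this post, but from the way they are used, a sketch could look like this (the signatures follow from the SpatialMappingCollisionDetector above):

using UnityEngine;

namespace LocalJoost.HoloToolkitExtensions
{
    public abstract class BaseSpatialMappingCollisionDetector : MonoBehaviour
    {
        public abstract bool CheckIfCanMoveBy(Vector3 delta);

        public abstract Vector3 GetMaxDelta(Vector3 delta);
    }

    // Null object implementation: never sees an obstacle anywhere
    public class DefaultMappingCollisionDetector : BaseSpatialMappingCollisionDetector
    {
        public override bool CheckIfCanMoveBy(Vector3 delta)
        {
            return true;
        }

        public override Vector3 GetMaxDelta(Vector3 delta)
        {
            return delta;
        }
    }
}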

Making it stop – InitialPlaceByTap

Making the MoveByGaze stop is very simple – set its IsActive field to false. Now we only need something to actually make that happen. With the new HoloToolkit, this is actually very, very simple:

using UnityEngine;
using HoloToolkit.Unity.InputModule;

namespace LocalJoost.HoloToolkitExtensions
{
    public class InitialPlaceByTap : MonoBehaviour, IInputClickHandler
    {
        protected AudioSource Sound;
        protected MoveByGaze GazeMover;

        void Start()
        {
            Sound = GetComponent<AudioSource>();
            GazeMover = GetComponent<MoveByGaze>();

            InputManager.Instance.PushFallbackInputHandler(gameObject);
        }

        public void OnInputClicked(InputEventData eventData)
        {
            if (!GazeMover.IsActive)
            {
                return;
            }

            if (Sound != null)
            {
                Sound.Play();
            }

            GazeMover.IsActive = false;
        }
    }
}

By implementing IInputClickHandler, the InputManager will send an event to this object when you air tap while it is selected by gaze. But by pushing it as a fallback handler, it will get this event even when it’s not selected. The event processing is pretty simple – if the GazeMover on this object is active, it is de-activated. Also, if an AudioSource is detected, its sound is played. I very much recommend this kind of audio feedback.

Wiring it all together

Onto your cube, drag the MoveByGaze, SpatialMappingCollisionDetector and InitialPlaceByTap scripts. Then drag the cube itself onto the CollisionDetector field of MoveByGaze, and the InputManager onto the Stabilizer field. Unity itself will select the right component.


So, in this case I could also have used GetComponent<SpatialMappingCollisionDetector> instead of a field you need to drag something onto. But this way is more flexible – in my app I did not want to use the whole object’s collider, but only that of a child object. Note that I have set the MinDistance of the SpatialMappingCollisionDetector to 1 cm – it will keep an extra centimeter of distance from the wall or the floor.

Concluding remarks

So this is how you can more or less replicate part of the behavior of the Holograms App, by moving around objects with your gaze and placing them on surfaces using air tap. The unique capabilities of the HoloLens allow us to place objects next to or on top of physical objects, and the new HoloToolkit makes using those capabilities pretty easy.

Full code, as per my MVP ‘trademark’, can be found here

28 December 2016

Unity 5.5/HoloLens: AudioPluginMsHRTF.dll has failed the AppContainerCheck check

All right, so I was finally past a couple of deadlines with paid HoloLens apps and was ready to take the plunge and update my app AMS HoloATC to Unity 5.5 and the newest HoloToolkit. After wrestling my way past some interesting breaking changes (more about that later), I was finally at the point where everything worked again as it did before – and started adding new features. Today was the day I thought it was ready enough to justify a new submission to the Store. I had tested everything thoroughly, so what could possibly go wrong?


Why, if it ain’t my dear old friend the WACK. The 'binary analyzer' has found a file AudioPluginMsHRTF.dll that does not pass the AppContainerCheck, and the WACK gives me a whole bunch of suggestions that don't mean anything to me. Apparently I have to add a couple of options when I link the app, but I have not seen a linker since I wrote my last C program, which was – I think – before 2000. Now I know the WACK has some issues, so I tried submitting the app anyway – but no dice. The Windows Store spat it right back at me, with the same error. So now what?

Fortunately, I had dealt with Spatial Sound before and recognized the “Ms HRTF Spatializer” as something I had simply selected in Unity. After numerous (time consuming) failed attempts to juggle the WACK, I took a desperate measure. I went back to a machine that had a previous installation of Unity on it – 5.4 – and checked AudioPluginMsHRTF.dll there. I found it in C:\Program Files\Unity HoloLens 5.4.0f3-HTP\Editor\Data\VR\Unity\WindowsStoreApps, where it reports a size of 2,931,200 bytes. For Unity 5.5, in C:\Program Files\Unity550f3\Editor\Data\UnityExtensions\Unity\VR\WindowsStoreApps\uap_x86, there's a file of only 13,312 bytes – just 13 KB. Yet when you deploy an app with this 13 KB dll to a HoloLens, Spatial Sound works. You can clearly hear the difference – simple stereo versus spatial sound.

So I went and looked around to see where this dll is taken from when the Visual Studio project is built, and found it comes from the plugins directory in the project directory of the generated solution. In my case that’s C:\Projects\AMS_HoloATC\AMS HoloATC\App\AMS HoloATC\Plugins\X86. I replaced the 13 KB AudioPluginMsHRTF.dll with the one I had taken from the 5.4 installation. That fixed the WACK. Unfortunately, it also made the app crash whenever I used Spatial Sound. Nice try, no cigar. But it confirmed my suspicion that the 5.5 version of this dll is indeed the thing that makes the WACK protest.

So what I did, in the end, was just get rid of ‘real’ Spatial Sound. I went to Edit/Project Settings in Unity and set “Spatializer Plugin” to “None”, disabling the Ms HRTF Spatializer. I re-created the Visual Studio solution from complete scratch, and the references to AudioPluginMsHRTF.dll and AudioPluginMsHRTF.pdb were gone. And so was the WACK error.

But of course, now my app has no real spatial sound anymore – it’s reduced to ‘normal’ Unity stereo panning. But a running app is better than one that does not run. I have filed a bug at Unity with a repro and contacted some people within Microsoft. I will keep you posted on any progress. In the meantime, this should keep you going.

For those who want to try out themselves, I have made this zipped up solution including all the files I built, and you can easily check the resulting app does not pass the WACK.

Update 30-12-2016 – the issue has been confirmed as a bug by Unity and will be addressed.

19 November 2016

A HoloLens airplane tracker 8–adding the church and the billboard

Intro

All the code to track aircraft in a HoloLens is already yours. Now the only thing left to write is how the Wortell church and the billboard that appears on the map came to be. It’s not technically a part of the aircraft tracker per se, but a fun exercise in code that you might use if your company is near an airport too.

Required assets

We need two things:

  • A Church
  • A Billboard (and a logo to put on it)

Getting and importing the church

This was actually the hardest one. It turns out that there are lots of churches on cgtrader, but if you download for instance this church as FBX or OBJ and import it into Unity, you don’t get the promised church on the left – you get the ghost thing on the right!


As one of my colleagues – who lived in the UK for a long time before joining us – would put it: “it’s not quite what I had in mind”. And this one actually looks quite good – I have seen churches with not only textures but complete parts missing. Apparently not everyone checks the result of every format they convert to before uploading. Fortunately, the author writes he or she created the model in Sketchup. Ain’t that nice, as you can download a free trial version of that program. So download that – and the original Sketchup model. Unpack the zip file, and rename the 3d-model.skp that’s in there to church.skp. Then double-click it in Explorer and Sketchup will open, possibly after asking you about trial licensing first. I had to zoom out a little, but this is what I got:


And that was indeed what I had in mind. Now hit File/Export/3D model, select FBX file (*.fbx) for “Save as type”, choose a name that suits you and hit save.

If you would import the resulting church.fbx into Unity you get this:


Although not nearly as pretty as in Sketchup, that’s more like it! To keep your project clean and make it easier to swap out this church for another one (or another building) later, don’t drag the FBX into the root of your assets, but first add a folder, then drag in the church, as displayed on the right. This way, the materials of all the imported objects don’t get mixed up.

To keep the church and the (future) billboard together, I created an empty GameObject “Wortell” in HologramCollection and dragged the church in there. You will find the church too big (like, way way too big), so scale it to 0.025 in all directions. Then set location X to –0.076 and Z to 0.2915. Finally, set Y rotation to 158.

This will put the church nicely where it’s more or less supposed to be – although I think it’s about three times too big with respect to the map, but otherwise it would be nearly invisible. For the record: this is not a model of the actual church that was converted into the Wortell offices, but it resembles it a little.

Getting a billboard and a logo

Getting this one is actually very easy. Go to the Unity Asset Store – that is, click Window/Asset Store – and type “Billboard” into the filter. You will get an enormous amount of hits, but you just need the very first one, which I indicated in red. Click on it, then click “Download” (or “Import”, if you have downloaded it before).

When you actually import, make sure to deselect the sample scene, as we really don’t need it:


Drag the billboard prefab from Assets/Billboard into the Wortell object in the hierarchy. You will see that once again this is way too big. Scale it to 0.03 for x, y and z, give it the same position as the church and set rotation to 0 for x, 0 for y, and 90 for z. The billboard will now look like this:


So the only thing missing is the Wortell logo – or whatever you want to put on it. I am not sure if this is the right procedure, but I went with this:

  • Add a 512x512 image with your logo to Assets/App/Textures. I took this one. Notice the Wortell text is just about halfway down – this should put the text at the bottom of the billboard (as is customary with our logo)
  • Click on the Billboard in the hierarchy.
  • Expand the “board” material in the inspector
  • Drag your logo texture on the square that says “None (texture)”
  • Set X and Y tiling both to 2

The net result should be like the image on the left, and the billboard standing above the church should now look like the image on the right:


Getting the logo the way you want it to be takes a little fiddling. I really can’t tell you why you need to set those tiling values – I just found out it worked this way.

Adding some life to it

It looks pretty nice, but that billboard has all the interactivity of a wooden shed – it’s just sitting there doing nothing. We want it to do two things:

  • When it’s showing, it needs to be rotated to face the user – so it’s always readable
  • Initially it won’t be visible at all; air tapping the church will make it visible (and invisible again, like a toggle)

After all, that’s what’s in the original video, and this very feature was even demonstrated live on Windows Weekly by none other than Mary Jo Foley herself  ;) when she and Paul Thurrott visited the Wortell offices on November 16, 2016.

Making the billboard face the user

This requires actually zero programming effort, as a billboard behaviour is already in the HoloToolkit. So basically all you have to do is:

  • Go to Assets/HoloToolkit/Utilities/Scripts
  • Locate the Billboard.cs script
  • Drag it on top of your billboard game object in the hierarchy
  • Set the script’s pivot axis to X


And you are done. If you deploy the app now to a HoloLens or the emulator, you will see that the billboard now follows the camera. This may seem odd, as the billboard should rotate around the axis that runs from top to bottom – in other words, Y. But since the billboard is by default rotated 90⁰ around its Z-axis and rotates around its own axis rather than a world axis, this makes sense. Kind of ;)

If you are smart, you are now wondering why I did not use that for the aircraft label, as the billboard script also supports full free axis movement. That is simple – at the time I wrote the code for the label, I simply had not thought of looking into the HoloToolkit. So I wrote a custom script. And I kept it that way, as a way to show you that I am still learning as well :)

Preparing the billboard folding in or out by air tapping

So what we want to achieve is to be able to air tap the church, but this should have an effect on the billboard. I have solved this as follows:

  • Create a collider on the church
  • Make a behaviour attached to the billboard that handles the folding in/out of the billboard
  • Make a little behaviour attached to the church that receives the air tap and passes that to the behaviour mentioned in the previous bullet point.


First the collider. As explained in the previous episode, to be able to show a cursor on an object or select it, it needs to have a collider. I once again took a box collider for easy selection.

But then we notice something peculiar when we look at the church from the top:


Meh. That’s annoying. Apparently the creator of this model built it at an angle, or there’s something wrong with the conversion. Either way, we will have to fix it. We do this by digging into the church object in the hierarchy and rotating the Group1 component 25⁰ over its Y-axis, so it’s visually aligned with the collider:


Now of course the church itself is no longer aligned with the road. But if you subtract the 25⁰ from the 158⁰ by which the church object is rotated:


everything is now correctly rotated and aligned. And now for some action.

Adding the behaviours

We are almost there. First, we need the behavior that does the actual folding and unfolding of the billboard:

using System;
using UnityEngine;

public class BillBoardToggler : MonoBehaviour
{
  private float _startScale;
  void Start()
  {
    _startScale = transform.localScale.x;
    Hide();
  }

  public void Toggle()
  {
    if (Math.Abs(transform.localScale.x) < 0.001f)
    {
      // Pop up
      transform.localScale = new Vector3(0, _startScale, _startScale);
      iTween.ScaleTo(gameObject,
          new Vector3(_startScale, transform.localScale.y,
              transform.localScale.z), 1f);
    }
    else
    {
      // Pop down
      iTween.ScaleTo(gameObject, iTween.Hash("scale",
          new Vector3(0, transform.localScale.y, transform.localScale.z),
          "time", 1f, "oncomplete", "Hide"));
    }
  }

  private void Hide()
  {
    transform.localScale = new Vector3(0, 0, 0);
  }
}

So at start we save the x scale, and hide the complete billboard by scaling it to 0,0,0, effectively making it infinitely small – and thus invisible. Now if the Toggle method is called:

  • If the x scale is smaller than 0.001, the billboard apparently was popped down. So we first set the Y and Z scale to their original values – keeping X at 0 – then proceed to animate the X scale from 0 to its original value, using our old friend iTween, which we already saw in the 5th episode of this series. This lets the billboard rise out of the ground.
  • Otherwise, the billboard was already popped up, so we animate the x scale to 0. If you look closely, you will see the billboard being reduced to its outline on the airport map. But just after that, Hide is called, letting the outline also pop out of existence.

Drag this behaviour on top of the billboard. Only one thing left to do – make sure this thing gets called when the user air taps the church. For that, we only need this simple behaviour:

using UnityEngine;

public class ChurchController : MonoBehaviour
{
  private BillBoardToggler _toggler;

  // Use this for initialization
  void Start()
  {
    _toggler = transform.parent.GetComponentInChildren<BillBoardToggler>();
  }

  private void OnSelect()
  {
    if (_toggler != null)
    {
      _toggler.Toggle();
    }
  }
}

Drag this on top of the church. On start it will try to find a BillBoardToggler script on a sibling, and when the OnSelect method is called – a standard message sent by the HoloToolkit GestureManager – it will call the Toggle method on that, letting the billboard pop up (or go down again). So move your gaze cursor to the church and air tap it:


Conclusion

So that’s it. No more beans to spill. This is the app and all of the app, except for the live data feed. In fact, it’s better than the app in the store, because in backtracking the road I took making this app, I skipped a few detours, false starts and messy things that did not end up in this cleaned-up tutorial code. That is why I can recommend that everyone write an extensive tutorial once in a while – because not only the students will learn, but the teacher as well.

The final and completed code of this app can be found here. I don’t rule out the possibility that I will write about extensions to this app at some later point, but this is what I wanted to share. I enjoyed writing about it immensely, and I hope you enjoyed reading. If you made it this far – kudos to you. I only ask you one thing – if this inspires you to write HoloLens apps too, consider writing a tutorial as well. Maybe I will learn something from you as well ;)

13 November 2016

A HoloLens airplane tracker 7–activating an aircraft by air tapping

Intro

In this episode I am going to describe how airplanes are selected in the sky by air tapping them. This requires some changes to both the Azure Mobile App service and the app. The change to the service is necessary because I want to share the selected airplane between all HoloLenses using the app. Why? Because ;). You can actually do this with the app in the store. This is why sometimes airplanes get selected even when you don’t select them – someone else, somewhere in the world, selected an airplane.

Adapting the service

We need to be able to post to the service which aircraft is active, and in the data that is pulled from the server the active airplane needs to be indicated. Remember how in the very first episode the “Flight” object had an unused “IsActive” field? It won’t be unused much longer. First, you add a reference to System.Runtime.Caching:


Then, add using statements:

using System.Runtime.Caching;
using System.Net.Http;
using System.Net;

Proceed by adding the following two fields:

private ObjectCache Cache = MemoryCache.Default;

private const string ActiveIdKey = "ActiveId";

We will need to add a method to post the desired active plane’s Id to the server:

[HttpPost]
public HttpResponseMessage Post(string activeId)
{
  Cache[ActiveIdKey] = activeId;
  return Request.CreateResponse(HttpStatusCode.OK);
}

Then, in the Get method, insert right before the “return flights” statement the following code:

var activeKey = Cache[ActiveIdKey] as string;
if (activeKey != null)
{
  var activeFlight = flights.FirstOrDefault(p => p.Id == activeKey);
  if (activeFlight != null)
  {
    activeFlight.IsActive = true;
  }
}

This is a rather naive way of ‘storing’ data on a server, but I am just too darn lazy to set up a database if I only want to keep one simple id active. So I am using the MemoryCache. If you post an id to the Post method, it will simply set the cache key – which will be copied into the data on the next data download, provided it’s actually an id used in the data. Publish your service, and let’s go to the Unity project again.
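
For reference, the part of the Flight class that matters here is presumably just this:

public class Flight
{
    public string Id { get; set; }

    // Set server-side for the airplane that is currently selected
    public bool IsActive { get; set; }

    // ... position, heading and other fields omitted
}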

Adding a gaze cursor – and messing a bit with it

The HoloToolkit has a number of gaze cursors, but I am not quite happy with those, so I made one myself – by adding a bit to an existing one. The procedure is as follows:

  • From HoloToolkit/Input/Scripts, drag HandsManager on top of the HologramCollection
  • From HoloToolkit/Input/Scripts, drag GazeManager on top of the HologramCollection
  • From HoloToolkit/Input/Prefabs, drag Cursor on top of the HologramCollection

Then we create a little extra behavior, called HandColorCursor

using UnityEngine;
using HoloToolkit.Unity;

public class HandColorCursor : Singleton<HandColorCursor>
{
  // Use this for initialization
  private GameObject _cursorOnHolograms;

  public Color HandDetectedColor;

  private Color _originalColor = Color.green;

  void Start()
  {
    _cursorOnHolograms = CursorManager.Instance.CursorOnHolograms;
    _originalColor = 
      _cursorOnHolograms.GetComponent<MeshRenderer>().material.color;
  }

  // Update is called once per frame
  void LateUpdate()
  {
    if (_cursorOnHolograms != null && HandsManager.Instance != null)
    {
      _cursorOnHolograms.GetComponent<MeshRenderer>().material.color =
        HandsManager.Instance.HandDetected ? Color.green : _originalColor;
    }
  }
}

Drag this behavior on top of the Cursor prefab. Now if you build and run the app and point the gaze cursor at the airport, it should become a kind of blue circle. If you move your hand into view, the cursor should go green.


If you move it back, the cursor should become blue again. Unfortunately, if you point it at anything else, you will still see the fuzzy ‘CursorOffHologram’ image. That is because the airport is the only thing that has a collider. To make aircraft selectable, we will need to do some work on the AircraftHolder.

Updating the airplane

To be selectable, an aircraft needs to have a collider. I have chosen a box collider that is pretty big with respect to the actual airplane model, to make it easier to actually select the airplane (as it can be quite some distance away, or at least seem to be). A box also makes it possible to see the cursor sitting pretty stable on the selection area as it hits the collider (as opposed to a mesh collider, which will make it wobble around). Go to your AircraftHolder prefab in the App/Prefab folder, and add an 80x80x80 Box Collider.


And you will indeed see the cursor appear over the airplane


Getting some selecting done – code first

From HoloToolkit/Input/Scripts, drag GestureManager on top of the HologramCollection. GestureManager sends an OnSelect message to any GameObject that happens to be hit by your gaze (and where, incidentally, the cursor is hanging out), using this code:

private void OnTap()
{
  if (FocusedObject != null)
  {
    FocusedObject.SendMessage("OnSelect");
  }
}

So there needs to be an OnSelect method in the AircraftController to handle this. First we will need to add the following fields to the AircraftController:

private bool _isActive;
private AudioSource _sound;
private Dictionary<MeshRenderer, Color> _originalColors;

To the Start method, we add the following code, just in front of the “_initComplete = true;” statement

_sound = GetComponent<AudioSource>();
_originalColors = new Dictionary<MeshRenderer, Color>();
foreach (var component in GetComponentsInChildren<MeshRenderer>())
{
  _originalColors.Add(component, component.material.color);
}

This initializes some things: it gets the audio source and reads the colors of all subcomponents – so we can easily revert them later. Then add this method:

private void SetHighlight()
{
  if (_isActive != _flightData.IsActive)
  {
    foreach (var component in GetComponentsInChildren<MeshRenderer>())
    {
      if (component.material.color == Color.white || 
          component.material.color == Color.red)
      {
        component.material.color = _flightData.IsActive ?
           Color.red : _originalColors[component];
      }
    }
    if (_flightData.IsActive)
    {
      _sound.Play();
    }
    else
    {
      _sound.Stop();
    }
    _isActive = _flightData.IsActive;
  }
}

If the _isActive field of the current airplane differs from the new _flightData.IsActive value, then flip white parts to red – or red parts back to their original colors – depending on the actual value of _flightData.IsActive. It also turns on the sound – the annoying sonar ping you saw in the movie I made in the first episode. And fortunately it can also turn it off ;)

Next, add a call to this new method at the end of the already existing “SetNewFlightData” method. Then, add this code:

private void OnSelect()
{
  SendMessageUpwards("AircraftSelected", _flightData.Id);
}

This works like SendMessage, but goes upwards through the object hierarchy instead of down to the children. So this will look for an AircraftSelected method in a parent.

Save the file. Now we go to the DataService, and add the following method. Make sure it is sitting between #if UNITY_UWP – #endif directives.

#if UNITY_UWP
public async Task SelectFlight(string flightId)
{
  var parms = new Dictionary<string, string>
  {
    { "activeId", flightId }
  };
  await _client.InvokeApiAsync<List<Flight>>("FlightData", 
           HttpMethod.Post, parms);
}
#endif

This does the actual calling of the App service. And then we need to go to the AircraftLoader, which plays the select sound and passes the call through to the DataService:

public void AircraftSelected(object id)
{
  _sound.Play();
#if UNITY_UWP
  DataService.Instance.SelectFlight(id.ToString());
#endif
}

Save everything, go back to Unity.

Getting some selecting done – connecting the dots in Unity

There are a few things we need to do in Unity, but one thing is very important – one I forgot in my first CubeBouncer demo app and in the app that’s in the store, simply because I was not aware of it. Fortunately, someone who shall remain nameless here made me aware of it.

First, in the main Unity window, hit Edit/Project Settings/Audio. Then select “Ms HRTF Spatializer” for “Spatializer Plugin”.


Then open the AircraftHolder prefab again, and add an AudioSource. Important advice, also from the person mentioned above: check the “Spatialize” checkbox. Then select the “Loop” checkbox and uncheck the “Play On Awake” checkbox. Finally, find a sound you would like to be played by a selected airplane. I took a sonar ping, put it in the Assets/App/Audio folder, and dragged it onto the Audio Clip property. I also moved the “Spatial Blend” slider all the way to the right, but I don’t think that is necessary anymore now that I use the Spatializer. Net result:


And sure enough, if you now air tap an airplane it shows like below, and it starts pinging:


Adding confirmation sound to selection

I think it’s good practice to add a confirmation sound of some sort to any user input that is ‘understood’. This is especially necessary with speech commands, but I like to add it to air taps too. Sometimes the effect of what you do is not immediately visible, and an audio confirmation that the HoloLens has understood you and is working on executing your command is very nice. I tend to add a kind of “pringgg” sound. I used the “Ready” file that is in Assets/App/Audio.

First, add an AudioSource to the HologramCollection and drag the “Ready” sound on top of the “AudioClip” field. Don’t forget to uncheck the “Play On Awake” checkbox. Then go to the AircraftLoader behaviour. Since this receives the message from every AircraftController and passes it through to the DataService, it is a nice central place to play the confirmation sound. So we add a new field:

private AudioSource _sound;

that needs to be initialized in the Start method:

_sound = GetComponent<AudioSource>();

and finally in the AircraftSelected method, insert this line at the top of the method:

_sound.Play();
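Pieced together, the sound-related parts of AircraftLoader now look roughly like this (a sketch – the actual class contains more loading logic than shown here):

using UnityEngine;

public class AircraftLoader : MonoBehaviour
{
  private AudioSource _sound;

  void Start()
  {
    // The AudioSource we just added to the HologramCollection
    _sound = GetComponent<AudioSource>();

    // ... existing aircraft loading logic ...
  }

  public void AircraftSelected(object id)
  {
    // Confirmation 'pringgg' first, then pass the id on to the App Service
    _sound.Play();
#if UNITY_UWP
    DataService.Instance.SelectFlight(id.ToString());
#endif
  }
}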

And we are done. Now if you air tap an aircraft, your HoloLens will play a ‘pringgg’ sound. That is, if you tapped correctly.

Conclusion

And that’s that. Selecting an airplane by air tapping it, and posting an id to the Azure Mobile App Service, is pretty easy indeed. A lot of the interaction components are basically dragged in from the HoloToolkit; very little actual programming is required. This is the power of the HoloToolkit – I use as much as possible of what is already there, and make as little as possible myself. Like any good programmer, I am a lazy programmer ;)

This blog post was written far, far away from home, in Sammamish, WA, USA, shortly after the MVP summit that left me very inspired – again. After posting this I will soon retire – and go to SEA-TAC tomorrow for my flight back home to the Netherlands.

Code can be found, as always, here on github.

29 October 2016

Running a HoloLens app on a Raspberry PI2 – and controlling it with a keyboard

Intro

No, I am not drunk, nor did I smoke some peculiar produce our capital Amsterdam is famous for, and I did not bump my head a bit too hard on my car while getting out of it either. I was just curious how far the U of UWP would carry me. And it turns out: a lot further than I thought. It is actually possible to run a HoloLens app on a Raspberry PI2. When I saw it happen, I had a bit of trouble believing my own eyes, but it is possible indeed – with surprisingly little work. Although I added a little extra work to make it more fun.

Parts required & base material

  • One Raspberry PI2 or 3 running the latest version of Windows 10 IoT Core.
  • A keyboard that plays nice with the above
  • A screen to see the result on

I used a Raspberry PI2 running the latest Windows 10 IoT Core Insider’s build and a NexDock, mainly because I have one, it’s handy and it looks cool. Be aware that to connect the NexDock keyboard you will need a Bluetooth dongle for your RP2 – or you will need to use an RP3, which has Bluetooth on board.

This post builds upon my (as of the time of this writing still incomplete) series about the HoloLens Aircraft Tracker, but only because that’s a cool app to demo the idea on. It is not part of the series; I made a separate branch of the app at the end of the 6th episode. So this is like an interlude.

Some brute-force demolishing to get things to compile

Basically it’s as simple as setting the build target to ARM:

image

The only problem is: if you do that and then rebuild the project, it will complain about this:

image

For the very simple reason that this DLL, which is part of the HoloToolkit, is not available for ARM. Hell, the HoloLens runs x86 only, so why should it be? No-one ever anticipated a peculiar person like me trying to run it on a bloody Raspberry PI2.

There is a rather crude way of fixing it. We are not using SpatialUnderstanding anyway, most certainly not on the Raspberry PI2. So I got rid of the plugin that Visual Studio complained about, by going to this folder in Unity:

image

And hitting delete. Rebuild the project in Unity, try to compile it in Visual Studio. Alas. Still no success:

image

But this is a different plugin DLL – PlaneFinding. Also something we don’t use. Now this is a bit confusing, as there are three folders containing a PlaneFinding.dll. Maybe that’s an error in the HoloToolkit.

image 

Whatever. Let’s get rid of the whole Plugins folder under SpatialMapping. Once again, rebuild in Unity, compile in Visual Studio. And wouldn’t you know it…

image

The sweet smell of success. Now I want you to understand: this is no way to go about making a HoloLens project compatible with a Raspberry PI2. This is using a mallet to hit very hard on a square peg to make it go through a round hole. I have demolished two components you might want to use in a future version of your HoloLens app. But this is not serious development – this is hacking for fun’s sake, to prove a point. That is why I made a separate branch ;)

Controlling the app’s viewpoint

When you run the app on the HoloLens, this is no issue at all. If you want to see the airport and its planes from closer up or from a different angle, you just move your head, or walk to the object of your interest – or around it. If it runs on a screen, things are a bit different. So I created this little behaviour (with “ou” indeed, which suggests the good folks at Unity have been educated in the Queen’s English) that more or less replicates the key mappings of the HoloLens emulator.

It’s crude and ugly, and the result stutters a bit – but it does the job:

using UnityEngine;

public class KeyboardCameraController : MonoBehaviour
{
  public float RotateSpeed = 0.4f;
  public float MoveSpeed = 0.02f;
  public float FastSpeedAcceleration = 7.5f;

  private Quaternion _initialRotation;
  private Vector3 _initialPosition;

  void Start()
  {
    // Remember the starting pose, so ESC can reset the camera to it
    _initialRotation = Camera.main.transform.rotation;
    _initialPosition = Camera.main.transform.position;
  }

  void Update()
  {
    // Holding either SHIFT key makes everything go faster
    var speed = 1.0f;
    if (Input.GetKey(KeyCode.LeftShift) || Input.GetKey(KeyCode.RightShift))
    {
      speed = FastSpeedAcceleration * speed;
    }

    // Arrow keys rotate the camera around its own position,
    // i.e. they let you look around
    if (Input.GetKey(KeyCode.LeftArrow))
    {
      Camera.main.transform.RotateAround(Camera.main.transform.position,
          Camera.main.transform.up, -RotateSpeed * speed);
    }

    if (Input.GetKey(KeyCode.RightArrow))
    {
      Camera.main.transform.RotateAround(Camera.main.transform.position,
          Camera.main.transform.up, RotateSpeed * speed);
    }

    if (Input.GetKey(KeyCode.UpArrow))
    {
      Camera.main.transform.RotateAround(Camera.main.transform.position,
          Camera.main.transform.right, -RotateSpeed * speed);
    }

    if (Input.GetKey(KeyCode.DownArrow))
    {
      Camera.main.transform.RotateAround(Camera.main.transform.position,
          Camera.main.transform.right, RotateSpeed * speed);
    }

    // WASD moves the camera along its own right and forward axes
    if (Input.GetKey(KeyCode.A))
    {
      Camera.main.transform.position +=
          Camera.main.transform.right * -MoveSpeed * speed;
    }

    if (Input.GetKey(KeyCode.D))
    {
      Camera.main.transform.position +=
          Camera.main.transform.right * MoveSpeed * speed;
    }

    if (Input.GetKey(KeyCode.W))
    {
      Camera.main.transform.position +=
          Camera.main.transform.forward * MoveSpeed * speed;
    }

    if (Input.GetKey(KeyCode.S))
    {
      Camera.main.transform.position +=
          Camera.main.transform.forward * -MoveSpeed * speed;
    }

    // PageUp/PageDown move the camera up and down
    if (Input.GetKey(KeyCode.PageUp))
    {
      Camera.main.transform.position +=
          Camera.main.transform.up * MoveSpeed * speed;
    }

    if (Input.GetKey(KeyCode.PageDown))
    {
      Camera.main.transform.position +=
          Camera.main.transform.up * -MoveSpeed * speed;
    }

    // ESC resets the camera to its starting pose
    if (Input.GetKey(KeyCode.Escape))
    {
      Camera.main.transform.position = _initialPosition;
      Camera.main.transform.rotation = _initialRotation;
    }
  }
}

Drag this behaviour on top of the camera (or, for that matter, anything at all). You will be able to control the camera’s viewpoint via what I have been told are the standard PC gamer’s WASD and arrow key mappings. PageUp/PageDown will move the camera up and down, ESC will bring you back to the viewpoint the app started with, and holding SHIFT will make things go faster. Purists will notice the movement is frame rate dependent – multiplying by Time.deltaTime in Update would fix that – but for a quick hack like this it will do.

Deploy and run

Deploy the app to your Raspberry PI2 or 3 using Visual Studio – and select your PI as remote machine. Use either the “Release” or “Master” build configuration. The latter should – in theory – run faster, but takes much (as in very much) longer to compile and deploy. Also, if you choose “Master”, the app does not always start on my PI; sometimes it is only deployed, so you have to get it going via the device’s Device Portal. This may have something to do with me running an Insider’s build.

Either way – WiFi, either built-in or via a dongle, is fine, but unless you particularly like waiting, connect your PI2/3 to a wired connection while deploying. I also observed issues when the app runs with only WiFi connectivity – data comes in pretty slowly, and it can take minutes before the first airplanes appear, while on a wired connection it takes 30 seconds, max. Apparently my 2.4GHz network (which the PIs use) is not that strong compared to the 5GHz band all the other devices in the house employ.

And it works. Here are some pictures and a video of the app in action. The performance is not stellar (big surprise, matching a $35 device against a $3000 device that comes straight out of Star Trek), but still – it works pretty reasonably.

IMG_4786 IMG_4792

Conclusion

Looks like the U in Universal Windows Platform is pretty universal indeed. Microsoft weren’t talking rubbish about this promise. This app can also be deployed to PCs (I learned that by accidentally choosing “Local Machine” as a target) and I don’t doubt it will run on phones and even Xbox One, although I think I would have to ponder a little about how to control the viewpoint on those devices, as they don’t have a keyboard. Still, an impressive result.

Code can be found here.