The First Financial HoloLens App
Working at 8ninths as one of the first seven Microsoft HoloLens developers, I was tasked with the massive design job of creating the first futures trading application for HoloLens. This project was a learning experience in every respect, from redefining my working process to learning how to design for an immersive virtual environment.
How it Began
8ninths teamed up with the Citi Innovation Lab to develop a Proof of Concept illustrating how mixed reality is the next game-changing technology for finance.
Financial traders’ current workstations provide an abundance of data, but in formats that are difficult to process and prioritize—long streams of figures on multiple monitors, each representing something different. We wanted to use a combination of 2D and 3D presentations of data to optimize the trader’s ability to extract meaning from the information, quickly and accurately. If we could increase efficiency while reducing time, cost, and the cognitive load of working with abstract data, we could bring the same strengths to bear across any domain involving data interpretation and collaboration. Thus, we created the Holographic Workstation—a new paradigm that has ramifications far beyond the world of finance to information workers of all kinds.
Design and development for Hololens is unlike any other process we’d experienced. This was brand new territory and thus we started by developing a clear taxonomy. We developed HoloLens Design Patterns by breaking down and examining the core building blocks of early holographic experiences. Defining and naming these gave us specific foundational elements with which to work.
Our Discovery Process
Our contact on the futures trading floor at Citibank gave us an overview of the issues with current futures trading applications. While this provided a cursory understanding of the space and some of its problems, we would come to find that it was a continual learning process. The following pain points are what we gleaned:
- lack of prioritization within six to eight screens of 2D information
- lack of easily discernible centralized knowledge
- inefficiency in navigating between windows and tabs
- inefficiency in recognizing critical patterns and market changes
- loss of opportunity for collaboration and dialogue
- loss of the “human element” and the “feel of what is going on in the market”
Using the HoloLens Design Patterns and drawing from the pain points we’d identified, we defined specific areas on which to focus our efforts:
- A physical workstation
- Volumetric data visualization
- Collaboration between traders
Design & Development
Because the project involved both a physical environment and a holographic one, we were able to start work before we even had the HoloLens device in our hands, and solved a lot of problems up front with cardboard and tape.
The workstation framework was designed around an overarching concept of broad-on-top to specific-on-the-bottom, with a three-tiered shelf structure. The shelves were placed at a specific distance from the user and at specific proportions to optimize our usage of the device’s field of view (FOV): each tier’s bounds roughly correlate with view capabilities. One goal here was to let the user focus on a single part of the workstation at a time, letting that tier fill their view while the others persist in memory until they glance back up at them. It was exciting to turn the supposedly problematic limited FOV into a feature for our product.
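Sizing a shelf tier so it roughly fills the visible frame is a matter of working backward from the device’s FOV. A minimal sketch of that geometry (the 30-degree figure is an assumption based on the commonly reported horizontal FOV of the first-generation HoloLens, not a number from this project):

```python
import math

def panel_width(distance_m: float, fov_deg: float) -> float:
    """Width in meters subtended by a horizontal FOV at a given distance."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# A tier placed 2 m from the user fills a ~30-degree horizontal FOV
# when it is roughly this wide (in meters).
print(round(panel_width(2.0, 30.0), 2))
```

The same calculation, run per tier, gives bounds that match what the user can actually see at each shelf’s distance.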
Once we had the HoloLens, we began working out our design workflow by creating a basic framework “sandbox” that supported static and animated models. We explored and built out important tools, asking questions like: “What would our cursor look like? What’s the logic for using gaze+tap to move an object? How do you distinguish move from rotate? What’s the right scale?” Letting the designers arrange objects in-Lens was an important step.
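The gaze+tap questions above boil down to a small interaction state machine: the first tap picks up whatever is under the gaze cursor, the object then follows the gaze, and a second tap places it. A hypothetical sketch of that logic (class and state names are ours for illustration, not the app’s actual code):

```python
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()
    MOVING = auto()

class GazeTapController:
    """Toggle an object between resting and 'carried by gaze' on each tap."""
    def __init__(self):
        self.mode = Mode.IDLE
        self.target = None

    def on_tap(self, gazed_object):
        if self.mode is Mode.IDLE and gazed_object is not None:
            self.mode = Mode.MOVING   # first tap: pick up the gazed object
            self.target = gazed_object
        elif self.mode is Mode.MOVING:
            self.mode = Mode.IDLE     # second tap: place it where it is
            self.target = None

    def on_gaze_update(self, gaze_point):
        if self.mode is Mode.MOVING and self.target is not None:
            self.target["position"] = gaze_point  # carried object follows gaze

# Usage: pick up a cube, move the gaze, place it.
ctrl = GazeTapController()
cube = {"position": (0.0, 0.0, 0.0)}
ctrl.on_tap(cube)
ctrl.on_gaze_update((1.0, 0.5, 2.0))
ctrl.on_tap(cube)
print(cube["position"])
```

Distinguishing move from rotate would mean adding another mode to the same machine, which is exactly the kind of question the sandbox let designers answer in-Lens.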
As the project progressed, useful tool ideas kept surfacing, and our engineers integrated them into the sandbox toolkit; gradually, though, the design work demanded more functional implementation. The engineering team pivoted toward prototyping functionality in collaboration with the designers, focusing on getting motion and interaction right and on making the iteration cycle as fast as possible.
Working with HoloLens before its APIs had stabilized, we dealt with a lot of shifts and were constantly on our toes responding to the configuration of the latest release. We even rebuilt some functionality, such as gesture and gaze tracking, at a fairly low level; knowing that code inside and out meant we could fix any inconsistencies ourselves.
There were special challenges with networking between the Surface Pro and the Lens. In theory it’s easy—they’re both essentially Windows 10 machines—and in practice today it’s no problem at all, but at the time there were some very low-level bugs causing network packets to be unreadable. Scrappy prototyping code saves the day again! Our workarounds were not secure or shippable, but this was a concept piece and it was more important for us to communicate the design; we were confident we could rebuild the technical solution to updated specs for the next phase of the project.
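The post doesn’t detail the actual workaround, but a common scrappy defense against unreadable packets is to frame each message with a length prefix and a checksum, so corrupted or truncated frames can be detected and simply dropped or re-requested. A sketch of that idea (the JSON payload is a made-up example, not Citi data):

```python
import struct
import zlib

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length and CRC32 so corruption is detectable."""
    return struct.pack("!II", len(payload), zlib.crc32(payload)) + payload

def unframe(data: bytes):
    """Return the payload if the frame is intact, or None if it is damaged."""
    if len(data) < 8:
        return None
    length, crc = struct.unpack("!II", data[:8])
    payload = data[8:8 + length]
    if len(payload) != length or zlib.crc32(payload) != crc:
        return None
    return payload

msg = frame(b'{"contract": "ZC", "bid": 375.25}')
print(unframe(msg))        # intact frame round-trips
print(unframe(msg[:-3]))   # truncated frame is rejected -> None
```

This is exactly the kind of code that is neither secure nor shippable, but it keeps a demo running while the underlying platform bugs get fixed.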