Craft

CRAFT – A new way of creating art in immersive realities

Microsoft Design Challenge
Design a product, service, or solution that solves for exclusion in a deskless workplace. The prompt asked us to apply inclusive design principles and build a solution that enables people with disabilities to work in deskless workplaces, improving diversity in design.

A deskless workplace is one where people work in environments unconstrained by a traditional office setting. Designing for inclusivity opens up experiences to more people and reflects how people adapt to the world around them.

After exploring multiple options, our team decided to build a multi-modal tool called ‘Craft’ for people with physical disabilities to create 3D art in Virtual Reality. The idea was to build a tool that enabled people with limited mobility in their arms to draw using voice commands and eye movements.

Problem Statement – How might we design a multi-modal tool for people with limited mobility in their arms to create art in Virtual Reality?

Solution – We designed ‘Craft’, a tool built on principles of multi-modality that allows people to create art in Virtual Reality using voice commands and eye movements. Multi-modal human-computer interaction refers to interacting with virtual and physical environments through natural modes of communication such as speech and gaze.

Team: Cherisha Agarwal, Joanna Yen, Pratik Jain, Shimin Gu, Raksha Ravimohan, Srishti Kush

My role: Working with a diverse team with different backgrounds and skill sets helped push the boundaries of this project and bring an impactful idea to life. We all participated in the design process, from research and interviews to prototyping and testing. I conducted user interviews and testing, analyzed the results, and documented the insights, and I contributed to the project’s various prototypes. I designed the personas and several other digital assets, and I was primarily responsible for the style guide for our project booklet, designing the entire booklet layout and its graphics.

Duration: March – April 2018, 2 months

Tools used: Sketch | InDesign | Unreal | Photoshop | Illustrator

Design Process

Research & Brainstorming

Card Sorting

Since we had the freedom to create any product we wanted, we brainstormed different professions and disabilities the team could focus on, based on our interests. After deciding on the professions, we followed this process:

  • Listing all the tasks a person performs, to identify the points at which a disability would prevent them from working normally
  • Card sorting to surface problems we potentially wanted to work on. This helped us narrow our ideas down to three scenario-and-disability pairs, including an artist with limited hand mobility and a cashier with physical disabilities using a kiosk.
  • After discussing the idea with our professor, Dana Karwas, and getting feedback from the Microsoft team, we decided to go ahead with artists with limited hand mobility creating drawings in VR.

Insights from Artists and Designers

To get more clarity on our idea, we decided to speak with artists and creative technologists to understand their workflow. These were mainly NYU students or working professionals in illustration, cinema, music, 3D modelling, graphic design, and games. They helped us identify pain points and understand the learning curve of existing tools.

The responses gave us insight into the tools and software the artists used, and into how solutions might be framed if the same software were used by people with disabilities. We also discussed multi-modal input as a means of interaction for people with limited hand mobility, and they pointed out where eye tracking could work within the software.

Secondary Research

The market is already flooded with options for drawing art in Virtual Reality, such as Tilt Brush, Quill, Medium, and Blocks by Google. Apart from these popular tools, many newer ones are now available, including MakeVR, Gravity Sketch, and Mozilla A-Painter.

We tried VR headsets ourselves to get a sense of 3D art interactions, immersing ourselves in Medium, Tilt Brush, and the HoloLens to understand their current features and functionality. Surprisingly, the apps and headsets had no accessibility features and were completely unusable for someone with limited mobility. Here are some of our observations:

  • Drawing and selecting tools can currently be done only through handheld controllers.
  • Drawing by dictating coordinates with voice is not intuitive.
  • The state of natural language processing is not advanced enough to reliably transform a user’s commands into strokes.

Stakeholder Interviews

To help us decide how to develop our idea, we needed insights from experts in the field and from potential users. Since we were foraying into unexplored territory, we required new perspectives to better understand the interactions and complications involved. Among the people we interviewed were:

  • Todd Bryant, an NYU professor of VR, who helped us understand VR technicalities and pointed out that VR currently has no accessibility features
  • Serap Yigit, a User Experience Researcher at Google, who taught us about user research techniques and usability evaluation
  • Claire Kearney-Volpe, an NYU professor at the Ability Lab, who guided us to focus on multi-modal interactions and put us in touch with potential users at the ADAPT Community Network
  • Erica Wayne, an Account Manager at Tobii Pro, who helped us understand eye-tracking mechanisms
  • Peter Cobb, Director of the ADAPT Community Network, who explained how users with limited physical mobility currently create art
  • Members of the ADAPT Community Network with cerebral palsy, who were quite excited about a multi-modal tool for creating digital art

These interviews gave us real-world perspective on how users reacted to our idea and surfaced several key insights.

Personas

Based on our interview insights, we identified four unique personas to focus on for our project. Each persona had their own goals, motivations, and frustrations, as well as different mobility levels, which helped us make our design more universal.

Persona Spectrum

We also established a persona spectrum, which indicates the exclusions we would be solving for as they relate to each persona’s story. By designing for someone with a permanent disability, we also benefit people with temporary ailments or situational limitations. We defined various ways the solution could apply across multiple scenarios and contexts for different people with similar motivations.

Prototyping & Iteration

After understanding our persona spectrum and user flow, we proceeded to create basic prototypes to demonstrate and test our concept. To make the tool accessible, we had to empathize with our users and identify pain points and intuitive ways of interacting. We chose multi-modal interaction: since the user has limited hand mobility, we wanted to use voice and eye gestures to perform tasks, and we wanted to test the entire user flow with both eye interactions and voice to see which was more intuitive. We started by creating a video demonstrating how voice commands and eye gaze could operate the tool without hands. This prototype helped us communicate our idea, which was quite new and unfamiliar to many people.

Paper Prototype

Once we had insights from professionals, we compiled our product features and interactions and decided to build a paper prototype. The idea was to build something simple that we could take to our users and test. Taking cues from the video we shot earlier, we listed the features we wanted to include, focusing on common, easy-to-comprehend tools: drawing tools, system tools, and functionality tools. We then printed each tool icon on an A4 sheet, as shown in the pictures below.

User Testing

After our prototype was ready, we wanted a cheap and easy way to simulate eye tracking, and we concluded that a laser pen would be the most practical option. We attached the laser pen to a hat, so that when the person moved their head, the laser beam moved with it, showing the user where they were pointing on the paper prototype. Once the user gazed at a particular tool, the selected tool was highlighted by the blue-violet light. To draw, the user moved the laser point across the canvas while a student traced the trajectory with a marker. Some tools, such as the scale and the color palette, were made expandable using separate sheets of paper as pop-ups. We first tested with designers and artists who had normal hand movements and experience in VR as well as 2D and 3D drawing software, to get quick feedback on usability. We then planned to test with members of the ADAPT Community Network who had limited hand movement but were interested in art and the concept of drawing using eye movements.

Final Demonstration

After understanding our user journey, building our information architecture by reorganizing the tool structure, and finalizing our interactions, we proceeded to create the final high-fidelity prototype in the Unreal Engine. We also included an on-boarding AI assistant called Crafty to help new users get familiar with the interface; the assistant explained the features of each tool and was available whenever the user got stuck. The final prototype covered movement tracking, an interactable user interface, time-based gaze selection, a teleport function, a painting function, and environment switching. Together, these gave us a working prototype that users could interact with in the immersive environment.

On-boarding

To introduce users to this new way of creating art and to ease them into the application, we designed an AI assistant called Crafty. The on-boarding process introduces the interactions and the layout of the interface: Crafty guides the user through a few tools and gets them started drawing. The user can also engage Crafty in conversation to get tips. We believe many of our users will benefit from this assistant, especially those without prior experience with drawing software.

Movement tracking
Since we did not have an eye-tracking-ready VR headset, our interim solution was to approximate eye tracking with the head-mounted display (HMD) and track head movements in Unreal.
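
Conceptually, this head-gaze proxy reduces to reading the HMD pose each frame and ray-casting into the scene. We built it in Blueprints; the snippet below is a minimal Unreal C++ sketch of the same idea, where AGazePawn and its members are illustrative names, not the project’s.

    // Minimal sketch of head-gaze tracking, assuming an Unreal C++ project.
    // AGazePawn, CurrentGazeTarget, etc. are hypothetical names.
    #include "HeadMountedDisplayFunctionLibrary.h"

    void AGazePawn::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);

        // Read the current head pose reported by the headset.
        FRotator HeadRotation;
        FVector HeadPosition;
        UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition(HeadRotation, HeadPosition);

        // Cast a "gaze ray" forward from the head pose into the scene.
        const FVector Start = GetActorLocation() + HeadPosition;
        const FVector End = Start + HeadRotation.Vector() * 5000.f; // 50 m reach

        FHitResult Hit;
        bGazeHitValid = GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility);
        GazeHitLocation = bGazeHitValid ? Hit.Location : End;
        CurrentGazeTarget = bGazeHitValid ? Hit.GetActor() : nullptr; // consumed by gaze selection
    }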

Interactable user interface
Our paper prototype had given us a good idea of the user interface, so we rebuilt it in VR. We gave all the icons rounded-rectangle shapes so they would not be too sharp and would look nicer in VR. When a tool or function is selected, its icon changes from white to orange to give the user feedback. To ensure users can see the menu clearly without effort, we carefully placed it at a comfortable distance from the user.

Time-Based gaze selection
We implemented this function with Unreal Blueprints: a tool becomes active after the user gazes at its icon for one second, and a real-time waiting wheel indicates the dwell progress.
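
The Blueprint boils down to a dwell timer: accumulate gaze time while the same icon stays under the gaze ray, drive the waiting wheel with the progress, and fire the selection at the one-second mark. A hedged C++ equivalent, reusing the hypothetical names from the sketch above:

    // Sketch of the dwell-selection logic; UpdateWaitingWheel and
    // SelectTool stand in for the actual widget and menu code.
    void AGazePawn::UpdateDwell(float DeltaSeconds)
    {
        const float DwellThreshold = 1.f; // one second of steady gaze

        if (CurrentGazeTarget && CurrentGazeTarget == LastGazeTarget)
        {
            DwellSeconds += DeltaSeconds;
            UpdateWaitingWheel(DwellSeconds / DwellThreshold); // 0..1 radial progress

            if (DwellSeconds >= DwellThreshold)
            {
                SelectTool(CurrentGazeTarget); // also flips the icon white -> orange
                DwellSeconds = 0.f;
            }
        }
        else
        {
            DwellSeconds = 0.f; // gaze moved away: reset and hide the wheel
            UpdateWaitingWheel(0.f);
        }
        LastGazeTarget = CurrentGazeTarget;
    }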

Painting function
When the user selects the brush, a text prompt appears guiding them to use voice to start or stop drawing. Starting and stopping can also be triggered by a double-blink when eye tracking is enabled. The user can choose between several brushes by selecting strokes in the menu or by saying ‘change stroke type’.
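
Mechanically this is a toggle plus a point-append loop: the voice command flips a drawing flag, and while the flag is set, each frame’s gaze hit point extends the current stroke. A sketch under the same assumptions as above, with illustrative command strings and member names:

    // Sketch of the brush loop; names and commands are hypothetical.
    void AGazePawn::OnVoiceCommand(const FString& Command)
    {
        if (Command == TEXT("start drawing"))            bDrawing = true;
        else if (Command == TEXT("stop drawing"))        bDrawing = false;
        else if (Command == TEXT("change stroke type"))  CycleStrokeType();
    }

    void AGazePawn::UpdateBrush()
    {
        if (!bDrawing || !bGazeHitValid)
        {
            return;
        }
        // Skip points closer than 1 cm to keep the stroke mesh light.
        if (CurrentStroke.Num() == 0 ||
            FVector::Dist(CurrentStroke.Last(), GazeHitLocation) > 1.f)
        {
            CurrentStroke.Add(GazeHitLocation); // later rendered as a ribbon/spline
        }
    }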

Teleport function
Because we are building a 3D painting tool in a virtual reality environment, and because our users may not be able to move their bodies freely in the physical world, teleporting is an important function in our prototype. Teleporting instantly transports your virtual body to a position you select. In our prototype, after selecting the teleport icon, looking somewhere on the ground makes a blue circle appear at the point you are gazing at.
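
In code terms, the preview is a ground trace plus a marker, and the confirmation simply moves the pawn. A hedged sketch, again with illustrative names (TeleportMarker here plays the role of the blue circle):

    // Sketch of gaze teleport; members and helpers are hypothetical.
    void AGazePawn::UpdateTeleportPreview()
    {
        FHitResult Hit;
        const FVector Start = GetActorLocation();
        const FVector End = Start + GazeDirection * 5000.f;

        // Only accept roughly horizontal surfaces as teleport targets.
        if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility) &&
            Hit.Normal.Z > 0.7f)
        {
            TeleportMarker->SetWorldLocation(Hit.Location); // the blue circle
            TeleportMarker->SetVisibility(true);
            PendingTeleportLocation = Hit.Location;
        }
        else
        {
            TeleportMarker->SetVisibility(false);
        }
    }

    void AGazePawn::ConfirmTeleport() // e.g. bound to a voice command
    {
        SetActorLocation(PendingTeleportLocation + FVector(0.f, 0.f, PawnHalfHeight));
        TeleportMarker->SetVisibility(false);
    }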

Environment
We set the default environment to a night scene with stars and clouds overhead, so users can see their work clearly without the scene feeling too dark. We also built several other environments, such as a seaside and a forest, which users can switch between in the settings menu. The interface color changes accordingly to maintain visibility.
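
One plausible way to wire this up in Unreal is level streaming plus a per-scene menu tint; the sketch below reflects that assumption and is not the project’s actual implementation, with made-up level names.

    // Hypothetical sketch of environment switching via level streaming.
    #include "Kismet/GameplayStatics.h"

    void AGazePawn::SetEnvironment(FName LevelName, FLinearColor MenuTint)
    {
        FLatentActionInfo LatentInfo;
        LatentInfo.UUID = GetUniqueID(); // each latent action needs a unique id

        // Swap the streamed scene, e.g. "NightSky" (default), "Seaside", "Forest".
        UGameplayStatics::UnloadStreamLevel(this, CurrentEnvironment, LatentInfo, false);
        LatentInfo.UUID += 1;
        UGameplayStatics::LoadStreamLevel(this, LevelName, true, false, LatentInfo);
        CurrentEnvironment = LevelName;

        // Retint the menu so the icons stay readable against the new backdrop.
        MenuWidget->SetColorAndOpacity(MenuTint);
    }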

Visual Renders

Final Prototype

Next steps

We received good feedback on our prototype, but we aim to improve it further, especially in the following areas:

  • Use eye movements to draw
  • Incorporate voice functionality using machine learning and artificial intelligence
  • Use natural language processing for building an effective AI assistant
  • Test with more users and refine the user interface
  • Brainstorm on how to transform the drawing to 3D coordinates
  • Release it on the Oculus Store and Vive Store

We hope this tool will help people with limited mobility not only experience VR but also create interesting content and share it with the world.