CRAFT

We designed a new way of creating art in immersive realities. Our solution was a multi-modal tool called Craft that enabled people with limited mobility in their arms to draw in Virtual Reality using voice commands and eye movements, thereby removing the dependency on hand-held controllers.

Role: UX Research, Prototyping
Year: Mar 2018 – Nov 2018
Link: www.youtube.com

Microsoft Inclusive Design Challenge – Design a product, service, or solution that addresses exclusion in a deskless workplace by applying inclusive design principles, enabling people with disabilities to work in deskless workplaces and improving diversity in design.

Problem Statement – How might we design a tool for people with limited mobility in their arms to create art in Virtual Reality?

Solution – Our team designed a tool called ‘Craft’, built on principles of multi-modality, that allows people with limited mobility in their arms to create 3D art in Virtual Reality using voice commands and eye movements. Multi-modal human-computer interaction refers to interacting with the virtual and physical environment through natural modes of communication such as speech and gaze.

We successfully completed the Microsoft Inclusive Design Challenge and went on to win a $10,000 grant through the NYC Media Lab XR Startup Bootcamp.

Team: Cherisha Agarwal, Joanna Yen, Pratik Jain, Shimin Gu, Raksha Ravimohan, Srishti Kush

My role: I contributed to the UX design process by conducting user interviews and usability testing, analyzing the results, and documenting the insights, and I helped build the various prototypes for our project. I designed the personas and several other digital assets, and I was responsible for the style guide for our project booklet as well as the entire booklet layout and graphics. I also led the team at the XR Startup Bootcamp for this project concept, where we analyzed the business model and product-market fit.

Tools used: Adobe InDesign, Adobe Photoshop, Adobe Illustrator, Adobe Premiere Pro, Sketch, Unreal Engine


DESIGN PROCESS

Research & Brainstorming

Since we had the freedom to create any product we wanted, we came up with different professions and disabilities the team could focus on, based on our interests and target audience. After deciding on the professions, we followed this process:

  • Listing all the tasks a person performs, to identify the point at which a disability would prevent them from working normally
  • Card sorting to surface problems we potentially wanted to work on, which narrowed our ideas down to three scenarios and disabilities: an artist with limited hand mobility, a cashier with physical disabilities using a kiosk, and a blind tour guide finding his way through a museum
  • After discussing the ideas with our Professor Dana Karwas and getting feedback from the Microsoft team, we decided to go ahead with helping artists with limited limb mobility create drawings in VR

Insights from Artists and Designers

To get more clarity on our idea, we spoke with artists and creative technologists to understand their workflows. These were mainly NYU students or working professionals in illustration, cinema, music, 3D modelling, graphic design, and game design. They helped us identify pain points and the learning curve of existing tools, and their responses gave us a clear picture of the tools and software in use and how solutions might be framed if the same software were used by people with disabilities. We also discussed multi-modal input as a means of interaction for people with limited hand mobility, and they pointed out where eye tracking could work within this software.

Secondary Research

The current market is flooded with options for drawing in Virtual Reality, including Tilt Brush, Quill, Medium, and Blocks by Google, as well as newer tools such as MakeVR, Gravity Sketch, and Mozilla A-Painter. We put on VR headsets to get a sense of 3D art interactions, immersing ourselves in Medium, Tilt Brush, and HoloLens to understand the current features and functionality. Surprisingly, the apps and headsets built for VR had no accessibility features and were completely unusable for someone with limited mobility. Here are some of our observations:

  • Drawing and tool selection can currently be done only through hand-held controllers
  • Drawing by dictating coordinates with voice is not intuitive
  • The state of natural language processing is not yet advanced enough to reliably transform a user’s commands into strokes

STAKEHOLDER INTERVIEWS

To decide how we should develop our idea, we needed insights from experts in the field and from potential users. Since we were foraying into unexplored territory, we needed new perspectives to better understand the interactions and complications involved. Among the people we interviewed were:

  • Todd Bryant, NYU Professor of VR, who helped us understand VR technicalities and pointed out that VR currently had no accessibility support
  • Serap Yigit, a User Experience Researcher at Google, who taught us about user research techniques and usability evaluation
  • Claire Kearney-Volpe, NYU Professor at the Ability Lab, who guided us to focus on multi-modal interactions and put us in touch with potential users at Adapt Community Network
  • Erica Wayne, Account Manager at Tobii Pro, who helped us understand eye tracking mechanisms
  • Peter Cobb, Director of Adapt Community Network, who explained how users with limited physical mobility currently create art
  • Adapt Community members with cerebral palsy, who were quite excited about a multi-modal tool for creating digital art

These interviews gave us a real-world sense of how users reacted to our idea. Some of the key insights were as follows:

Personas

Based on our interview insights, we identified four unique personas to focus on for our project. Each persona had their own goals, motivations, and frustrations, as well as different mobility levels, which helped us make our design more universal in nature.

PERSONA SPECTRUM

We also established a persona spectrum, which indicates the exclusions we would be solving for as they relate to each persona’s story. By designing for someone with a permanent disability, we also benefit someone with a temporary ailment or a situational limitation. We defined various ways the solution could be applied across multiple scenarios and contexts for different people with similar motivations.

PROTOTYPING & ITERATION

After understanding our persona spectrum and user flow, we proceeded to create basic prototypes to demonstrate and test our concept. To make the tool accessible, we had to empathize with our users and identify pain points and intuitive ways of interacting. Since the user has limited hand mobility, we opted for multi-modal interaction, using voice and eye gestures to perform tasks. We wanted to test the entire user flow with both eye interactions and voice to see which was more intuitive. We started by creating a video demonstrating how voice commands and eye gaze can operate the tool without hands; this prototype helped us communicate the idea to our users, as it was a new and unfamiliar concept to many people.
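To make the voice half of this interaction concrete, here is a minimal sketch of how recognized voice commands might be routed to tool actions. This is an illustration under assumptions, not our prototype's actual code: the command phrases, the actions, and the dispatcher are hypothetical, and in the real tool the recognized phrase would come from a speech recognizer while eye gaze supplies the spatial input the commands act on.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical sketch: route recognized voice phrases to tool actions
// so that drawing never depends on hand-held controllers.
int main() {
    std::unordered_map<std::string, std::function<void()>> commands = {
        {"start drawing", [] { std::cout << "Pen down at current gaze point\n"; }},
        {"stop drawing",  [] { std::cout << "Pen up\n"; }},
        {"undo",          [] { std::cout << "Undo last stroke\n"; }},
        {"change color",  [] { std::cout << "Open the color palette\n"; }},
    };

    // In the real tool this string would come from a speech recognition
    // service; here we simulate a recognized phrase.
    std::string recognized = "start drawing";
    if (auto it = commands.find(recognized); it != commands.end()) {
        it->second();  // execute the mapped action
    } else {
        std::cout << "Unrecognized command: " << recognized << "\n";
    }
}
```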

Paper Prototype

After working on the video prototype, we compiled our product features and interactions. Once we had insights from professionals, we decided to build a paper prototype: something simple we could take to our users and test. We wanted to include common, easy-to-comprehend tools, and came up with a list covering drawing tools, system tools, and functionality tools. We then printed each tool icon on an A4 sheet, as shown in the pictures below.

USER TESTING

Once our prototype was ready, we wanted a cheap and easy way to simulate eye tracking, and concluded that a laser pen would be the most practical option. Our plan was to attach the laser pen to a hat so that when the person moved their head, the laser beam moved with it, showing the user where they were pointing on the paper prototype. When the user gazed at a particular tool, the selected tool was highlighted by the blue-violet light. To draw, the user moved the laser point across the canvas while a student traced the trajectory with a marker. Some tools, such as the scale and color palette, were made expandable using separate sheets of paper as pop-ups. We first tested with designers and artists who had experience with VR as well as 2D and 3D drawing software and normal hand movement, to get quick usability feedback. We then planned to test with members of the ADAPT community who had limited hand movement but were interested in art and in the concept of drawing with eye movements.

FINAL DEMONSTRATION

After understanding our user journey, building our information architecture by reorganizing the tool structure, and finalizing our interactions, we proceeded to create the final high-fidelity prototype in Unreal Engine. We also mocked up an onboarding AI assistant called Crafty to help new users get familiar with the interface; the assistant walked the user through the features and functionality of each tool and was available whenever the user got stuck. The final prototype included movement tracking, an interactable user interface, time-based gaze selection, a teleport function, a painting function, and the ability to change the environment. Together these gave us a working prototype that users could interact with in the immersive environment.
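To illustrate how time-based gaze selection can work in principle, here is a minimal C++ sketch. It is not our actual Unreal Engine implementation; the DwellSelector class and its names are hypothetical. The idea is that a dwell timer accumulates while the gaze ray stays on the same interface element, and the element is selected once the dwell threshold is reached.

```cpp
#include <iostream>
#include <optional>
#include <string>

// Hypothetical sketch of time-based (dwell) gaze selection.
class DwellSelector {
public:
    explicit DwellSelector(float dwellSeconds) : threshold(dwellSeconds) {}

    // Call every frame with the ID of the element the gaze ray hits
    // (std::nullopt if the user is looking at empty space). Returns
    // the ID of a newly selected element if the dwell just completed.
    std::optional<std::string> update(const std::optional<std::string>& gazeTarget,
                                      float deltaSeconds) {
        if (gazeTarget != currentTarget) {
            currentTarget = gazeTarget;  // gaze moved: restart the dwell timer
            elapsed = 0.0f;
            return std::nullopt;
        }
        if (!currentTarget) return std::nullopt;
        elapsed += deltaSeconds;
        if (elapsed >= threshold) {
            elapsed = 0.0f;            // re-arm so the user can select again
            return currentTarget;      // dwell complete: fire the selection
        }
        return std::nullopt;
    }

private:
    float threshold;
    float elapsed = 0.0f;
    std::optional<std::string> currentTarget;
};

int main() {
    DwellSelector selector(1.0f);  // select after one second of steady gaze
    // Simulate 72 frames (~1.2 s at 60 fps) of gazing at one tool icon.
    for (int frame = 0; frame < 72; ++frame) {
        if (auto selected = selector.update(std::string("brush_tool"), 1.0f / 60.0f)) {
            std::cout << "Selected: " << *selected << "\n";
        }
    }
}
```

Resetting the timer whenever the gaze moves prevents accidental selections, and the dwell threshold trades off selection speed against the well-known "Midas touch" problem of gaze interfaces, where everything the user looks at gets activated.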

Next Steps

We received good feedback on our prototype, but we aim to improve it further, especially in the following areas:

  • Use eye movements to draw
  • Incorporate voice functionality using machine learning and artificial intelligence
  • Use natural language processing for building an effective AI assistant
  • Test with more users and refine the user interface
  • Brainstorm on how to transform the drawing to 3D coordinates
  • Release it on the Oculus Store and Vive Store

We hope this tool will help people with limited mobility not only experience VR but also create interesting content and share it with the world.

NYC MEDIA LAB

This project was chosen to be showcased at the NYC Media Lab Summit on Sep 20, 2018, where it was well received and we got positive feedback from attendees.

XR Startup Bootcamp by NYC Media Lab

Our team also won a $10,000 grant through the NYC Media Lab XR Startup Bootcamp for this project. The bootcamp ran from September to November 2018 and helped us create a business model for the application, perform customer discovery, and analyze product-market fit. We spoke to 100+ people to gain insights and feedback on our concept.

Lean Methodology

At the bootcamp we followed an evidence-based entrepreneurship model focused on customer discovery, rapid prototyping, and quick, responsive development. The goal was to create a repeatable and scalable business model that could have a large impact on the world through virtual, augmented, and mixed reality. We followed the approaches laid out in:

  • The Startup Owner's Manual – Steve Blank and Bob Dorf
  • Business Model Generation – Alexander Osterwalder
  • Customer Discovery videos on Vimeo – Lean LaunchPad
  • How to Build a Startup on Udacity – Lean LaunchPad with Steve Blank and Kathleen Mullaney

Business Model Canvas

We analyzed our project, its basic concept, and our vision for its future, and started creating the business model canvas. We filled it in and refined it every week as we spoke to more people and tested our hypotheses. Our final version of the business model canvas is shown below:

Hypotheses to Test

Throughout the bootcamp, we listed hypotheses to test each week, checking which of our assumptions held true after talking to customers. Some of our key hypotheses were:

  • People with both permanent and temporary limited limb mobility will be comfortable drawing art in VR
  • The transition from using hand controllers to eye gestures and voice commands will be a seamless experience
  • Our tool would be needed by all our targeted customer segments and they will find the tool easy and useful
  • Therapy Institutes, Disability Centers and Hospitals will be our B2B clients and they will be fine with the pricing model
  • HMD Manufacturers will be willing to partner with us and Influencers will be willing to promote our product
  • Drawing with other people in VR would be engaging for our users and children with disabilities will also be comfortable using our tool
  • The product needs to be customized for specific customer segments, and an in-app store will create employment opportunities

Key Considerations for Early Version Prototype

Testing with Potential Users

We visited the Adapt Community Network, a powerful community of people with cerebral palsy, and spoke to them about our project. We encouraged them to try out our application with the VR headset, and they loved the experience.

We also visited the Axis Project, a multidisciplinary center committed to providing high-quality services for people with physical disabilities. We asked people to interact with our application and recorded their feedback, which was extremely positive and encouraging; several were excited about purchasing the application once it launched.

Key insights from the user testing sessions at Adapt Community and Axis Project were as follows:

  • Eye tracking can be a good way for people with limited limb mobility to interact
  • Lack of space is a big limitation to enjoying VR
  • There is a need to define accessibility guidelines for VR
  • Fluidity of eye gazing should be taken into consideration
  • Cognitive dissonance and limitations of users must be kept in mind
  • Care centers will invest in devices for the entire community to promote recreational activities
  • The first VR experience for elderly users was incredible and exciting
  • The interface should be more intuitive for our users, more environments should be added and we should explore other areas apart from Art
  • Customers would not mind investing in this product if it's useful to them
  • Products like this could change the facet of accessibility in VR
  • Users would like to interact with others in the VR environment
  • Children with disabilities would adapt better to our product
  • People with spinal cord injuries might have an issue in using the tool for long because of head movement
  • Voice command needs to recognize different dialects and accents
  • Old age homes would definitely invest once we incorporate more possibilities
  • Able-bodied Tilt Brush artists will be willing to pay for it, as the multiple forms of interaction will be useful when working long hours
  • Disability Centers invest a lot of money in assistive technology
  • Elderly care centers would be willing to invest money in our product for recreational uses
  • Parents of children with limb disabilities will be keen to buy our product for their children
  • We should include tutorials to help people use the app effectively 

Demo at Events

We set up a demo of our application prototype at a number of events and conferences, including the following:

  • Samsung NEXT Office Visit
  • NYC Media Lab Summit
  • Verizon 5G Labs Happy Hour Evening
  • NYVR Expo at the Javits Center
  • Executive Breakfast & MVP Science Fair at BCG Digital Ventures
  • Exploring Future Reality

These opportunities let us gather feedback and talk to potential partners, venture capitalists, and HMD manufacturers, and the demos helped us better understand gaps in our concept and how to close them.

Presenting Pitch at Exploring Future Reality

At the end of the bootcamp, we pitched our concept at the Exploring Future Reality event in front of 300+ people, where we received overwhelmingly positive feedback. The pitch video for Craft can be seen below:

