We designed a new way of creating art in immersive realities. Our solution was a multi-modal tool called Craft that enabled people with limited mobility in their arms to draw in Virtual Reality using voice commands and eye movements, thereby removing the dependency on hand controllers.
Role: UX Research, Prototyping
Year: Mar 2018 – Nov 2018
Link: www.youtube.com
Microsoft Inclusive Design Challenge – Design a product, service, or solution that addresses exclusion in a deskless workplace by applying inclusive design principles, enabling people with disabilities to work in deskless environments and improving diversity in design.
Problem Statement – How might we design a tool for people with limited mobility in their arms to create art in Virtual Reality?
Solution – Our team designed a tool called ‘Craft’, built on principles of multi-modality, that allows people with limited mobility in their arms to create 3D art in Virtual Reality using voice commands and eye movements. Multi-modal human-computer interaction refers to interacting with the virtual and physical environment through natural modes of communication, such as speech and gaze.
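As a rough illustration of the multi-modal principle (a minimal C++ sketch with hypothetical names, not Craft's actual implementation), the snippet below pairs a spoken command, which supplies the action, with the most recent gaze position, which supplies the target:

```cpp
// Sketch of multi-modal input fusion: the voice channel supplies the
// *verb* ("draw", "erase", "undo") while the eye tracker supplies the
// *where*. All names here are illustrative assumptions.
#include <functional>
#include <map>
#include <string>

struct GazePoint { float x, y, z; };  // where the user is looking in the scene

class MultiModalController {
public:
    // Register an action for a spoken command, e.g. "draw" or "undo".
    void bind(const std::string& command,
              std::function<void(const GazePoint&)> action) {
        actions_[command] = std::move(action);
    }

    // Called by the eye tracker every frame.
    void onGazeUpdate(const GazePoint& gaze) { lastGaze_ = gaze; }

    // Called by the speech recognizer when a command is heard;
    // the action fires at the most recent gaze position.
    void onVoiceCommand(const std::string& command) {
        auto it = actions_.find(command);
        if (it != actions_.end()) it->second(lastGaze_);
    }

private:
    std::map<std::string, std::function<void(const GazePoint&)>> actions_;
    GazePoint lastGaze_{0, 0, 0};
};
```

The key design point is that neither modality alone is sufficient: voice is poor at specifying positions and gaze alone cannot express intent, so fusing them replaces the pointing-plus-trigger role a hand controller would otherwise play.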
Team: Cherisha Agarwal, Joanna Yen, Pratik Jain, Shimin Gu, Raksha Ravimohan, Srishti Kush
My role: I contributed to the UX design process by conducting user interviews and usability tests, analyzing the results, and documenting the insights, and I helped build several of our prototypes. I designed the personas and various other digital assets, created the style guide for our project booklet, and designed the entire booklet layout and graphics. I also led the team at the XR Startup Bootcamp for this project, where we analyzed the business model and product-market fit.
Tools used: Adobe InDesign, Adobe Photoshop, Adobe Illustrator, Adobe Premiere Pro, Sketch, Unreal
Since we had the freedom to create any product we wanted, we brainstormed different professions and disabilities to focus on, based on our interests and target audience. After deciding on the professions, we followed the process below:
To get more clarity on our idea, we decided to speak with artists and creative technologists to understand their workflows. These artists were mainly NYU students or working professionals in illustration, cinema, music, 3D modelling, graphic design, and game design. The students helped us identify pain points and the learning curve of existing tools. Their responses gave us insight into the tools and software they used and how solutions might be framed if the same software were used by people with disabilities. We also discussed multi-modal input as a means of interaction for people with limited hand mobility, and they pointed out where eye tracking could work within these tools.
The current market offers plenty of options for drawing in Virtual Reality, such as Tilt Brush, Quill, Medium, and Blocks by Google. Beyond these popular tools, newer options are now available, including MakeVR, Gravity Sketch, and Mozilla A-Painter. We put on VR headsets to get a sense of 3D art interactions, immersing ourselves in Medium, Tilt Brush, and the HoloLens to understand current features and functionality. Surprisingly, the apps and headsets meant for VR had no accessibility features and were completely unusable for someone with limited mobility. Here are some of our observations:
To help us understand how to develop our idea, we needed insights from experts in the field and potential users. Since we were foraying into unexplored territory, we required new perspectives to better understand the interactions and complications involved. Some of the people we interviewed were:
These interviews gave us real-world insight into how users reacted to our idea. Some of the key insights were as follows:
Based on our interview insights, we identified four unique personas to focus on for our project. Each persona had their own goals, motivations, frustrations, and mobility level. This helped us make our design more universal in nature.
We also established a persona spectrum, which indicates the exclusions we would be solving for as they relate to each persona's story. By designing for someone with a permanent disability, someone with a temporary ailment or situational limitation can also benefit. We defined various ways the solution could be applied across multiple scenarios and contexts for different people with similar motivations.
After understanding our persona spectrum and user flow, we proceeded to create basic prototypes to demonstrate and test our concept. To make the tool accessible, we had to empathize with our users and identify pain points and intuitive ways of interacting. Since our users have limited hand mobility, we chose multi-modal interaction, using voice and eye gestures to perform tasks. We wanted to experiment with the entire user flow using both eye interactions and voice, to see which was more intuitive. We started by creating a video demonstrating how voice commands and eye gaze could operate the tool without hands. This prototype helped us communicate our idea and make it understandable to our users, as it was a new and unfamiliar concept to many people.
After working on the video prototype, we compiled our product features and interactions. Once we had insights from professionals, we decided to build a paper prototype: something simple we could take to our users and test. We aimed for common, easy-to-comprehend tools and came up with a list that included drawing tools, system tools, and functionality tools. We then printed each tool icon on an A4 sheet, as shown in the pictures below.
Once our paper prototype was ready, we wanted a cheap and easy way to demonstrate eye tracking and concluded that a laser pen would be the most practical option. Our plan was to attach the laser pen to a hat so that when the person moved their head, the laser beam would move with it, showing the user where they were pointing on the paper prototype. Once the user gazed at a particular tool, the selected tool was highlighted using the blue-violet light. To draw, the user would move the laser point across the canvas while a student traced its trajectory with a marker. Some tools, such as the scale and color palette, were made expandable using separate sheets of paper as pop-ups. We decided to first test with designers and artists who had experience with VR as well as 2D and 3D drawing software and full hand mobility, to get quick feedback on usability. We then planned to test with members of the ADAPT community who had limited hand movement but were interested in art and the concept of drawing with eye movements.
After understanding our user journey, building our information architecture by reorganizing the tool structure, and finalizing our interactions, we proceeded to create the final high-fidelity prototype in Unreal Engine. We also mocked up an onboarding AI assistant called Crafty to help new users get familiar with the interface; the assistant explained the features and functionality of each tool and was available whenever the user got stuck. The final prototype covered movement tracking, an interactable user interface, time-based gaze selection, a teleport function, a painting function, and changing the environment. Together, these gave us a working prototype that users could interact with in the immersive environment.
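Time-based gaze selection is essentially a dwell timer: a tool is selected once the gaze ray has rested on it long enough, so no button press is needed. Below is a minimal sketch of that logic (in plain C++ under assumed names, not the team's actual Unreal Engine code):

```cpp
// Illustrative dwell timer for time-based gaze selection.
#include <string>

class DwellSelector {
public:
    explicit DwellSelector(float dwellSeconds) : dwellTime_(dwellSeconds) {}

    // Call once per frame with the name of the tool under the gaze ray
    // ("" when looking at empty space) and the frame's delta time.
    // Returns the tool's name on the single frame its dwell completes.
    std::string update(const std::string& gazedTool, float deltaSeconds) {
        if (gazedTool != currentTool_) {   // gaze moved: restart the timer
            currentTool_ = gazedTool;
            elapsed_ = 0.0f;
            fired_ = false;
            return "";
        }
        if (currentTool_.empty() || fired_) return "";
        elapsed_ += deltaSeconds;
        if (elapsed_ >= dwellTime_) {
            fired_ = true;                 // fire once per dwell
            return currentTool_;
        }
        return "";
    }

private:
    std::string currentTool_;  // tool under gaze on the previous frame
    float elapsed_ = 0.0f;     // seconds the gaze has rested on it
    float dwellTime_;          // dwell required to confirm a selection
    bool fired_ = false;       // prevents re-firing while gaze stays put
};
```

The dwell duration is the central tuning parameter: too short and users trigger tools accidentally just by looking around (the classic "Midas touch" problem), too long and selection feels sluggish.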
We received good feedback on our prototype, but we aim to improve it further, especially in the following areas:
We hope this tool will be extremely useful for people with limited mobility, not only to experience VR but also to create interesting content and share it with the world.
This project was chosen to be showcased at the NYC Media Lab Summit on Sep 20, 2018, where it received positive feedback and was well appreciated by viewers.
Our team also won a $10,000 grant for the NYC Media Lab XR Startup Bootcamp for this project. The bootcamp ran from September to November 2018 and helped us create a business model for the application, perform customer discovery, and analyze product-market fit. We spoke to 100+ people to gain insights and feedback on our concept.
We followed an evidence-based entrepreneurship model at the bootcamp, focused on customer discovery, rapid prototyping, and quick, responsive development. The goal was to create a repeatable and scalable business model that could have a significant impact on the world through virtual, augmented, and mixed reality. We followed the approaches given by:
We analyzed our project, its basic concept, and our vision for its future, and started creating the business model canvas. We filled in and refined the canvas every week as we spoke to more people and tested our hypotheses. Our final version of the business model canvas is shown below:
Throughout the bootcamp, we listed hypotheses to test each week, checking which of our assumptions held true after talking to customers. Some of our key hypotheses were as follows:
We visited the ADAPT Community Network, a community of people with cerebral palsy, and spoke to them about our project. We encouraged them to try out our application with the VR headset, and they loved the experience.
We also visited the Axis Project, a multidisciplinary center committed to providing high-quality services to people with physical disabilities. We asked people to interact with our application and recorded their feedback, which was extremely positive and encouraging; several said they were excited to purchase the application when it launched.
Key insights from the user testing sessions at Adapt Community and Axis Project were as follows:
We set up a demo of our application prototype at a number of events and conferences, including the following:
These opportunities helped us receive feedback and talk to potential partners, venture capitalists, and HMD manufacturers. The demos helped us better understand the gaps in our concept and how to address them.
At the end of the bootcamp, we pitched our concept at the Exploring Future Reality event in front of 300+ people, where we received overwhelmingly positive feedback. The pitch video for Craft can be seen below: