CRAFT – A new way of creating art in immersive realities
Microsoft Design Challenge
Design a product, service, or solution to solve for exclusion in a deskless workplace. The challenge prompt asked us to apply inclusive design principles to build a solution that enables people with disabilities to work in deskless workplaces, thereby improving diversity in design.
After exploring multiple options, our team decided to build a multi-modal tool called ‘Craft’ for people with physical disabilities to create 3D art in Virtual Reality. The idea was to build a tool that enabled people with limited mobility in their arms to draw using voice commands and eye movements.
Problem Statement – How might we design a multi-modal tool for people with limited mobility in their arms to create art in Virtual Reality?
Solution – We designed ‘Craft’, a tool built on principles of multi-modality that allows people to create art in Virtual Reality using voice commands and eye movements. Multi-modal human-computer interaction refers to interacting with virtual and physical environments through natural modes of communication.
We successfully completed the Microsoft Inclusive Design Challenge and went on to win a $10,000 grant for the NYC Media Lab XR Startup Bootcamp.
Team: Cherisha Agarwal, Joanna Yen, Pratik Jain, Shimin Gu, Raksha Ravimohan, Srishti Kush
My role: Working with a diverse team from different backgrounds and skill sets helped push the boundaries of this project to visualize an impactful idea and bring it to life. We all participated in the design process, from research and interviews to prototyping and testing. I contributed to the UX design process by conducting user interviews and testing, analyzing the results, and documenting the insights, and I helped build the various prototypes for our project. I designed the personas and various other digital assets, created the style guide for our project booklet, and designed the entire booklet layout and graphics. I also led the team at the XR Startup Bootcamp for this project concept, where we analyzed the business model and product-market fit.
Duration: Feb – April 2018, Sep – Dec 2018
Tools used: Sketch, Unreal, Adobe InDesign, Adobe Photoshop, Adobe Illustrator, Adobe Premiere Pro
Research & Brainstorming
Since we had the freedom to create any product we wanted, we listed different professions and disabilities that the team wanted to focus on based on our interests. After deciding on the professions, we followed this process:
- Listing all the tasks the person performs, to figure out the point at which a disability would prevent them from working normally
- Card sorting to identify problems we potentially wanted to work on. This helped narrow our ideas down to a few scenarios and disabilities, such as an artist with limited hand mobility and a cashier with physical disabilities using a kiosk.
- After discussing the ideas with our professor, Dana Karwas, and getting feedback from the Microsoft team, we decided to go ahead with the idea of artists with limited hand mobility trying to create drawings in VR.
Insights from Artists and Designers
To get more clarity on our idea, we spoke with artists and creative technologists to understand their workflows. These were mainly NYU students or working professionals in illustration, cinema, music, 3D modelling, graphic design, and games. The students helped us figure out the different pain points and the learning curve.
The responses gave us insight into the tools and software they used and how solutions might be framed if the same software were to be used by people with disabilities. We also discussed multi-modal input as a means of interaction for people with limited hand mobility, and they pointed out where eye tracking could work in these tools.
The current market is flooded with options for drawing art in Virtual Reality, such as Tilt Brush, Quill, Medium, and Blocks by Google. Apart from these popular tools, many other up-and-coming options are now available, including MakeVR, Gravity Sketch, and Mozilla A-Painter.
We tried out VR headsets to get a sense of 3D art interactions in Virtual Reality, immersing ourselves in Medium, Tilt Brush, and the HoloLens to understand their current features and functionality. Surprisingly, the apps and headsets had no accessibility features and were completely unusable for someone with limited mobility. Here are some of our observations:
- Drawing and selection of tools can be done only through hand held controllers at present
- Drawing using voice by mentioning coordinates is not intuitive
- The state of natural language processing is not advanced enough to reliably transform the user’s commands into strokes
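The second and third observations can be made concrete with a hypothetical sketch of how a recognized utterance might be mapped to stroke parameters. The grammar, function name, and stroke fields below are our own assumptions for illustration, not part of any existing tool; they mainly show how verbose and brittle coordinate-based voice drawing becomes.

```python
import re
from typing import Optional

# Hypothetical grammar: "draw <shape> at <x> <y> size <n>".
# Even this tiny vocabulary forces users to speak raw coordinates.
COMMAND_PATTERN = re.compile(
    r"draw (?P<shape>\w+) at (?P<x>-?\d+) (?P<y>-?\d+) size (?P<size>\d+)"
)

def parse_command(utterance: str) -> Optional[dict]:
    """Map a recognized utterance to stroke parameters, or None if unparseable."""
    match = COMMAND_PATTERN.fullmatch(utterance.strip().lower())
    if match is None:
        return None  # anything off-grammar is silently rejected
    return {
        "shape": match["shape"],
        "x": int(match["x"]),
        "y": int(match["y"]),
        "size": int(match["size"]),
    }
```

Any phrasing outside the rigid grammar fails, which is exactly why we moved toward eye gaze for spatial input and kept voice for discrete commands.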
To help us understand how we should develop our idea, we needed to get insights from experts in the field and potential users. Since we were foraying into an unexplored territory, we required new perspectives to better understand the interactions and complications involved. Some of the interviews we did were with:
- Todd Bryant, NYU Professor for VR who helped us understand VR technicalities and also pointed out there was no accessibility currently in VR
- Serap Yigit, a User Experience Researcher at Google who helped us learn about user research techniques and usability functionalities
- Claire Kearney-Volpe, NYU Professor at the Ability Lab who guided us to focus on multi-modal interactions and put us in touch with potential users at Adapt Community Network
- Erica Wayne, Account Manager at Tobii Pro who helped us understand the eye tracking mechanisms
- Peter Cobb, Director of Adapt Community Network who explained how users with limited physical mobility currently create art
- Adapt Community members who have cerebral palsy and were quite excited about our multi-modal tool for creating digital art
These interviews gave us a real-world sense of how users reacted to our idea.
Based on our interview insights, we identified four unique personas to focus on for our project. Each persona had their own goals, motivations, frustrations, and mobility levels, which helped us make our design more universal in nature.
We also established a persona spectrum, which indicates the exclusions we are solving for as they relate to each persona’s story. By designing for someone with a permanent disability, we also benefit someone with a temporary ailment or a situational limitation. We defined various ways the solution can be applied across scenarios and contexts for different people with similar motivations.
Prototyping & Iteration
After understanding our persona spectrum and user flow, we created basic prototypes to demonstrate and test our concept. To make the tool accessible, we had to empathize with our users and identify pain points and intuitive ways of interacting. Since the user has limited hand mobility, we opted for multi-modal interaction, using voice and eye gestures to perform tasks. We wanted to experiment with the entire user flow using both eye interactions and voice to see which was more intuitive. We started by creating a video demonstrating how voice commands and eye gaze can operate the tool without hands. This prototype helped us communicate our idea, which was new and unfamiliar to many people.
Once we had insights from professionals, we decided to build a paper prototype: something simple that we could take to our users and test. Taking cues from the video we shot earlier, we compiled the product features and interactions we wanted to include, looking for tools that were common and easy to comprehend. We came up with a list that included drawing tools, system tools, and functionality tools, then printed each tool icon on an A4 sheet as shown in the pictures below.
With the prototype ready, we wanted a cheap and easy way to simulate eye tracking, and concluded that a laser pen would be the most practical option. Our plan was to attach the laser pen to a hat so that when the person moved their head, the laser beam would move with it, showing the user where they were pointing on the paper prototype. Once the user gazed at a particular tool, the selected tool was highlighted by the blue-violet light. To draw, the user would move the laser point across the canvas while a student traced the trajectory with a marker. Some tools, such as the scale and color palette, were made expandable using separate sheets of paper as pop-ups. We first tested it with designers and artists experienced in VR as well as 2D and 3D drawing software, using normal hand movements, to get quick feedback on usability. We then planned to test it with members of the ADAPT community who had limited hand movement but were interested in art and the concept of drawing with eye movements.
After mapping our user journey, building our information architecture by reorganizing the tool structure, and finalizing our interactions, we created the final high-fidelity prototype in the Unreal Engine. We also included an onboarding AI assistant called Crafty to help new users get familiar with the interface; the assistant walked users through the features and functionality of each tool and was available if they got stuck at any point. The final prototype included movement tracking, an interactable user interface, time-based gaze selection, a teleport function, a painting function, and the ability to change the environment. Together, these gave us a working prototype that users could interact with in the immersive environment.
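The time-based gaze selection mentioned above can be sketched as a simple dwell timer: a tool is selected only once the gaze has rested on it for a threshold duration. This is a minimal Python illustration rather than our actual Unreal implementation; the class name and the 1.5-second threshold are assumptions to be tuned through user testing.

```python
DWELL_SECONDS = 1.5  # assumed dwell threshold; tune per user testing

class GazeSelector:
    """Select a UI tool once the gaze has rested on it long enough (dwell selection)."""

    def __init__(self, dwell_seconds: float = DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self._target = None   # tool currently under the gaze
        self._since = None    # timestamp when the gaze landed on it

    def update(self, target, now: float):
        """Feed the tool under the gaze each frame; returns the tool once dwell completes."""
        if target != self._target:
            # Gaze moved to a new tool (or off any tool): restart the dwell timer.
            self._target = target
            self._since = now
            return None
        if target is not None and now - self._since >= self.dwell_seconds:
            self._since = now  # reset so the same tool isn't re-selected every frame
            return target
        return None
```

Calling `update` once per frame with the current gaze target and timestamp yields `None` until the dwell completes, which also naturally filters out brief, accidental glances.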
We have received good feedback on our prototype, but we aim to keep improving it, especially in the following areas:
- Use eye movements to draw
- Incorporate voice functionality using machine learning and artificial intelligence
- Use natural language processing for building an effective AI assistant
- Test with more users and refine the user interface
- Brainstorm on how to transform the drawing to 3D coordinates
- Release it on the Oculus Store and Vive Store
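One possible starting point for the drawing-to-3D-coordinates question above is a simple ray–plane intersection: cast the gaze as a ray from the viewer and intersect it with a flat canvas plane to obtain a 3D hit point. This is only a sketch of one candidate approach; the function name and plane representation are illustrative assumptions, and a real stroke would need smoothing and depth control on top of it.

```python
def gaze_to_canvas(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray with a flat canvas plane; return the 3D hit point or None.

    origin/direction define the gaze ray; plane_point/plane_normal define the canvas.
    All arguments are (x, y, z) tuples.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # gaze ray parallel to the canvas: no intersection
    diff = tuple(p - o for p, o in zip(plane_point, origin))
    t = dot(diff, plane_normal) / denom
    if t < 0:
        return None  # canvas is behind the viewer
    return tuple(o + t * d for o, d in zip(origin, direction))
```

Sampling this intersection every frame while the user draws would trace the gaze path onto the canvas plane in 3D space.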
We hope this tool will be extremely useful for people with limited mobility, enabling them not only to experience VR but also to create interesting content and share it with the world.
NYC Media Lab
This project was chosen to be showcased at the NYC Media Lab Summit on Sep 20, 2018, where we received positive feedback and the project was well appreciated by the audience.
XR Startup Bootcamp by NYC Media Lab
Our team also won a $10,000 grant for the NYC Media Lab XR Startup Bootcamp for this project. The bootcamp ran from Sep to Nov 2018 and helped us create a business model for the application, perform customer discovery, and analyze product-market fit. We spoke to 100+ people to gain insights and get feedback on our concept.
At the bootcamp we followed an evidence-based entrepreneurship model focused on customer discovery, rapid prototyping, and quick, responsive development. The goal was to create a repeatable and scalable business model that could have a huge impact on the world through virtual, augmented, and mixed reality.
We followed the approaches given by:
- The Startup Owner's Manual: Steve Blank, Bob Dorf
- Business Model Generation: Alexander Osterwalder
- Customer Discovery Vimeo Videos: Lean Launchpad
- How to build a startup on Udacity: Lean Launchpad with Steve Blank, Kathleen Mullaney
Business Model Canvas
We analyzed our project, its basic concept, and our vision for its future, and started creating the business model canvas. It was filled in and refined every week as we spoke to more people and tested our hypotheses. Our final version of the business model canvas is shown below:
Hypotheses to Test
During the course of the entire bootcamp, we listed out hypotheses to test on a weekly basis to see which of our assumptions were true after talking to customers. Some of our key hypotheses to test were as follows:
- People with both permanent and temporary limited limb mobility will be comfortable drawing art in VR
- The transition from using hand controllers to eye gestures and voice commands will be a seamless experience
- Our tool would be needed by all our targeted customer segments
- The customer will find the tool easy and useful
- Therapy Institutes, Disability Centers and Hospitals will be our B2B clients and they will be fine with the pricing model
- Children with disabilities will be comfortable using our tool
- HMD Manufacturers will be willing to partner with us
- Drawing with other people in VR would be engaging for our users
- Product needs to be customized for specific customer segments
- Influencers will be willing to promote our product
- In-App Store will create employment opportunities
Key Considerations for Early Version Prototype
Testing with Potential Users
We visited the Adapt Community Network, a powerful community of people with cerebral palsy, and spoke to them about our project. We encouraged them to try our application with the VR headset, and they loved the experience.
We also visited the Axis Project, a multidisciplinary center committed to providing high-quality services for those with physical disabilities. We asked people to interact with our application and recorded their feedback, which was extremely positive and encouraging; they were excited about purchasing the application when launched.
What we Learnt
After talking to customers, we were able to verify some of our hypotheses and learn something new as well. Some of our key learnings were as follows:
- We found out that a lot of people use Tilt Brush for fun or stress-relief
- Eye tracking can be a good way for people with limited limb mobility to interact
- Lack of space is a big limitation to enjoying VR
- If effectively developed, the concept could have interesting applications in other avenues
- There is a need to define accessibility guidelines for VR
- We should include emergency/help feature in the application
- Fluidity of eye gazing should be taken into consideration
- Cognitive dissonance and limitations of users must be kept in mind
- Care centers will invest in devices for the entire community to promote recreational activities
- The first experience with VR for the aged was incredible and super exciting
- The interface should be more intuitive for our users, more environments should be added and we should explore other areas apart from Art
- Customers would not mind investing in this product if it's useful to them
- Products like this could change the facet of accessibility in VR
- Users would like to interact with others in the VR environment
- Children with disabilities would adapt better to our product
- People with spinal cord injuries might have an issue in using the tool for long because of head movement
- Voice command needs to recognize different dialects and accents
- Old age homes would definitely invest once we incorporate more possibilities
- Hospitals would be open to using this on a subscription model
- Able-bodied Tilt Brush artists will be willing to pay for it, as the multiple forms of interaction will be useful when working long hours
- Disability Centers invest a lot of money in assistive technology
- Rehabilitation centers are interested in VR but want to use it for other purposes as well
- Elderly care centers would be willing to invest money in our product for recreational uses
- Parents of children with limb disabilities will be keen to buy our product for their children
- Subscription Model will be a more sustainable pricing model for us
- We should include tutorials to help people use the app effectively
We created four buckets – physical, financial, human and intellectual – to categorize all the resources that would be part of our project and to understand how they integrate with the business model canvas.
Demo at Events
We setup a demo of our application prototype at a number of events and conferences including the following:
- Samsung NEXT Office Visit
- NYC Media Lab Summit
- Verizon 5G Labs Happy Hour Evening
- NYVR Expo at the Javits Center
- Executive Breakfast & MVP Science Fair at BCG Digital Ventures
- Exploring Future Reality
These opportunities helped us receive feedback as well as talk to potential partners, venture capitalists and HMD manufacturers. These demos helped us better understand the gaps in our concept and how we could improve them.
- 3 months: Launch our website, release a beta version and apply for funding
- 6 months: Partner with HMD manufacturers, release accessibility guidelines for VR and raise seed funding
- 9 months: Launch an in-app marketplace to sell art, hire marketing & sales professionals and grow our client base
Presenting Pitch at Exploring Future Reality
At the end of the bootcamp, we presented our concept pitch at the Exploring Future Reality event in front of 300+ people, where we received immensely positive feedback. The pitch video for Craft can be seen below: