We started creating Spaatify with a grand vision of enhancing the experience customers have in high-end day spas. We wanted to give customers a more personalized experience, one specifically tailored to their needs.
After iterative testing and design, we put the final touches on the prototype in three main areas: the tutorial, content browsing, and gestures and lighting.
Through user testing, we found that users often had a difficult time understanding what the actions should look like, despite having a walkthrough tutorial. Because gesture recognition can be a little tricky and often requires fairly precise user input, we decided that a more robust tutorial system was necessary. Another interesting observation from testing was that some users didn't quite get the gestures right even after the tutorial, but once we demoed the correct gesture to them, they were able to perform it much more accurately.
Based on this, we decided to use a real-person demo instead of cartoon animated actions. This way, the user can see exactly what the motion should look like with respect to their own body and follow the gestures more naturally.
While showing our demo to our coach, Doug, we really tried to explore the idea of usability and whether the items displayed on the screen were contextually intuitive. What do we mean by contextually intuitive? We mean that the information we provide on the screen is relevant to the actions the user is making, and, conversely, that the screen doesn't show information that is irrelevant to the actions and gestures the user can actually perform. In other words, we wanted to make sure there was no way for the user to be confused about what each action maps to.
Cover Flow descriptions
To strengthen the connection between the cover flow and the category, i.e., to make users more aware that the cover flow corresponds to the category they've selected, we added each category's icon to the cover flow description. In addition, we darken the cover flow view while the category selection bar is active. This lets the user know that the gestures they perform are not controlling the cover flow directly, even though the cover flow's videos update as the user changes categories. The focus logic behind this is sketched below.
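As a rough illustration (not our actual implementation; the class and method names here are made up for the sketch), the behavior boils down to dimming the cover flow and rerouting gestures whenever the category bar has focus:

```python
class BrowseScreen:
    """Minimal sketch of the category-bar / cover-flow focus logic."""

    DIMMED_OPACITY = 0.4   # assumed value; in practice this is tuned by eye
    FULL_OPACITY = 1.0

    def __init__(self, categories):
        self.categories = categories          # e.g. ["Relaxation", "Nature", ...]
        self.selected_index = 0
        self.category_bar_active = False
        self.coverflow_opacity = self.FULL_OPACITY

    def activate_category_bar(self):
        # Darken the cover flow so the user knows their gestures now
        # drive the category bar, not the cover flow itself.
        self.category_bar_active = True
        self.coverflow_opacity = self.DIMMED_OPACITY

    def deactivate_category_bar(self):
        self.category_bar_active = False
        self.coverflow_opacity = self.FULL_OPACITY

    def on_swipe(self, direction):
        """direction is +1 (right) or -1 (left)."""
        if self.category_bar_active:
            # The gesture moves the category selection; the cover flow below
            # still refreshes to show videos for the newly selected category.
            self.selected_index = (self.selected_index + direction) % len(self.categories)
            self.reload_coverflow(self.categories[self.selected_index])
        else:
            self.scroll_coverflow(direction)

    def reload_coverflow(self, category):
        print(f"loading videos for {category} (category icon shown in the description)")

    def scroll_coverflow(self, direction):
        print(f"scrolling cover flow {'right' if direction > 0 else 'left'}")
```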
Provide feedback on invalid gesture
After having used Spaatify for a while, users often try to swipe right or left during playback, hoping to move to the next video or simply leave full-screen mode, forgetting that they must join their hands together. As a result, we decided to provide feedback when users accidentally perform one of these gestures (swipe right or swipe left): a reminder that, in order to stop the video, they must join their hands together.
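A minimal sketch of that feedback path, assuming a simple gesture dispatcher and a hypothetical show_hint helper (these names are illustrative, not our actual code):

```python
def handle_gesture(gesture, player):
    """Route a detected gesture while a video is playing in full-screen mode.

    `gesture` is one of "swipe_left", "swipe_right", "join_hands";
    `player` is any object with stop() and show_hint() methods.
    """
    if gesture == "join_hands":
        player.stop()                      # the only gesture that exits playback
    elif gesture in ("swipe_left", "swipe_right"):
        # Swipes do nothing in full-screen mode, so remind the user how to
        # actually stop the video instead of silently ignoring the gesture.
        player.show_hint("Join your hands together to stop the video")
```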
One common experience everyone has with natural user interfaces is a higher chance of failed inputs or input overload. With the Kinect, we had increased the sensitivity of gesture detection, so we often had an overload of detected gestures that led to some bad user interactions. Specifically, a user might mean to swipe the cover flow left, but because their hand momentarily moves right, the cover flow would move right, then left, then right in a continuous loop. As a result, the user would have to perform some odd, unnatural gestural behavior to break out of this detection loop. Per input from our coach, Doug, we decided to implement some noise reduction. After detecting a gesture, we now 1) disable gesture detection until the actions related to that gesture have completed, and 2) wait until the on-screen animation has finished before allowing further gestures. This minor change made the experience noticeably more fluid by cutting down on spurious detections.
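In pseudocode terms, the noise reduction is just a lock around gesture handling that stays closed until both the triggered action and its on-screen animation have finished. A minimal sketch under those assumptions (names are illustrative):

```python
import threading

class GestureGate:
    """Ignore new gestures while the previous one is still being acted on."""

    def __init__(self):
        self._busy = False
        self._lock = threading.Lock()

    def try_handle(self, gesture, action, animation_duration):
        """Run `action` for `gesture` unless a previous gesture is still in flight."""
        with self._lock:
            if self._busy:
                return False          # dropped: previous action/animation not done yet
            self._busy = True

        action(gesture)               # e.g. scroll the cover flow one item

        # Re-enable detection only after the UI animation has completed,
        # so a jittery hand can't bounce the cover flow back and forth.
        threading.Timer(animation_duration, self._release).start()
        return True

    def _release(self):
        with self._lock:
            self._busy = False
```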
We have now finished integrating the Philips Hue lighting directly into the system; the lighting changes based on the selected category. Again, the purpose is to use lighting to provide a soothing environment that really helps set the mood.
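For reference, the Hue bridge exposes a simple REST API, so the per-category color change amounts to one HTTP PUT per bulb. A hedged sketch, where the bridge address, API username, bulb IDs, and the category-to-color mapping are all placeholder assumptions rather than our actual configuration:

```python
import requests

BRIDGE_IP = "192.168.1.2"            # placeholder: your Hue bridge address
API_USERNAME = "your-api-username"   # placeholder: created via the bridge's /api endpoint
LIGHT_IDS = [1, 2, 3]                # placeholder: the bulbs in the room

# Placeholder mapping from video category to Hue hue/sat/bri values
# (in the Hue API, hue is 0-65535, sat and bri are 0-254).
CATEGORY_COLORS = {
    "Relaxation": {"hue": 46920, "sat": 200, "bri": 120},   # soft blue
    "Nature":     {"hue": 25500, "sat": 200, "bri": 140},   # green
    "Sunset":     {"hue": 8000,  "sat": 254, "bri": 180},   # warm orange
}

def set_category_lighting(category):
    """Push the category's color theme to every configured bulb."""
    # transitiontime is in 100 ms steps, so 20 gives a gentle 2-second fade.
    state = {"on": True, "transitiontime": 20, **CATEGORY_COLORS[category]}
    for light_id in LIGHT_IDS:
        url = f"http://{BRIDGE_IP}/api/{API_USERNAME}/lights/{light_id}/state"
        requests.put(url, json=state, timeout=2)
```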
Holding to change colors removed
Originally, we let users manually cycle the bulbs through random colors by holding their hand over a spot above their head. However, this had some usability issues: it was difficult for users to understand how to change colors, and presenting too many options and color choices caused confusion. We decided to remove this function for the time being and instead work on integrating brightness and color themes with the videos.
Given the time constraints, we weren't able to realize every design idea we had. But hey, even though Spaatify is already pretty awesome, it can still go further. Here are some thoughts:
Show what NOT to do
Additionally, when observing people using Spaatify in action, we often saw users making the same mistake repeatedly. For example, users need to join their hands together in order to start and stop videos, but they often joined their hands at chest level rather than at the lower waist level, so the gesture was not detected. Based on this behavior, we decided to provide more detailed responses in the tutorial. Specifically, after a certain amount of time, we will show users what they are NOT supposed to do alongside what they are supposed to do. This extra feedback should make walking through the tutorial a bit less stressful and, at the same time, a bit more intuitive.
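One way such a check could work (purely a sketch; the joint names, coordinate conventions, and thresholds below are assumptions, with the real skeleton data coming from the Kinect SDK) is to compare the height of the joined hands against the hip and chest joints and trigger the "not like this" hint when the hands keep meeting too high:

```python
def classify_hand_join(skeleton, tolerance=0.1):
    """Classify a hands-joined attempt from one skeleton frame.

    `skeleton` is assumed to be a dict of joint name -> (x, y, z) in meters,
    with y increasing upward, e.g. {"hand_left": ..., "hand_right": ...,
    "hip_center": ..., "shoulder_center": ...}.
    """
    hands_y = (skeleton["hand_left"][1] + skeleton["hand_right"][1]) / 2.0
    hip_y = skeleton["hip_center"][1]
    chest_y = skeleton["shoulder_center"][1]

    if hands_y <= hip_y + tolerance:
        return "ok"          # hands joined near the waist: the gesture should register
    elif hands_y >= chest_y - tolerance:
        return "too_high"    # the common mistake: hands joined at chest level
    else:
        return "ambiguous"

def should_show_mistake_hint(recent_attempts, threshold=3):
    """After several chest-level attempts in a row, show what NOT to do
    alongside the correct waist-level form in the tutorial."""
    return (len(recent_attempts) >= threshold
            and all(a == "too_high" for a in recent_attempts[-threshold:]))
```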
Switching between videos in play mode
In the future, we would strongly consider letting users switch directly to the next video in the same category just by swiping left or right during playback. This would make the interface more contextually intuitive: users would only need to return to the cover flow view in order to change categories. However, the challenge is that we would need a higher-fidelity way to ensure the user DOES want to change the video; swiping might be triggered by accident more easily than the join-hands gesture, and switching to another video when the user doesn't intend to would be highly annoying. One possible confirmation scheme is sketched below.
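One idea (again, just a sketch, not something we've built) would be to require two consistent swipe detections within a short window before actually switching:

```python
import time

class SwipeConfirmer:
    """Only switch videos when two swipes in the same direction arrive close together."""

    def __init__(self, window_seconds=1.5):
        self.window_seconds = window_seconds
        self._pending_direction = None
        self._pending_time = 0.0

    def on_swipe(self, direction):
        """`direction` is "left" or "right"; returns the direction to switch, or None."""
        now = time.monotonic()
        if (self._pending_direction == direction
                and now - self._pending_time <= self.window_seconds):
            self._pending_direction = None
            return direction          # confirmed: switch to the next/previous video
        # The first swipe only arms the confirmation (the UI could also flash
        # a subtle "swipe again to skip" hint here).
        self._pending_direction = direction
        self._pending_time = now
        return None
```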
Provide video information
One component missing from full-screen mode is information about the video being played. Online video services like Hulu and Netflix show the title, description, and remaining playback time on screen, and we think users would like to be able to access this information too. One way to do this could be to overload the "raise your hand" gesture so that users can simply raise their hand and see the title of the video they are watching, a detailed description, and the current playback position. This information would appear for a few seconds and then slowly fade into the background again.
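A minimal sketch of that show-then-fade timing, assuming the app has a per-frame update loop and some way to draw the overlay (both assumed here, not part of our current code):

```python
import time

class VideoInfoOverlay:
    """Show title/description/playback position, then fade back out."""

    def __init__(self, hold_seconds=4.0, fade_seconds=1.5):
        self.hold_seconds = hold_seconds
        self.fade_seconds = fade_seconds
        self._shown_at = None

    def on_raise_hand(self):
        # Overloading the "raise your hand" gesture during full-screen playback.
        self._shown_at = time.monotonic()

    def opacity(self):
        """Called every frame; returns 0.0 (hidden) through 1.0 (fully visible)."""
        if self._shown_at is None:
            return 0.0
        elapsed = time.monotonic() - self._shown_at
        if elapsed <= self.hold_seconds:
            return 1.0
        fade_elapsed = elapsed - self.hold_seconds
        if fade_elapsed >= self.fade_seconds:
            self._shown_at = None     # fully faded: hide until the next raise-hand
            return 0.0
        return 1.0 - fade_elapsed / self.fade_seconds
```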