Touchpad for Locomotion in VR
This is my mid-term assignment for the elective Designing User Interfaces with Emerging Technologies. Dr. Rong-Hao guided us in making a touchpad out of conductive materials using technologies such as laser cutting. The touchpad was built as a matrix, and with scripts for voltage detection and data transmission it could recognize several single-finger and multi-finger gestures.
Background
Virtual Reality (VR) provides an immersive experience by simulating real-world visual perception. Most VR applications use controllers for locomotion and interaction. Taking the Oculus Quest 2 as an example, there is already an established paradigm for controller interaction: joysticks for moving and turning, buttons for grabbing and triggering, and so on. Hand gestures, although more intuitive and natural than conventional controllers, have no clear paradigm for locomotion. In most cases, in-air hand gestures are used for other tasks, which makes moving through the scene more difficult.
Design Concept
Therefore, a multi-touch pad is an effective solution to the aforementioned problem. When attached to a wearable, the flat pad works separately from the in-air hand gestures and thus reduces interaction conflicts. Besides, the multi-touch pad provides space for various gestures, which can cover all the basic motions that users need to perform. The final concept settled on a glove: with the pad attached to one hand, the other hand can operate it seamlessly, regardless of where the hands are held. At the same time, wearing a glove does not significantly affect the headset's recognition of hand shapes.
A 6×6 multi-touch pad is attached to the back of a right-hand glove. To keep the pressure under the pad stable and ensure consistent performance, the pad is first mounted on a card, and the card is then attached to the glove, as shown in Figure 1. Through the Arduino UNO board and the Muca board, the pad is connected to the laptop, which supplies power and receives the gestural data. Access to VR is provided by an Oculus Quest 2, with its hand-tracking function activated for this context.
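As a rough illustration of the data path (not the actual course code), the Processing sketch below assumes the Arduino forwards each scan of the 6×6 matrix as one comma-separated line of 36 raw readings over serial; the port index, baud rate, and touch threshold are placeholder assumptions that depend on the real Muca setup.

import processing.serial.*;

Serial port;
final int ROWS = 6;
final int COLS = 6;
final int TOUCH_THRESHOLD = 40;  // assumed calibration value, depends on the pad and gain settings

int[][] frame = new int[ROWS][COLS];

void setup() {
  // Assumed: the Arduino is the first listed serial device and streams at 115200 baud.
  port = new Serial(this, Serial.list()[0], 115200);
  port.bufferUntil('\n');
}

void draw() {
  // Nothing drawn here; incoming frames are handled in serialEvent().
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  int[] values = int(split(trim(line), ','));
  if (values.length != ROWS * COLS) return;  // skip malformed frames
  int touches = 0;
  for (int r = 0; r < ROWS; r++) {
    for (int c = 0; c < COLS; c++) {
      frame[r][c] = values[r * COLS + c];
      if (frame[r][c] > TOUCH_THRESHOLD) touches++;  // count cells above the touch threshold
    }
  }
  println("active cells: " + touches);
}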
Based on the number of touch points and distance thresholds on their motion, the Processing program is able to recognize nine gestures: swiping up/down/left/right, turning left/right, tapping, pinching, and spreading. Mapped to motion in VR, swiping corresponds to moving and tapping to stopping, as shown in Figure 2. Apart from the basic motions, pinching and spreading are used to hide and show the Graphical User Interface (GUI), resembling the actions of minimizing and maximizing.
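A minimal sketch of that classification idea, assuming each frame has already been reduced to touch-point centroids; the gesture names, thresholds, and direction conventions are illustrative, not the exact values tuned in the project.

// Illustrative classifier: one finger -> swipe or tap, two fingers -> pinch/spread or turn.
// Thresholds are assumed values in pad-cell units, not the project's real tuning.
final float SWIPE_THRESHOLD = 1.5f;
final float PINCH_THRESHOLD = 1.0f;

String classify(PVector[] start, PVector[] end) {
  if (start.length == 1 && end.length == 1) {
    PVector d = PVector.sub(end[0], start[0]);
    if (d.mag() < SWIPE_THRESHOLD) return "tap";
    if (abs(d.x) > abs(d.y)) return d.x > 0 ? "swipe_right" : "swipe_left";
    return d.y > 0 ? "swipe_down" : "swipe_up";
  }
  if (start.length == 2 && end.length == 2) {
    float before = PVector.dist(start[0], start[1]);
    float after  = PVector.dist(end[0], end[1]);
    if (after - before > PINCH_THRESHOLD) return "spread";  // show GUI
    if (before - after > PINCH_THRESHOLD) return "pinch";   // hide GUI
    // Otherwise treat a joint rotation of the two points as turning; sign depends on the pad orientation.
    float a0 = atan2(start[1].y - start[0].y, start[1].x - start[0].x);
    float a1 = atan2(end[1].y - end[0].y, end[1].x - end[0].x);
    return a1 > a0 ? "turn_left" : "turn_right";
  }
  return "none";
}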
The demonstration is available below.
Reflection
When applying the pad to the design concept, realizing the output in VR (character movement, GUI effects) was not particularly challenging, as I was already familiar with working in Unity and VR. The part outside my comfort zone was building the data connection between VR (Unity 3D) and the gestural data and status. Making the prototype gave me my first opportunity to work with Open Sound Control (OSC), which turned out to be a convenient solution for basic data transmission. I learned to set up OSC transmitters and receivers on different platforms, and to configure the host and channel so that the devices were matched correctly. To summarize, this project has been significant practice in integrating multiple pieces of hardware and software.
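For reference, the sending side of such an OSC bridge can be set up in Processing with the oscP5 library roughly as sketched below; the listening port, the Unity host address and port, and the /gesture address pattern are assumptions that have to match however the Unity receiver is configured.

import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress unity;

void setup() {
  // Listen on port 12000 (assumed) and send to the machine running Unity.
  oscP5 = new OscP5(this, 12000);
  unity = new NetAddress("127.0.0.1", 9000);  // assumed host and port of the Unity OSC receiver
}

// Called whenever the classifier produces a new gesture label.
void sendGesture(String gesture) {
  OscMessage msg = new OscMessage("/gesture");  // assumed address pattern
  msg.add(gesture);
  oscP5.send(msg, unity);
}

void draw() {
  // Gesture detection and sendGesture() calls would happen in the main sketch loop.
}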
Putting the burden of conventional controllers aside, their interaction mechanism essentially lies in reflecting real-world hand gestures into the virtual world, plus some basic 2D interfaces (joysticks and buttons). Reflecting on my design concept, I found that what I had actually done was dissect the functionality of the now almost paradigmatic controllers into parts, realize each part separately, and put them back together.
To be precise, where controllers achieve 3D hand interaction through grab and trigger buttons, this design concept does so with hand tracking; where controllers handle 2D interactions (moving and turning in the scene) with joysticks, this concept bases them on the touch pad. I have found this mindset quite useful when studying interaction techniques, since it helps to examine every interaction paradigm: why it is designed this way, and how its techniques could be improved or even substituted.
So far in my master’s study, driven by my interest in XR technologies, I have worked with natural hand tracking using the Leap Motion and precise tracking using Manus Meta-gloves. I like to explore different interaction techniques and compare their common and divergent features. I have found that touchpad-based wearables sit somewhere between those two: they rely on a wearable form factor but are far less cumbersome than heavy sensors, and although the interactions are less natural, they are more flexible because they are not limited by a tracking space.