handful

A playground for accessible gesture-based interactions

App Development
Accessibility

handful is a digital playground built with accessibility and practicality in mind. With handful, you can play a variety of mini-games using camera-based hand gestures.


Our solution

A playground that uses contactless gestural interaction to mimic the feel of physical movement and play. We were inspired by our love of childhood games and wanted to recreate that nostalgia in a digital environment, so the feeling could be shared by friends around the world. We also wanted to explore other sensory experiences when interacting with our devices. With these goals in mind, we came up with the idea of an interactive playground that lets you have fun with friends or by yourself through multisensory mediums, with an emphasis on hand gestures for this project.

Big Question

How might we bring physicality to digital interactions?

User Flow


Connectivity

We submitted to the hackathon's Connectivity track, which emphasizes how people connect and collaborate with each other. While our final build didn't include any connectivity features, our Figma prototype envisioned an experience where you and your friends could share a physical experience (e.g., Rock Paper Scissors) through digital means.

Imagine the feeling of really playing Rock Paper Scissors with someone on the other side of the world.

Hand Scanning

At the center of handful is our hand scanning. It drives our gesture detection and how we interpret the motion of your joints.

Using the Vision framework, we get the positions of significant hand features (PIPs, DIPs, MCPs, and so on). From there, we tie them to each other and to the wrist, giving you a good look at your underlying joint structure. Based on how your joints move individually and together, we can parse that motion into several different gesture actions, all of which are used in our Rock Paper Scissors demo.
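
A minimal sketch of that parsing step, assuming a CVPixelBuffer frame from the camera. The detectGesture name, the 0.3 confidence threshold, and the distance-based "extended finger" heuristic are our own illustrative choices, not handful's exact implementation:

    import CoreGraphics
    import Foundation
    import Vision

    // Illustrative sketch only: read one frame and guess a Rock Paper Scissors
    // gesture from how the joints are laid out relative to the wrist.
    func detectGesture(in pixelBuffer: CVPixelBuffer) throws -> String? {
        let request = VNDetectHumanHandPoseRequest()
        request.maximumHandCount = 1

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try handler.perform([request])
        guard let hand = request.results?.first else { return nil }

        // The wrist anchors every finger chain of MCP -> PIP -> DIP -> tip joints.
        let wrist = try hand.recognizedPoint(.wrist)
        guard wrist.confidence > 0.3 else { return nil }

        let fingers: [(tip: VNHumanHandPoseObservation.JointName,
                       pip: VNHumanHandPoseObservation.JointName)] = [
            (.indexTip, .indexPIP), (.middleTip, .middlePIP),
            (.ringTip, .ringPIP), (.littleTip, .littlePIP)
        ]

        // Count a finger as extended when its tip sits farther from the wrist than its PIP joint.
        var extendedCount = 0
        for finger in fingers {
            let tip = try hand.recognizedPoint(finger.tip)
            let pip = try hand.recognizedPoint(finger.pip)
            guard tip.confidence > 0.3, pip.confidence > 0.3 else { continue }
            if distance(tip.location, wrist.location) > distance(pip.location, wrist.location) {
                extendedCount += 1
            }
        }

        // A very rough mapping; a real build would also check *which* fingers are extended.
        switch extendedCount {
        case 0: return "rock"
        case 2: return "scissors"
        case 4: return "paper"
        default: return nil
        }
    }

    func distance(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
        hypot(a.x - b.x, a.y - b.y)
    }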

Designing for Motion

One of our biggest challenges was making gestural interaction feel fun, especially given how different it is from traditional touch interaction. Through the design process, we landed on a bubbly, light tone that contrasts with the technical, focused gesture recognition system.

Animations and real-time hand mapping show users how their hand directly influences the actions they can take. While the early results have flaws, most of them can be alleviated with time and better vision models; deeper models would allow even more specific gestures without active joint tracking.
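
As one illustration of that real-time hand mapping, the recognized joints can be drawn directly over the camera preview in SwiftUI. This sketch (the HandOverlay name and dot styling are our own assumptions) is a simplification, not the project's actual view code:

    import SwiftUI

    // Illustrative sketch: draw Vision's normalized joint points as dots over the
    // camera preview so users can see their hand driving the interface in real time.
    struct HandOverlay: View {
        // Normalized Vision coordinates (origin at the bottom-left, 0...1 on both axes).
        let joints: [CGPoint]

        var body: some View {
            Canvas { context, size in
                for joint in joints {
                    // Flip the y-axis to move from Vision's coordinate space into SwiftUI's.
                    let point = CGPoint(x: joint.x * size.width,
                                        y: (1 - joint.y) * size.height)
                    let dot = CGRect(x: point.x - 4, y: point.y - 4, width: 8, height: 8)
                    context.fill(Path(ellipseIn: dot), with: .color(.accentColor))
                }
            }
            .allowsHitTesting(false)
        }
    }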

Built with


  • SwiftUI

    SwiftUI is Apple's declarative UI framework. While it's still somewhat in its infancy, it helped us speed up development and iterate quickly on the design, and it let us combine AVKit and Vision through UIKit.

    SwiftUI Website
  • AVKit

    With AVKit and UIKit, I built a real-time video feed from the front-facing camera, which also enabled integration with the Vision framework; a rough sketch of that pipeline appears after this list.

    AVKit Documentation
  • Vision

    The Vision framework performs face and face landmark detection, text detection, barcode recognition, image registration, and general feature tracking. I used it to track hand poses, retrieve twenty different joints, and identify which fingers they belong to.

    Vision Documentation
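
As a rough sketch of the camera pipeline referenced in the AVKit item above, under our own assumptions (the HandCamera class and onHandPose callback are illustrative names, not the project's code): AVFoundation captures frames from the front-facing camera and passes each one to Vision's hand pose request.

    import AVFoundation
    import Vision

    // Illustrative sketch: front-camera frames flow through AVFoundation, each one is
    // handed to Vision's hand pose request, and results go to the gesture parser.
    final class HandCamera: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let output = AVCaptureVideoDataOutput()
        private let frameQueue = DispatchQueue(label: "hand-camera-frames")

        // Called with every detected hand pose.
        var onHandPose: ((VNHumanHandPoseObservation) -> Void)?

        func configure() {
            session.beginConfiguration()
            defer { session.commitConfiguration() }

            guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                       for: .video,
                                                       position: .front),
                  let input = try? AVCaptureDeviceInput(device: device),
                  session.canAddInput(input) else { return }
            session.addInput(input)

            output.setSampleBufferDelegate(self, queue: frameQueue)
            if session.canAddOutput(output) { session.addOutput(output) }
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

            let request = VNDetectHumanHandPoseRequest()
            request.maximumHandCount = 1
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([request])

            if let hand = request.results?.first {
                onHandPose?(hand)
            }
        }
    }

From SwiftUI, a preview layer backed by this session can be bridged in with UIViewRepresentable, which is the UIKit glue mentioned in the SwiftUI item above.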

Two apps with a single goal.

  • handput

    A machine-vision, camera-based gestural interface designed for accessible, non-touch interactions. This is the foundation of handful's functionality.

    handput GitHub Repo
  • handful

    The implementation of handput, built into an experience playground designed with accessibility and physicality in mind.

    handful GitHub Repo
