North (formerly Thalmic Labs), the creator of the Myo armband, was acquired by Google in June 2020. Myo sales ended in October 2018, and Myo software, hardware, and SDKs are no longer available or supported.

In the Lab: Developing Myo

I’m Scott Greenberg and I lead Thalmic Labs’ Developer Relations team. I field emails from devs sinking their teeth into applications for the Myo armband, reach out to hackers with interesting ideas, and work to make sure we're building and enabling a rich ecosystem of gesture control applications for the Myo armband.

Gesture control and wearables are incredibly hot topics right now in the tech world, but there’s nothing quite like Myo. Its unique combination of sensors, pairing medical-grade EMG sensors with a gyroscope, accelerometer, and magnetometer, opens up a myriad of possible uses.

Take my favorite: the connected smarthome. Using our hands to manipulate our environment is an ancient human activity, but it’s been lost in a GUI-obsessed world. The modern smartphone has transformed our relationship with computing by putting processors in our pockets, and that has been enormously helpful to the species. But clunky apps and screen-based interfaces make phones an intrusion into our lives.

This is the evolution we need.

Everyone is facing the same challenge: how do we get more useful information from a user while asking them fewer questions? Talented designers are working to make screen and touch-based interfaces more elegant, but the Myo armband offers a completely different perspective on the problem. It’s able to disregard screens completely, and combine natural gestures with the user’s context to let an individual control technology in a completely untethered way.

While the armband itself doesn’t have location-based tools, developers are using beacons inside their homes to provide contextual information to Myo. If I’m in my bedroom and I raise my hand, deliberately make a fist, and roll it counterclockwise, Myo can be reliably certain that I’m trying to dim the lights, and it can issue the correct commands to a connected lamp or light bulb. Any one of those pieces of information alone wouldn’t be enough for Myo to be sure of what I’m asking it to do, but context combined with motion and gesture control makes for a stunning synergy. It’s the kind of seamless interaction that makes Myo so special, and why we’re so excited to see early prototypes of this technology in the developer community.
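To make the idea concrete, here is a minimal sketch of that fusion logic in Python. The room name (from a beacon), the pose label (from the armband’s EMG classifier), the roll measurement (from the IMU), and the `dim_lights` command are all illustrative assumptions, not part of any real Myo or beacon API:

```python
# Hypothetical sketch: fuse beacon context, EMG pose, and IMU motion
# into a single smarthome command. All names here are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GestureEvent:
    room: str          # supplied by a beacon, e.g. "bedroom"
    pose: str          # supplied by the armband's pose classifier, e.g. "fist"
    roll_delta: float  # radians of wrist roll; negative = counterclockwise

def interpret(event: GestureEvent) -> Optional[str]:
    """Return a command only when context, pose, and motion all agree."""
    in_bedroom = event.room == "bedroom"
    deliberate_fist = event.pose == "fist"
    rolled_ccw = event.roll_delta < -0.5  # threshold filters out incidental motion
    if in_bedroom and deliberate_fist and rolled_ccw:
        return "dim_lights"
    return None

# A fist rolled counterclockwise in the bedroom dims the lights...
print(interpret(GestureEvent("bedroom", "fist", -1.2)))  # dim_lights
# ...but the same motion without the fist pose is ignored.
print(interpret(GestureEvent("bedroom", "rest", -1.2)))  # None
```

The point of the sketch is that no single signal triggers anything on its own; only the conjunction of location, pose, and motion does, which is what makes the interaction reliable.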

There are a few pieces of this computing ecosystem that are currently missing, but people are hard at work making all of these tools. Some devs, like Danny Murphy with AutoHome, are already using such technology to improve their lives. More are writing to me with imaginative solutions every day. As we expand the language of gesture control, a future full of digital devices at your fingertips will be in reach. I’m excited, and privileged, to be a part of the team making this vision a reality.