Samuel L. Jackson, as Nick Fury, in a long leather coat touching figures on a transparent screen.

What Would Accessible Gesture Controls Look Like?

If the world included a perfectly accessible gesture/multitouch recognition system, what would that system look like? What accessibility features might it have? Here are fifteen user needs which the developer of a hypothetically perfect system should be aware of …

Recently I was introduced to the Ractiv Touch+ system, which promises to turn any surface into “multitouch and more”. It uses a pair of cameras and computer vision technology to recognise gestures on or above a surface.

It is immediately apparent that this technology has potential to help many people with accessibility needs! Ractiv’s introductory video shows the system successfully tracking fingers and hands on and above rough and smooth surfaces, and even tracking a dancer in motion:

Gesture Controls For OS X

Aside from the forthcoming Touch+, what motion/gesture controls are available for OS X? Answer: not many.

  • Leap Motion is already commercially available for OS X.
  • Thalmic’s Myo armband is currently taking pre-orders and also works with Mac OS X.
  • It is in theory possible to set up Kinect for Mac OS X, but it’s a long, tedious, and very technically complicated process that I would not recommend to anybody but the geekiest of geeks.

Accessible Gesture Controls – Use Cases

While I was thinking about what a maximally accessible multitouch/gesture control system would look like, I naturally had to think about the users who might need the accessibility of such a system. This is the set of user needs I came up with – it’s not specific to Ractiv’s system but would apply equally to any advanced multitouch/gesture recognition system.

Almost every example on this list is based on the needs of a person I know, either locally or virtually. I have no statistics about how common each of these needs is, in either absolute or relative terms; I just know that all of these needs – and more – exist in the community.

Tom Cruise faces the camera, hands raised and fingers outstretched. Computer graphics hang in the air around his hands.
Tom Cruise uses the fictional gestural interface in Minority Report.

A fully accessible multitouch/gestural control system must deal with as many as possible of the following user needs:

1. Users who wish to use standard multitouch gestures but need a “zero-impact” mode where gestures occur over a surface without any actual contact with that surface. For example: users who experience pain triggered by even very small impacts.

2. Users whose movement is constrained in some way and who may need to perform gestures in a much larger or smaller virtual area than the software expects. For example: some users are impaired in finger/hand movements but able to make large gestures from the shoulder or elbow, while others cannot make large gestures but can make reliable movements, using only their fingertips, within a very small area of a few square inches.
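To make this concrete, here is a minimal sketch of the kind of per-user “working region” remapping I have in mind. It assumes a hypothetical tracker reporting normalised 0–1 coordinates; the type and property names are my own invention, not part of any real SDK:

```swift
// Hypothetical sketch: remap the area a user can actually reach onto the
// full gesture space the software expects. All names here are invented.
struct WorkingRegion {
    // The portion of the camera's view the user can comfortably use,
    // in normalised coordinates (0...1 on each axis).
    var originX: Double
    var originY: Double
    var width: Double
    var height: Double

    /// Stretch a tiny fingertip-sized region, or shrink an oversized
    /// shoulder-scale sweep, onto the full 0...1 space the gesture
    /// recogniser expects.
    func normalise(x: Double, y: Double) -> (x: Double, y: Double) {
        let nx = (x - originX) / width
        let ny = (y - originY) / height
        // Clamp so movement just outside the region doesn't break gestures.
        return (min(max(nx, 0), 1), min(max(ny, 0), 1))
    }
}
```

The point is that the region is a user setting, calibrated once per user, rather than a constant baked into the recogniser.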

3. Users whose movements are unavoidably jerky and who need software smoothing of that movement before the computer interprets the gesture. SteadyMouse is an older, mouse-only example of the kind of smoothing I’m thinking of.
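For what it’s worth, the simplest version of that smoothing is just an exponential moving average over the tracked position. A rough sketch, with invented type names:

```swift
// Hypothetical sketch: exponential smoothing of a tracked fingertip, to
// damp involuntary jitter before gesture recognition runs. Invented names.
struct TrackedPoint {
    var x: Double
    var y: Double
}

final class JitterSmoother {
    /// 0.0 = raw input; values near 1.0 = very heavy smoothing.
    /// Crucially, this should be a per-user setting, not a constant.
    var smoothing: Double
    private var previous: TrackedPoint?

    init(smoothing: Double) {
        self.smoothing = smoothing
    }

    /// Blend each raw camera sample with the previous smoothed position.
    func smooth(_ raw: TrackedPoint) -> TrackedPoint {
        guard let prev = previous else {
            previous = raw
            return raw
        }
        let result = TrackedPoint(
            x: smoothing * prev.x + (1 - smoothing) * raw.x,
            y: smoothing * prev.y + (1 - smoothing) * raw.y
        )
        previous = result
        return result
    }
}
```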

4. Users who are able to perform some standard gestures but not others, and who need to “remap” standard gesture definitions so that the limited set they can perform covers the functions most useful to them. This may include multiple gesture “sets” the user can easily switch between, for example by using a specific gesture or tapping a physical keyboard key, or sets that change contextually depending on the state of the computer (which app is in the foreground, etc.).
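As a sketch of what switchable sets might look like under the hood (all of these gesture and action names are illustrative, not a real API):

```swift
// Hypothetical sketch of switchable gesture "sets": each set remaps the
// gestures a user *can* perform onto the actions they need most.
enum Gesture: Hashable { case tap, swipeLeft, swipeRight, pinch }
enum Action { case click, nextPage, previousPage, zoom, showMenu }

struct GestureSet {
    let name: String
    let bindings: [Gesture: Action]
}

final class GestureMapper {
    private let sets: [GestureSet]
    private var activeIndex = 0

    init(sets: [GestureSet]) {
        self.sets = sets
    }

    /// A reserved gesture or a physical key can call this to cycle sets.
    func switchToNextSet() {
        activeIndex = (activeIndex + 1) % sets.count
    }

    func action(for gesture: Gesture) -> Action? {
        return sets[activeIndex].bindings[gesture]
    }
}
```

The useful part is that the bindings are data, so a remap or a context switch is just a matter of swapping dictionaries.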

5. Users who are able to perform some gestures reliably, but none of the gestures commonly used by typical users. These people would need some way to “train” the system to recognise their own unique gestures as meaningful and interpret their commands, preferably through an end-user-friendly process such as demonstrating a gesture to the system several times in a training mode.
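Template-based recognisers (the “$1 recogniser” family from academic research is the classic example) already work roughly this way. Here’s a heavily simplified sketch of the idea, with invented names, that skips the rotation and scale normalisation a real recogniser would do:

```swift
// Hypothetical sketch of end-user gesture training: record a few
// demonstrations as templates, then classify new strokes by nearest
// average point distance. Everything here is my own simplification.
struct Point { var x: Double; var y: Double }

func distance(_ a: Point, _ b: Point) -> Double {
    return ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
}

/// Resample a stroke to a fixed number of points so strokes of different
/// lengths and speeds can be compared point-by-point.
func resample(_ stroke: [Point], to count: Int) -> [Point] {
    guard stroke.count > 1 else { return stroke }
    var result: [Point] = []
    for i in 0..<count {
        let t = Double(i) / Double(count - 1) * Double(stroke.count - 1)
        let j = Int(t)
        let frac = t - Double(j)
        let a = stroke[j]
        let b = stroke[min(j + 1, stroke.count - 1)]
        result.append(Point(x: a.x + (b.x - a.x) * frac,
                            y: a.y + (b.y - a.y) * frac))
    }
    return result
}

final class TrainableRecogniser {
    private var templates: [(name: String, points: [Point])] = []
    private let sampleCount = 32

    /// Training mode: the user demonstrates a gesture several times and
    /// each demonstration is stored under the same name.
    func train(name: String, demonstration: [Point]) {
        templates.append((name, resample(demonstration, to: sampleCount)))
    }

    /// Classify a new stroke as the template with the smallest mean distance.
    func recognise(_ stroke: [Point]) -> String? {
        let candidate = resample(stroke, to: sampleCount)
        var best: (name: String, score: Double)? = nil
        for template in templates {
            let score = zip(candidate, template.points)
                .map { distance($0.0, $0.1) }
                .reduce(0, +) / Double(sampleCount)
            if best == nil || score < best!.score {
                best = (template.name, score)
            }
        }
        return best?.name
    }
}
```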

6. Users whose hands do not look the same as “standard” hands due to congenital limb difference, amputation, injury, etc., or who need to use a different body part to perform gestures, and who need the camera to reliably track that body part. A requirement to wear a small, non-intrusive marker on the body part to be tracked (similar to the reflective dot used by current head-tracking technology) would probably be acceptable here, though ideally it would not be needed.

White man sits in a wheelchair with his hands flat on the wheelchair tray. There is a computer screen and a bunch of electronic equipment behind him.
Giesbert Nijhuis wears a tiny reflective dot on his forehead so his head-tracking mouse will function.

7. Users who, like me, have issues such as severe weakness and fatigue, may need different accommodations at different times, and want simple ways to switch between different sets of accommodations.

8. Users who wish to use a variety of input methods – gesture, speech/voice input, switch input, keyboard input, joystick input, standard mouse input, etc. – and who need the different methods to work together reliably and usefully.

9. Users who need other assistive technology to access their device’s output, meaning any software included with gesture/touch controllers must be as accessible as possible to VoiceOver users, deaf users, switch control users, etc.

10. Users who may not be very aware of the position of their limbs relative to the tracking camera and need feedback (including auditory and/or visual options) when the area being tracked is lost to the camera because it’s out of frame or occluded by something else.
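A sketch of that feedback, again with invented type names (NSSound.beep() is a real AppKit call; the rest is illustrative):

```swift
import AppKit

// Hypothetical sketch: announce tracking loss immediately, as sound and/or
// text, instead of failing silently. Only NSSound.beep() is a real API.
final class TrackingFeedback {
    var audioEnabled = true
    var visualEnabled = true
    private var wasTracking = true

    /// Call once per camera frame with the tracker's current state.
    func update(isTracking: Bool) {
        guard isTracking != wasTracking else { return }  // announce changes only
        wasTracking = isTracking
        if audioEnabled { NSSound.beep() }  // a real app would use distinct sounds
        if visualEnabled {
            // A real app would flash an on-screen banner; logging stands in.
            print(isTracking ? "Tracking regained"
                             : "Tracking lost – move back into the camera's view")
        }
    }
}
```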

11. Users who have controlled movement of a small body part paired with uncontrolled movement of a large one. For example, a user with severe cerebral palsy may have voluntary control of finger movements but uncontrolled, jerky movements in their arms and wrists; the same occurs for other users with toes/legs/feet. Being able to track the small voluntary movements of fingers or toes while ignoring the larger movements of the limb they’re attached to is something I don’t think is currently available in any standard system, and it is a huge problem for people in this situation – most physical switches need to be attached to a relatively immobile surface, which really limits the options in this case.

Pre-teen blonde boy in a wheelchair, with switches under the ball of each foot. An unseen partner has a hand under the sole of each foot.
Mac Burns has good control of ankle movements for his dual foot switches, but not of the position of his legs. Communication partners use a hand under his foot switch (shown) to give him something to press against.

12. Users who may take significant time to perform a gesture, performing it much more slowly than the system usually expects. Gestures that are time-sensitive (like a ‘double-click’ multitouch movement) need options to adjust the maximum time allowed, plus time-insensitive alternatives for those who need them.
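Something like this invented double-tap detector shows how cheap the fix is: the time window becomes a user setting, and having no window at all gives the time-insensitive alternative:

```swift
// Hypothetical sketch: a double-tap detector whose timing window is a user
// setting. A nil window means "no time limit" – the time-insensitive option.
final class DoubleTapDetector {
    /// Maximum seconds allowed between the two taps; nil = no limit.
    var maxInterval: Double?
    private var lastTapTime: Double?

    init(maxInterval: Double?) {
        self.maxInterval = maxInterval
    }

    /// Returns true when this tap completes a double-tap.
    func registerTap(at time: Double) -> Bool {
        if let last = lastTapTime {
            let withinWindow = maxInterval.map { time - last <= $0 } ?? true
            if withinWindow {
                lastTapTime = nil  // pair complete; start fresh
                return true
            }
        }
        lastTapTime = time  // treat as the first tap of a possible pair
        return false
    }
}
```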

13. Users who, once they have performed a gesture, will uncontrollably repeat that gesture a number of times (perseveration). The system needs to be able to be configured to accept only the first successful gesture in a series of identical gestures, then wait for the user to be still and/or wait a set amount of time before recognising another gesture.
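In code, that’s essentially a debounce keyed on the gesture itself – a rough sketch with invented names:

```swift
// Hypothetical sketch: accept the first occurrence of a gesture, then ignore
// identical repeats until the user has been quiet for a configurable period.
final class PerseverationFilter {
    /// Seconds without repeats before the same gesture counts again.
    var quietPeriod: Double
    private var lastAccepted: (name: String, time: Double)?

    init(quietPeriod: Double) {
        self.quietPeriod = quietPeriod
    }

    /// Returns true if the gesture should be acted on, false if it looks
    /// like an involuntary repeat of the previous one.
    func shouldAccept(_ name: String, at time: Double) -> Bool {
        if let last = lastAccepted, last.name == name,
           time - last.time < quietPeriod {
            lastAccepted = (name, time)  // each repeat keeps the window open
            return false
        }
        lastAccepted = (name, time)
        return true
    }
}
```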

14. Users who will need repeated practice, with multi-sensory feedback, to master a gesture. They should be able to see, in a practice mode, exactly how their gesture is failing to match the one they are attempting.

15. Users who have certain movement ‘tics’ which the system should specifically ignore. For example, a user with Tourette’s Syndrome may wish the system to interpret all movements normally except one specific gesture, which should always be ignored.

System developers also need to remember that many disabled users have complex disabilities and will fit into more than one of the above categories!

I realise this is a pie-in-the-sky wish list and the likelihood of a system supporting all of these cases is pretty much zero, but my sense is that developers really don’t understand the full range and complexity of disabilities and their impact on disabled users of gesture and multitouch control systems. I think that even if developers choose to support only a few of these cases, knowing that other cases exist is still a very valuable thing.

Did I miss your specific needs? Leave a comment below and I’ll add things to the list as we go.

– Ricky

Some of the links in this article are affiliate links. This means that if you purchase the products that I've linked to I'll get a commission - a small percentage of the sale price. It won't cost you anything and it will help to support me and ATMac.

7 thoughts on “What Would Accessible Gesture Controls Look Like?”

  1. Nice, comprehensive post Ricky. I would say I mostly fall into #2 and #9. The Ractiv Touch+ is really intriguing, but I can’t use my fingers or wrists so I’m not certain how useful it would be for me. I can wave my right arm around, though.

    I’m familiar with Leap Motion, but that just doesn’t seem like it would work for me, which is what worries me about the Ractiv Touch+. One thing you may not be aware of is something called “BetterTouchTool”. It allows you to program gestures, mouse movements, and keyboard shortcuts in a variety of ways. They’ve even integrated Leap Motion into it. I use it in conjunction with their iPhone trackpad app.

    http://www.boastr.net/

    What I would like to see is something that allows me to customize simple arm waving gestures into mouse clicks I can’t do with my Headmaster Plus (right-clicks, middle-clicks, etc.).

  2. That’s an impressive long list!

    I probably missed it, but permitting the user to shift in relationship to the computer, and thus the gesture interpreter, is important for me. I wiggle to minimize painful skin compression. Sometimes I want to rest the heels of my hands, sometimes lever from my elbows, and similar fidgets.

    No. 14 is particularly relevant. In addition to the teaching/practice mode you describe so well, a feedback mode where the gesture is acknowledged by a unique tone sequence, as well as subtle but large on-screen command names across the display, would serve as a cognitive soft landing for the learning I’ve just focused on.

    As long as I’m dreaming, I would love to have the gesture interpreter read fingerspelling; 36 unique gestures!

  3. Jesse the K, maybe you should contact the MotionSavvy guys and get your fingerspelling on their beta testing list… There are more options around for that to become a reality (it could also end up as a keyboard rather than an app, with iOS now allowing third-party keyboards)
    http://techcrunch.com/2014/06/06/motionsavvy-is-a-tablet-app-that-understands-sign-language/
