If the world included a perfectly accessible gesture/multitouch recognition system, what would that system look like? What accessibility features might it have? Here are fifteen user needs which the developer of a hypothetically perfect system should be aware of …
Recently I was introduced to the Ractiv Touch+ system, which promises to turn any surface into “multitouch and more”. It uses a pair of cameras and computer vision technology to recognise gestures on or above a surface.
It is immediately apparent that this technology has potential to help many people with accessibility needs! Ractiv’s introductory video shows the system successfully tracking fingers and hands on and above rough and smooth surfaces, and even tracking a dancer in motion:
Gesture Controls For OS X
Aside from the forthcoming Touch+, what motion/gesture controls are available for OS X? Answer: not much.
- Leap Motion is already commercially available for OS X.
- Thalmic’s Myo armband is currently taking pre-orders and also works with Mac OS X.
- It is in theory possible to set up Kinect for Mac OS X but it’s a long, tedious, and very technically complicated process I would not recommend to anybody but the geekiest geeks.
Accessible Gesture Controls – Use Cases
While I was thinking about what maximally accessible multitouch/gesture control systems would look like, I had of course to think about the users who might need the accessibility of these systems. This is the set of user needs that I came up with – it’s not specific to Ractiv’s system but would apply similarly to any advanced multitouch/gesture recognition system.
Almost every example on this list is based on the needs of a person I know, either locally or virtually. I have no statistics about how common each of these needs is, in absolute or relative terms; I just know that all of these needs – and more – exist in the community.
A fully accessible multitouch/gestural control system must deal with as many as possible of the following user needs:
1. Users who wish to use standard multitouch gestures but need a “zero-impact” mode where gestures occur over a surface without any actual contact with that surface. For example: users who experience pain triggered by even very small impacts.
2. Users whose movement is constrained in some way and may need to perform gestures in a much larger or smaller virtual area than the software expects. For example: some users are impaired in finger/hand movements but able to make large gestures from the shoulder or elbow, others are impaired in large gestures but can make reliable movements in a very small area of a few square inches with finger-tips only.
3. Users whose movements are unavoidably jerky and who need the software to smooth that movement before the computer interprets the gesture. SteadyMouse is an older, mouse-only example of the kind of smoothing I have in mind.
4. Users who are able to perform some standard gestures but not others and need to “remap” standard gesture definitions so that the limited set they can perform covers the most useful functions for them. This may include multiple gesture “sets” which the user can easily switch between – for example, by using a specific gesture or tapping a physical keyboard key to swap sets – or “sets” that change contextually depending on the state of the computer (which app is in the foreground, etc.)
5. Users who can perform some gestures reliably, but none of the gestures commonly used by standard users. These people would need some way to “train” the system to recognise their own unique gestures as meaningful commands, preferably through an end-user-friendly process such as demonstrating a gesture to the system several times in a training mode.
6. Users whose hands do not look the same as “standard” hands due to congenital limb difference, amputation, injury, etc., or who need to use a different body part to perform gestures and need the camera to reliably track their appropriate body part for gestures. A requirement to wear a small non-intrusive marker on the body part to be tracked (similar to the reflective dot used by current head tracking technology) would probably be acceptable here, though ideally would not be needed.
7. Users who, like myself, have severe weakness and fatigue and may need different accommodations at different times, and who need simple ways to switch between different sets of accommodations.
8. Users who wish to use a variety of input methods such as gesture, speech/voice input, switch input, keyboard input, joystick input, standard mouse input, etc., and need to make sure that the different methods can reliably work together with each other in a useful way.
9. Users who need to use other assistive technology to access their device’s output – any software included with gesture/touch controllers must be as accessible as possible to VoiceOver users, deaf users, switch control users, etc.
10. Users who may not be very aware of the position of their limbs relative to the tracking camera and need feedback (including auditory and/or visual options) when the area being tracked is lost to the camera because it’s out of frame or occluded by something else.
11. Users who have controlled movement of a small body part paired with uncontrolled movement of a large one. For example, a user with severe cerebral palsy who has voluntary control of finger movements but uncontrolled, jerky movements in their arms and wrists; the same occurs with other users’ toes/legs/feet. Being able to track the small voluntary movements of fingers or toes while ignoring the larger movements of the limb they’re attached to is something I don’t think is currently available in any standard system, and it is a huge problem for people in this situation – most physical switches need to be attached to a relatively immobile surface, which really limits their options in this case.
12. Users who may take significant time to perform a gesture, and it may be performed much more slowly than the system usually expects. Gestures which are time-sensitive (like a ‘double-click’ multitouch movement) need to have options to adjust the maximum time taken and also offer time-insensitive alternatives for those who need them.
13. Users who, once they have performed a gesture, will uncontrollably repeat that gesture a number of times (perseveration). The system needs to be able to be set up to accept only the first successful gesture of a series of identical gestures, then wait for the user to be still and/or wait a set amount of time before recognising another gesture.
14. Users who need repeated practice, with multi-sensory feedback, to master a gesture. In a practice mode, they should be shown exactly how their gesture fails to match the one they are attempting.
15. Users who have certain movement ‘tics’ which the system should specifically ignore. For example, a user with Tourette’s Syndrome may wish the system to interpret all movements normally except one specific gesture, which should always be ignored.
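Several of the needs above – smoothing jerky input (#3), adjustable timing windows (#12), and suppressing perseverative repeats (#13) – come down to small filtering steps between the tracking camera and the gesture recogniser. As a rough illustration only (the class, method, and parameter names here are my own invention, not part of Touch+ or any shipping system), such a pre-processing layer might look like this:

```python
import time

class AccessibleInputFilter:
    """Hypothetical pre-processing layer between raw tracking data
    and a gesture recogniser. All names and defaults are illustrative."""

    def __init__(self, smoothing=0.8, repeat_lockout=1.5):
        # smoothing: 0 = raw input, closer to 1 = heavier smoothing (need #3)
        # repeat_lockout: seconds to ignore repeats of one gesture (need #13)
        self.smoothing = smoothing
        self.repeat_lockout = repeat_lockout
        self._smoothed = None            # last smoothed (x, y) position
        self._last_gesture = None
        self._last_gesture_time = 0.0

    def smooth_position(self, x, y):
        """Exponential moving average: damps jerky movement before the
        recogniser ever sees it."""
        if self._smoothed is None:
            self._smoothed = (x, y)
        else:
            a = self.smoothing
            sx, sy = self._smoothed
            self._smoothed = (a * sx + (1 - a) * x,
                              a * sy + (1 - a) * y)
        return self._smoothed

    def accept_gesture(self, name, now=None):
        """Pass through only the first of a run of identical gestures;
        repeats within the lockout window are ignored, and each repeat
        restarts the window so the user must be still before the next
        gesture is recognised."""
        now = time.monotonic() if now is None else now
        if (name == self._last_gesture
                and now - self._last_gesture_time < self.repeat_lockout):
            self._last_gesture_time = now   # extend lockout while repeating
            return False
        self._last_gesture = name
        self._last_gesture_time = now
        return True
```

A user-facing settings panel could then expose `smoothing` and `repeat_lockout` directly, so the same mechanism serves both someone with tremor and someone with perseveration – and adjusting them per “accommodation set” (need #7) would just mean swapping the filter’s parameters.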
System developers need to also recall that many disabled users have complex disabilities and will fit into more than one of the above categories!
I realise this is a pie-in-the-sky wish list and the likelihood of a system supporting all of these cases is pretty much zero, but my sense is that developers really don’t understand the full range and complexity of disabilities and their impact on disabled users of gesture and multitouch control systems. I think that even if developers choose to support only a few of these cases, knowing that other cases exist is still a very valuable thing.
Did I miss your specific needs? Leave a comment below and I’ll add things to the list as we go.
Some of the links in this article are affiliate links. This means that if you purchase the products that I've linked to I'll get a commission - a small percentage of the sale price. It won't cost you anything and it will help to support me and ATMac.