Universal sign language

“Decide” is what is known as a telic verb—that is, it represents an action with a definite end. By contrast, atelic verbs such as “negotiate” or “think” denote actions of indefinite duration. The distinction is an important one for philosophers and linguists. The divide between event and process, between the actual and the potential, harks back to the kinesis and energeia of Aristotle’s metaphysics.
 
One question is whether the ability to distinguish them is hard-wired into the human brain. Academics such as Noam Chomsky, a linguist at the Massachusetts Institute of Technology, believe that humans are born with a linguistic framework onto which a mother tongue is built. Elizabeth Spelke, a psychologist up the road at Harvard, has gone further, arguing that humans inherently have a broader “core knowledge” made up of various cognitive and computational capabilities. 
 
...
 
In 2003 Ronnie Wilbur, of Purdue University, in Indiana, noticed that the signs for telic verbs in American Sign Language tended to employ sharp decelerations or changes in hand shape at some invisible boundary, while signs for atelic words often involved repetitive motions and an absence of such a boundary. Dr Wilbur believes that sign languages make grammatical that which is available from the physics and geometry of the world. “Those are your resources to make a language,” she says. She therefore went on to suggest that the pattern would probably be found in other sign languages as well.
 
Work by Brent Strickland, of the Jean Nicod Institute, in France, and his colleagues, just published in the Proceedings of the National Academy of Sciences, now suggests that it can be. Dr Strickland has gone some way to showing that signs arise from a kind of universal visual grammar that signers are working to.
 

Fascinating. Humans associate language with intelligence so strongly that I predict the critical moment in animal rights will come when a chimp or other primate takes the stand in an animal-testing court case and uses sign language to give testimony on its own behalf.

Reading the test methodology employed in the piece, I wonder if any designers out there have done similar studies with gestures or icons. I'm not arguing a Chomskyan position here; I doubt humans are born with some basic touchscreen gestures or a base icon key in their brain's config file. This is more about second-order or learned intuition.

Or perhaps we'll achieve great voice or 3D gesture interfaces (e.g. Microsoft Kinect) before we ever settle on any standards for gestures on flat touchscreens. If you believe, like Chomsky, that humans have some language skills (both verbal and gestural) hard-wired in the brain at birth, the most human (humane? humanist?) of interfaces would be one that doesn't involve any abstractions on touchscreens but instead relies on the software we're born with.