| INFORMATION SOCIETY - Man-machine: new means of communication
IT is looking for new ways of communicating with human beings, using all of their polysensory potential. The computer is becoming sensitive to ever more subtle tactile commands, interprets the words we speak and is capable of responding to them. It is starting to decipher our eye movements. IT specialists are exploring ideas that verge on science fiction, where the machine manages to interpret “mental” commands or extends its ramifications to the clothes we wear, in order to look after our well-being. This type of technology, known as interface technology, must first of all meet the needs of our most vulnerable citizens – the elderly and the disabled. But, beyond these priority groups, these innovations could radically transform all means of communication between man and machine.
Sight and hearing, speech and touch
In IT, the interface technologies between the user and the machine constitute a specific software layer, known as the API level (Application Programming Interface), which is kept strictly independent of the internal architecture of the computer. A long-established example: text handling is separate from the keyboard used to type in the characters. Between the two, there is the API, which is responsible for receiving the information transmitted by this or that keyboard and converting it into “universal” instructions that the text handler can use, whatever the “linguistic script” involved.
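The keyboard example above can be sketched in a few lines. The scan codes and layout tables below are invented for illustration; the point is only that the text handler never sees the physical device, just the “universal” characters the interface layer hands it.

```python
# Hypothetical scan-code tables for two keyboard layouts.
AZERTY = {16: "a", 17: "z"}
QWERTY = {16: "q", 17: "w"}

def decode(scan_code: int, layout: dict) -> str:
    """The interface layer: convert a device-specific code to a character."""
    return layout.get(scan_code, "?")

def handle_text(codes, layout) -> str:
    """The text handler works only with universal characters."""
    return "".join(decode(c, layout) for c in codes)

print(handle_text([16, 17], AZERTY))  # az
print(handle_text([16, 17], QWERTY))  # qw
```

The same pair of scan codes yields different text depending on the layout table, yet `handle_text` itself needs no knowledge of any particular keyboard – which is precisely the separation the API level provides.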
At the present time, the two main interface controls are the screen – based on the sense of sight – and the mouse, that multi-functional touch control. Born in the 60s, and then becoming enormously popular in the 80s, this extraordinary invention introduced a fundamentally new tool alongside the centuries-old usage of typewriter keys. The mouse remains of fundamental importance in every process that involves “anything graphic”. What would we do without it?
An innovation that has been studied for a long time (IBM was already working on it in the 70s) is the use of voice and hearing. Speech recognition and synthesis are becoming standard currency in numerous applications. A future step will be their adaptation to text handling, whether for dictation or listening: algorithms capable of extracting a string of phonemes from a text and transferring it to a vocal synthesiser, to reproduce it audibly, will soon be part of standard PC equipment.
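The text-to-speech pipeline described above – extracting a string of phonemes from a text and passing it to a synthesiser – can be illustrated with a toy sketch. The mini-lexicon and phoneme symbols are invented for the example; a real system would use a full pronunciation dictionary plus letter-to-sound rules.

```python
# Invented mini-lexicon mapping words to phoneme sequences.
LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def text_to_phonemes(text: str) -> list:
    """Stage 1: extract a string of phonemes from the text."""
    phonemes = []
    for word in text.lower().split():
        phonemes.extend(LEXICON.get(word, ["?"]))  # unknown words flagged
    return phonemes

def synthesise(phonemes: list) -> str:
    """Stage 2: stand-in for the vocal synthesiser; here we just
    render the phoneme sequence instead of producing audio."""
    return " ".join(phonemes)

print(synthesise(text_to_phonemes("Hello world")))  # HH AH L OW W ER L D
```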
Another foreseeable development is associated with the sense of touch – what specialists call the haptic (1) interface. Partners in the “Grab” project (cf. panel) are developing robotic simulation tools for tracing the contours of a three-dimensional image with the fingertips. Such an application extends beyond the simple adaptation of standard IT tools for the visually impaired, by proposing the capture of digital information via this totally new channel of tactile perception. Will architects use it to produce “hand-made” digital models? Will tele-surgeons be able to “touch” the organs of a patient they are operating on, several thousand miles away, using a robot? These technologies are opening up application possibilities in many areas, extending far beyond the world of the disabled.
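A common haptic-rendering technique – not necessarily the one Grab uses – is to treat a virtual surface as a stiff spring: when the fingertip probe penetrates the surface, the device pushes back with a force proportional to the penetration depth (Hooke’s law). The stiffness value below is an assumption for the sketch.

```python
K = 500.0  # assumed surface stiffness in N/m

def restoring_force(probe_z: float, surface_z: float, k: float = K) -> float:
    """Force (N) pushing the probe back out of a virtual surface, 1-D case."""
    penetration = surface_z - probe_z  # positive when the probe is below the surface
    if penetration <= 0:
        return 0.0          # probe in free space: no force, nothing to "feel"
    return k * penetration  # force grows with penetration depth

print(restoring_force(probe_z=-0.002, surface_z=0.0))  # 1.0
```

Run in a tight loop (typically around 1 kHz on real haptic hardware), this simple rule is enough to make a contour feel solid under the fingertips.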
From look to thought
But innovation prospects go much further – not without causing a certain amount of fear regarding the power of intrusion they afford the machine – when the interface is capable of interpreting someone’s gaze, or even their mental activity. The European “Cogain” project (cf. panel) is dedicated to the tens of millions of people in the world suffering from paralysis, aiming at communication between man and machine controlled by eye movements. Päivi Majaranta, from Tampere University (FI), its coordinator, believes that this technology “can also find outlets in other applications that render computers attentive to the user, for example, in the field of games or video-conferencing”. Big Brother is watching you?
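One widely used eye-control technique is dwell selection: a gaze held steadily on a target for longer than a threshold counts as a “click”. The sketch below assumes the eye tracker already reports which on-screen target the gaze falls on; the timings and target names are invented.

```python
DWELL_THRESHOLD = 0.8  # assumed: seconds of steady fixation needed to select

def detect_dwell(samples, threshold=DWELL_THRESHOLD):
    """samples: list of (timestamp_s, target_id) gaze reports.
    Returns the first target fixated long enough, or None."""
    current, start = None, None
    for t, target in samples:
        if target != current:
            current, start = target, t   # gaze moved: restart the dwell timer
        elif target is not None and t - start >= threshold:
            return target                # fixation long enough: "click"
    return None

gaze = [(0.0, "key_A"), (0.3, "key_A"), (0.9, "key_A")]
print(detect_dwell(gaze))  # key_A
```

The threshold is the crux of such interfaces: too short and every glance selects something (the “Midas touch” problem), too long and typing becomes exhausting.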
For people who are paralysed, the ultimate solution would, however, be to manage without any movement at all. Are we in the world of science fiction when we imagine controlling a computer using only our thoughts? The marriage between cybernetics and the neurosciences allows us to envisage this scenario. Certain researchers are experimenting with so-called intrusive techniques, in which electronic chips implanted in the human body are connected directly to the neurons. John Donoghue’s team (Brown University – USA) recently published details, in Nature, of a 26-year-old tetraplegic, Matthew Nagle, who has a “neuromotor prosthesis” inserted in the area of the brain controlling voluntary movements. This prosthesis logs the activity of the neurons and relays it to computers, which translate the impulses in order to control a cursor.
Other researchers are working on gentler methods, which do not call for surgical intervention. A helmet fitted with electrodes envelops the operator’s cranium in order to capture the electrical waves produced by the upper part of the brain, the cerebral cortex. The neuron information, provided, for example, by an electroencephalogram and relayed to software, is identified and classified in such a way that it corresponds to a simple, pre-defined command (such as moving the cursor to the right or to the left, as in this case).
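The classification step can be sketched as follows. One well-known feature of motor-imagery EEG is that imagining a movement of one hand attenuates signal power over the opposite motor cortex (so-called event-related desynchronisation). The channels, sample values and decision rule below are illustrative assumptions, not taken from any real system.

```python
def band_power(channel) -> float:
    """Crude power estimate for one EEG channel: mean of squared samples."""
    return sum(x * x for x in channel) / len(channel)

def classify(left_channel, right_channel) -> str:
    """Map a pair of EEG channel buffers to a pre-defined cursor command.
    Assumed rule: imagined right-hand movement attenuates power over the
    left motor cortex, and vice versa."""
    if band_power(left_channel) < band_power(right_channel):
        return "move cursor right"
    return "move cursor left"

# Invented sample buffers: the left channel is quieter, so the system
# interprets this as imagined right-hand movement.
print(classify(left_channel=[0.1, -0.2, 0.1], right_channel=[0.8, -0.9, 0.7]))
```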
The challenge of cerebral plasticity
The computer must therefore associate a command with a mental pattern. The first challenge is of a technological nature: the machine has to be taught to recognise and identify the signals produced by the brain. The other difficulty is human. Although, overall, the neurosciences associate active areas of the brain with certain mental processes, man is incapable of identically reproducing the activation of the same neurons just by thinking the same thing. This plasticity of cerebral mechanisms means that the EEG for the same person imagining a movement of his right hand will be different every time. For José del R. Millán, coordinator of the European “Maia” project (Mental Augmentation through Determination of Intended Action), “brain/computer interfaces certainly have the potential to offer a new means of communication with computers. However, they cannot currently be used outside of the laboratory, and have to be overseen by experts.” According to him, the main channels to be explored are the interaction and adaptation of the two main protagonists in the system – man and machine. The machine has to learn to adapt, at all times, to the changing cerebral patterns of the operator.
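The adaptation Millán describes can be sketched as an online update: the machine keeps a running “prototype” of the EEG feature vector for each command and nudges it toward every newly confirmed example, so the prototype tracks the operator’s drifting brain patterns instead of freezing at calibration time. The feature vectors and learning rate below are assumed values for illustration.

```python
ALPHA = 0.1  # assumed learning rate: how quickly prototypes follow the operator

def adapt(prototype, new_features, alpha=ALPHA):
    """Move each prototype component a small step toward the new observation."""
    return [p + alpha * (x - p) for p, x in zip(prototype, new_features)]

# Start from a blank prototype and feed in three confirmed examples of the
# same imagined movement; each session the features differ slightly.
proto = [0.0, 0.0]
for observation in ([1.0, 0.5], [1.2, 0.3], [0.9, 0.6]):
    proto = adapt(proto, observation)
print([round(p, 3) for p in proto])
```

With a small `alpha` the prototype changes slowly and tolerates noisy sessions; a larger value tracks plasticity faster but risks chasing artefacts – exactly the man–machine co-adaptation trade-off the Maia project is exploring.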
But the interface cannot do everything. The autonomy of the elderly and the disabled also depends on other types of research, notably the telecommunications aspect of certain projects. Thus Galileo, which should lead to the future European “GPS”, will offer a satellite positioning system capable of indicating a position to the nearest metre. Thanks to this accuracy and a detailed geographical information system, everyone will be able to ascertain their exact position and calculate the best way to get to the bus stop or the nearest pedestrian crossing. For the visually impaired, it will simply be a matter of creating a voice interface providing a complete “audio map” of the area they are in.
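Finding “the nearest bus stop” from a satellite fix starts with a great-circle distance calculation; the standard tool is the haversine formula. The stop names and coordinates below are invented for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented user position and bus stops (latitude, longitude).
stops = {"Stop A": (50.8505, 4.3520), "Stop B": (50.8470, 4.3570)}
user = (50.8503, 4.3517)
nearest = min(stops, key=lambda s: haversine_m(*user, *stops[s]))
print(nearest)  # Stop A
```

With metre-level accuracy of the kind Galileo promises, the difference between two stops a street apart becomes reliably resolvable – which is what makes a spoken “audio map” for the visually impaired practical.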
Another great challenge over the coming decades is home support for the elderly. Projects like “Healthservice24”, “Chronic” or “MyHeart” (cf. panel) propose vital signs analysers that can be used in the home and relay the logged information directly to the medical services via UMTS or GPRS (transmission protocols for mobile telephones). This information will be processed immediately in healthcare centres and will lead to action when necessary. Practitioners will then be able to contact the patient, thanks to a video-conferencing system, via the Internet or digital television. It is therefore a complete infrastructure that is being studied: monitoring of patients in their homes, remote consultation, and storage of a person’s medical data allowing the doctor to access complete information on their patient.
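The “lead to action when necessary” step of such monitoring can be sketched as a simple range check on each relayed reading. The normal ranges below are textbook-style values used purely for illustration, not those of any of the projects named above.

```python
# Assumed normal ranges for a few vital signs (illustrative values only).
NORMAL_RANGES = {
    "heart_rate_bpm": (50, 100),
    "spo2_percent": (94, 100),
    "temp_celsius": (36.0, 38.0),
}

def check_vitals(reading: dict) -> list:
    """Return the names of all measurements outside their normal range,
    i.e. the ones that should trigger an alert at the healthcare centre."""
    alerts = []
    for name, value in reading.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts

print(check_vitals({"heart_rate_bpm": 115, "spo2_percent": 97, "temp_celsius": 37.1}))
# ['heart_rate_bpm']
```

In the infrastructure described above, this check would run at the healthcare centre on data arriving over UMTS or GPRS, and a non-empty alert list would prompt the practitioner to open a video-conference with the patient.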
(1) Taken from the Greek haptein (to touch), the term haptic refers to the science of touch, by analogy with acoustics or optics.