An interview with Panayotis Mouzourakis
A physicist-turned-interpreter explores the interpretation booths of the third millennium and debunks popular assumptions about new technologies.
Not everything that is feasible is good or even useful, says European Parliament staff interpreter Takis Mouzourakis to Communicate!. All the same, Information and Communication Technologies (ICT) are moving into the booth, whether we like it or not. Interpreters had better organise themselves and think collectively about what they need if they wish to have a say in the design of the booth of the future.
Few interpreters seem to be interested in whatever impact new technologies will have on their work in the near future. Why do you think that is so?
When you think about it, our working environment has not undergone any fundamental changes in the last 50 years. Of course, technical standards have improved dramatically, especially in terms of sound quality, soundproofing and overall comfort, but the standard equipment used by interpreters - microphones, consoles and headphones - certainly has not changed. This may have lulled interpreters into believing that ICT does not concern them.
But at the same time, the Internet has brought about a revolution in the way information is distributed and accessed. Documents are increasingly available in electronic form only. And multimedia, such as video and audio, are making their way into the meeting room. All of these new developments are likely to have a considerable effect on the way interpreters work in the future.
You seem to imply that interpreters should be more pro-active in this debate.
Indeed, they should. They should certainly not let events overtake them. The forthcoming European Union enlargement is a good example. In the space of the next ten years, the EU might end up with as many as twenty-two official languages, twice the current count. This will create enormous pressure on its institutions if the existing system of interpretation is maintained after enlargement. One of the challenges they will have to face, apart from the need to train interpreters in the new languages, is finding enough space in meeting rooms for all the interpreter booths, above and beyond that needed for the extra delegates.
There is already the suggestion that remote conferencing should provide a way out by removing the interpretation booths and relocating the interpreters. Whether this actually happens or not, this is emblematic of the way new technologies will be expected to provide solutions to the complex linguistic problems posed by enlargement.
Could you define remote conferencing?
Remote conferencing generally refers to any meeting where not all of the participants are physically present in one place but are instead linked via video and/or audio. In the specific context of interpreting, this implies that interpreters work in front of a screen without a direct view of the meeting room or the speaker. This is different from video-conferencing, where the interpreter is still physically present in the meeting room where most delegates are gathered, except for one or more participants who are attending remotely via a video link-up.
In your opinion, what issues remain to be solved before remote conferencing becomes a valid option?
There are many such issues. Unfortunately, interpreters have been conspicuously absent from a debate which might be crucial to the future of our profession but has -- so far -- been dominated by administrators and technicians. To participate in this debate as equal interlocutors it will not be enough to merely put across our specific perspective as interpreters; we should also strive to improve our understanding of the technical, medical and economic issues involved and propose concrete solutions for them.
It is technical issues that have been most prominent up to now in this debate, especially concerning the quality of sound transmission over ISDN connections. I believe that issue will ultimately be resolved as better sound encoding technologies such as MP3 become the norm; indeed much progress has been achieved on that score. In a remote interpreting experiment conducted at the United Nations last year, some interpreters even found the sound quality to be better than what they were getting in their normal booths.
It will be more difficult to compensate for the loss of direct view into the meeting room, which, as every professional interpreter knows, is essential. And this is not just because you need to be able to follow that PowerPoint presentation as well as any delegate. It has been estimated that as much as 40 percent of the information contained in a speech is conveyed by non-verbal cues. Even the best split screen arrangement and camera work cannot compete with the interpreter simply being able to look around the meeting room at will, picking out whatever cues are relevant to the situation at hand. Very often it is the reactions of specific participants rather than the face or body language of the speaker alone that give the interpreter the full story. As there is no way that interpreters can individually control the camera angle, this information will usually be lost.
A second group of issues is directly related to the human factor in remote conferencing. To this day, we have only scant information about the medical and psychological consequences of conference interpreters being physically separated from the meeting room.
It is fair to assume that having to interpret for days on end in front of a screen will have an impact on health. Medical questions such as the resulting eye fatigue in particular must be considered before any technical solution is adopted. It is important that adequate standards concerning screen quality and placement be established and respected. Strict working time limits for interpreting under such conditions will also have to be enforced. This is where a professional association such as AIIC has a major role to play.
But there are also more subtle psychological issues. For instance, all interpreters who have already experimented with remote conferencing have reported a nasty feeling of alienation. It appears that empathy with the speaker, a sense of participation in what is going on, are much more difficult to achieve in remote interpreting conditions. What is very definitely missing from a remote conferencing situation is feedback from the interpreter's direct clients. Lacking such feedback, the interpreter is in a perpetual state of uncertainty: is the message getting through or not? This can lead to a feeling of insecurity.
Motivation is an additional problem: interpreters only accept the stress of constantly resolving verbal puzzles and making split-second decisions because there is someone at the receiving end, because human communication is a value in itself. Working for a barely visible two-dimensional image is not enough to get their adrenaline flowing, let alone provide them with any degree of self-fulfilment and work satisfaction.
And then there is the crucial problem of quality. Interpreters participating in remote conferences have reported that they have had to struggle to maintain an adequate level of performance; this probably explains the abnormal fatigue, headaches and other ill effects they experience. Even so, they are unanimous that the quality of their work necessarily suffers under such conditions.
You stated that interpreters should be more aware of the economic aspects of ICT.
Absolutely. Interpreters are subject to market forces like everybody else. But conference interpreting is in certain respects the equivalent of a luxury good; it is only worth paying for if it provides the high level of quality expected of it. And quality comes at a certain cost. While this has been accepted by most conference organisers until now, there is a real danger that new technologies might be used not to improve the quality of interpreting but to cut costs, at the expense of quality. This is why addressing only the technical and the medical or psychological aspects of remote conferencing -- important as they are -- while ignoring its economics is not enough.
In theory, the introduction of remote conferencing would allow major users or providers of interpretation to "globalise" interpretation resources and use them more efficiently. It would make it possible to reuse interpreters from a cancelled meeting or from one that ended early, and also to reallocate interpreters covering rare languages between different meetings as needed. It would also be possible to "outsource" interpretation to interpreters in distant, low-wage countries, thereby making substantial savings. At the same time, meetings would continue to be held in traditional meeting rooms, with the extra bonus of freedom from the constraints of providing space for interpreter booths.
The prospect of such a scenario should lead conference interpreters, and in particular AIIC as their representative organisation, to establish a clear framework for the exercise of our profession. For instance, is it acceptable for interpreters to be assigned to many different meetings on the same day, which they cannot possibly prepare? Clearly, remote interpreting would require a whole new set of binding rules on working conditions, covering technical standards, working time and team strength.
This debate is essential because the introduction of remote conferencing risks having a negative impact on quality, and might even have the same effect as hiring sub-standard, non-professional interpreters. We all know that if the quality of interpretation drops below a certain threshold it becomes useless or worse. If we allow this to happen, it might not be very long before broken English becomes the only lingo in international gatherings.
This will no doubt make many an interpreter's hair stand on end. Still, there is a saving grace in the brave new world you have sketched: it is still inhabited by interpreters -- contrary to many predictions that language engineering would make them redundant. Would you like to comment on that?
Translators have already had to live with such prophecies of doom for a long time. Machine translation was one of the first applications computers were used for, as far back as the 1950s, and it has been part of the Holy Grail of artificial intelligence (AI) ever since. Yet, in spite of the tremendous increase in computing power over the past fifty years, the results achieved in this field have been disappointing, as you can easily verify by using translation engines on the Internet.
Understanding and translating human language is a very different task from crunching numbers, which is what computers were originally built to do. Even after parsing a sentence and looking up all the words in an electronic dictionary, a computer still doesn't have the slightest idea of what that sentence means. Spoken language is even more elusive: over and above mere strings of words, meaning depends on such things as prosody and intonation. Most speech acts are inherently ambiguous as long as their specific context has not become clear. Being aware of this context, as the interpreter has to be, takes an incredible amount of world knowledge and first-hand experience of the way human beings interact in a given culture. Nobody knows how to even start cramming all this information into a computer.
The argument for interpreters becoming redundant rests on the empirical observation that, at fixed cost, computing power doubles every 18 months. At this rate, an average PC is expected to match the processing ability of the human brain somewhere around the year 2030. Some AI experts have claimed that it will then become possible to replicate the human brain on silicon or even download it at will. They also expect robots to equal and then overtake human intelligence, perhaps even acquiring consciousness in the bargain. I feel this is too simplistic, especially as there is increasing evidence that information processing is not confined to the brain or nervous system alone but also takes place at the cellular or even subcellular level. That would increase the processing power required for intelligent robots by a few orders of magnitude.
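The doubling argument above is a simple extrapolation, and it can be made concrete. The figures in the sketch below are illustrative assumptions, not claims from the interview: a year-2000 PC performing roughly 10^9 operations per second, and a (much-contested) estimate of roughly 10^15 operations per second for the human brain.

```python
import math

# Illustrative Moore's-law extrapolation. Both capacity figures are
# assumptions for the sake of the arithmetic, not established facts.
PC_OPS_2000 = 1e9      # assumed throughput of an average PC around 2000
BRAIN_OPS = 1e15       # one contested estimate of human-brain throughput
DOUBLING_YEARS = 1.5   # "computing power doubles every 18 months"

# Number of doublings needed to close the gap, then convert to years.
doublings = math.log2(BRAIN_OPS / PC_OPS_2000)
crossover_year = 2000 + doublings * DOUBLING_YEARS
print(round(crossover_year))  # ~2030, consistent with the figure quoted above
```

Note how sensitive the result is to the assumptions: raising the brain estimate by a factor of a thousand, as the interviewee suggests cellular-level processing might require, pushes the crossover back by about fifteen years.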
This argument has at least the merit of reminding us that there are a number of hard questions we still have to come to grips with: What is cognition? What is consciousness? Why do we have feelings? In the process of finding an answer we will hopefully narrow the existing gap between science and philosophy, between hard facts and their ultimate meaning.
Our translator colleagues realised long ago that they could make the most of ICT. A whole set of tools - online databases, computerised glossaries, and translation memories - have made them more efficient. No specific tools have been developed for interpreters. Is there such a thing as computer-assisted interpretation?
Not yet. And it is a pity. Interpreters are confronted with a growing complexity in subject matters - in all possible fields. To properly prepare meetings, they need to be able to research specialised themes, browse through background documents and meeting notes, and compile terminological information of their own.
Interpreters increasingly access this information in electronic form, whether it is on their own PCs or the Internet and private intranets. It is also increasingly in electronic form that they collect terminology and exchange information after a meeting. What is clearly missing is access to electronic information in the booth during the meeting itself, especially for up-to-date meeting documents.
Asking all interpreters to equip themselves with personal digital assistants (PDAs) or laptops and carry them into the booth is not really an option; such devices can be obtrusive and noisy, and would anyway need to be configured to connect to the local network.
Using interpreter consoles for this purpose, augmented by a touch-sensitive display screen, would avoid both difficulties. With state-of-the-art consoles increasingly becoming fully digital, adding an Internet and intranet connection and a browser interface should not be a problem either; the interpreter would then be able to view session documents or log in to terminological databases on the Web without leaving the booth.
We can also think of other types of interpreting aids. Voice recognition techniques could be used in some limited domains where they can provide reliable results. Computers could, for instance, take over the task of identifying numbers and acronyms when they occur in a speech and then transcribe them on the interpreter's console screen. I believe that most interpreters would find that extremely helpful.
You make the point that technology should not disregard people's real needs. Even then, it is fair to assume that the learning curve will be rather steep for most colleagues. Would you like to comment on that?
The real challenge for technology would be to provide interpreters with the means for improving their effectiveness, at the same time making their work more rewarding for them. This would be of benefit to everyone: clients, conference organisers and interpreters alike. It is the needs of interpreters and their clients that technology should serve rather than some abstract notion of "progress".
Introducing new technologies will not make a lot of sense unless the necessary training is provided for interpreters to use them. This will be relatively easy for the new generation of interpreters who have come to take such things as the Internet for granted - much less so for older colleagues.
Because of this generation gap some tension will be inevitable, but I do not think that the introduction of new technologies will have to be as traumatic as the transition from consecutive to simultaneous interpretation after WWII. Interpreters will not need to acquire a new technique or abandon centre stage, which they left long ago. Still, the best way to ensure the successful introduction of ICT in interpreting is to involve interpreters directly in it, encouraging them to assume direct responsibility for the future of their own profession.
Articles published in this section reflect the views of the author(s) and should not be taken to represent the official position of AIIC.
Panayotis (Takis) Mouzourakis studied physics in the United States and in England. He holds a PhD from Geneva University. He worked at CERN as a physicist before branching out into interpretation, first as a freelance and since 1983 as a staff interpreter in the Greek booth for the European Parliament. His change of religion is due to the fact that "scientists work with objects, and interpreters with people", says he -- with a twinkle in his eye! Takis can be reached at PMouzourakis@fefqmdn2d.europarl.eu.int
Recommended citation format: Vincent BUCK. "An interview with Panayotis Mouzourakis". aiic.net March 23, 2000. Accessed January 27, 2020. <http://aiic.net/p/121>.