How should we interpret? A counterpoint to Panayotis Mouzourakis' article

Interpreting is not just about language. Communication theory, as it applies to both monolingual and interlingual communication, must also be brought into the equation.

I was most pleasantly surprised by Panayotis' piece "How do we interpret?". This kind of reflection should be a constant presence in any professional forum. That having been said, I cannot but take issue with some of his remarks.

That sense is different and independent from language is simple to demonstrate: we can remember lots of things that we learnt through reading or listening without even remembering what language we "understood" them in. If we can remember what was said without remembering… what was said, then what was said is different and independent from what was "said" (i.e. there is an ontological difference between the conventional meaning of the signs and the message that is conveyed by means of them). Also, if sense were not different and independent from language, it could not be conveyed by non-linguistic signs (gestures, iconic or non-iconic images, voice modulation, etc.). If nothing else, our very métier proves it beyond doubt: if I can make the same "sense" even though not a single original word remains, then that same sense - made by the speaker in his language and by me in mine - must be different and independent from either.

Why, then, has Seleskovitch's theory been - indeed, why does it remain - so moot? I suggest it is because it is generally right but not particularly so. Let me explain. According to Seleskovitch, the interpreter apprehends every few seconds (the interval depends on several variables, not least the interpreter himself and his familiarity with the speaker and the subject) "chunks" of intended meaning or unités de sens, which become the basis of his own reverbalisation. Meaning/sense is apprehended and (re-)verbalised unité by unité.

This is, indeed, the way it normally works in the booth (and I submit that it sometimes substitutes for real critical comprehension), but not quite in consecutive or dialogue interpreting (or, of course, written translation). Sense - even as comprehended in real time and online by simultaneous interpreters - is always a metarepresentation based on a semantic representation derived from the meaning of the signs (linguistic or other). Indeed, there cannot be any kind of semantic representation without language (or some other semiotic code). People - including, most surprisingly and notably, many interpreters and translators - mistake the one for the other. If someone says to a friend who has just played Beethoven's 8th piano sonata "That was a remarkable Pathétique," and another listener says "Bravo!" and yet a third adds "C'était vraiment très très bien!" ["That really was very, very good!"], they are not saying "the same thing": semantic information is missing or added with each utterance. Yet, on a metarepresentational level, they all said basically the same thing.

Another important aspect that is normally lost sight of is that true comprehension is always critical. When interpreters or translators put their intelligence service to rest, when comprehension bypasses critical metarepresentation - out of stress, absent-mindedness or wanting proficiency - contresens and, especially, nonsense shine forth. During the General Assembly's general debate in New York, right after the devastating earthquake in Mexico, at least one colleague let the automatic pilot take over and congratulated the Mexican government and people on it. That colleague was not understanding critically. One has to be mindful not only of what a speaker is actually saying, but of what he actually intends to say (he may be misspeaking) or, crucially, of what he cannot possibly be saying - and that alone is a metarepresentation based on the metacommunicative context, not on language. By the way, nobody noticed - because, except for a few exceptional exceptions, nobody actually cares about the initial flowers. They are like the ads before a movie on TV - noise whose sole function is to let listeners know that the speech has not yet begun but is nigh, which explains why even the most uncompromising advocates of no-questions-asked-no-matter-what-let-the-audience-be-the-judge-that's-their-problem completeness do not feel too guilty about tampering with them.

True comprehension, moreover, requires emotive motivation. A speaker is presumably interested in that which he has to say, as are, presumably, his interlocutors, but interpreters often do not give a hoot about it. There cannot be critical comprehension without actual emotive interest in understanding.

But let us go back to our pianist. How about a fourth listener who did not like it at all and, in an ostensively facetious way, says "Sure! The best I've heard!"? Did he or did he not say the same thing? What about a fifth who, also in an ostensively facetious way, says "I have heard better, but not bad for an interpreter," while making it clear that he has indeed liked it a lot? There are semantic differences that are neutralised at the metarepresentational level, and there are semantic similarities that, metarepresentationally, become opposites. The question, then, is how we would go about interpreting the utterances themselves in that situation (sense is always negotiated ad hoc, no matter what dictionaries may lead us to believe: situationally, yes may very well mean no - or rather, a speaker "saying" yes may actually mean to be understood as saying no). Would we or would we not care about the metarepresentations induced in our interlocutors by our own renditions? Would we just "translate" and wash our hands of the metacommunicative effects, or would we very much mind them and verbalise accordingly - "manipulating", if need be, the speaker's utterance or even the meaning presumably meant?

I submit that a consecutivist would have limited himself to noting a simple "!" with some kind of counter-marker to signify irony (how come the same sign for so different a series of utterances?), and that upon restituting (reverbalising!) the metarepresented sense stored in his medium- or even long-term memory, activated by the semiotic stimulus of that "!", he would probably come up with an utterance completely different from any of the above - and would not thereby have failed to do a proper job, at least in most plausible contexts. Provided approval is properly conveyed, with or without irony (i.e. approval, or disapproval disguised as approval), the actual semantic form of the interpreter's utterance is inconsequential.

And how could he - or anybody else - tell that he was indeed doing the right thing? This is where Relevance Theory carries the day. The cognitive (and, I would add, emotive or qualitative) contextual effects would not significantly vary with the actual semantic content of that extra-verbal [very good!]. If they did, then our colleague could be taken to task for not having done a good job - or, at least, not the best possible one. For instance, if the first speaker meant also to let his friends know that he had recognised the piece, this would have been "lost in translation". Now what if the interpreter's audience did not give a hoot about the speaker having recognised the piece? Then the rendition would have been wanting for the speaker, but not for the interpreter's interlocutors, who would have been blissfully spared (for them) useless processing effort. And before anybody comes to tell me (as so many often do) that it is not up to the interpreter to "censor" the speaker, let me remind you once again of the systematic pruning of the flowery Latin flowers by the Anglo-Saxon booth… or of the inevitable pruning in consecutive (unless, of course, the speaker's words are more or less important depending on the interpreting mode, i.e. unless the notion of accuracy, precision, completeness and, generally speaking, fidelity to an original is mode-dependent - which is logically absurd).

What happens is that both saying and listening are purposeful activities governed by their own ends, and these ends are a function of the metacommunicative purposes of communication: what the different actors want to do by making themselves understood or by understanding - the way they intend to change the (their) world. Hence, relevance is always individual and ad hoc. Your interest in this piece will not be the same the second time around, and you will probably skip some passages or, conversely, spend more time pondering others. The corollary is that relevance for the speaker never coincides totally with relevance for the interlocutors. When the consecutivist understands "That was a notable Pathétique!" and chooses to note "!", knowing full well that when the moment comes he will not even bother to remember to say "That was a remarkable Pathétique!", he is - rightly or wrongly - opting for relevance for his audience.

Good interpreters are systematically more perspicuous than speakers (it is, after all, their speciality), especially than those forced to speak a language they do not fully master. More perspicuous almost invariably means less verbose, i.e. paring down the semantic information in full knowledge that it is the metarepresentation triggered by it (and the effects it will produce in the subject) that counts. A most conspicuous case is the total omission of semantic information made redundant by the image on a screen.

Kintsch & van Dijk speak of micro- and macro-propositions. Theirs is, of course, another way of looking at the phenomenon, but, as with Seleskovitch's, this model has an Achilles' heel: it views discourse production and comprehension as a strictly cognitive process, and people never speak or understand with a view to getting just information. They seek to go beyond understanding or producing micro- and macro-propositions; they try to achieve relevant metacommunicative effects - and these effects, in the end, are always pragmatic, i.e. emotive.

Will or may an interpreter "manipulate" a speaker's utterance - and even the metarepresentations he presumably intends to induce? It depends: sometimes yes, sometimes no. It depends, of course, on objective factors such as speed, accent, etc., over which an interpreter has no control. But it depends more crucially on metacommunicative factors, such as whose metacommunicative purposes he chooses to serve: the speaker's, those of the client who has hired him, those of one or more of the interlocutors or, even, his own (have we never deliberately mocked a pompous style to provoke ridicule rather than admiration?). By the way, even when forced by circumstances to convey information selectively, how is an interpreter to select without a conscious or unconscious notion of relevance for somebody (more often than not, I submit, his own clients)?

That relevance is never symmetrical, and that an interpreter cannot but choose whose notion to cater to, is most glaringly seen in adversarial court proceedings. In a judicial setting, the defendant has no intention of coming across as hesitant, yet a legal interpreter must choose relevance for the court (his client): all manner of infelicities that would be anathema to reproduce in a conference must be conveyed most faithfully. Interpreting in private between the defendant and his lawyer as they rehearse their strategy in the courtroom, the same interpreter will translate the same utterances from the same speakers differently: relevance to them is paramount in private (the lawyer wants to make damn sure that he understands his client and, above all, that his client understands him - and will be well advised to trust the interpreter to take care of all the necessary "manipulations" in order to achieve this metacommunicative end), while relevance to the court carries the day in public.

There is, then, no contradiction between Seleskovitch's théorie interprétative, Kintsch & van Dijk's model and Relevance Theory: they look at the same object - communication - from different perspectives. Indeed, sense is comprehended as a non-verbal metarepresentation; indeed, macro-propositions are derived from micro-propositions, which are understood first; and, indeed, both micro- and macro-propositions are processed according to the principles of relevance. All three models are right - and, at the same time, wanting. Only when one brings in the non-cognitive, pragmatic, emotive metacommunicative purposes and effects of communication do the hitherto hidden corners of the stage come fully into view.

And once the stage is fully lit, it is up to me, the interpreter, to determine: (a) the overall metacommunicative purposes, i.e. why these people even bothered to come to speak to each other; (b) the metacommunicative purposes of the different parties, most particularly my own audience; (c) the metacommunicative purposes of the client, i.e. why he has bothered to spend money on an interpreter in the first place; (d) how to weigh the different notions of relevance when they do not overlap sufficiently; then and only then, (e) what this speaker is now saying; and, finally, (f) what I, the interpreter, have to say for it to count as the best possible service under the circumstances: do I or do I not water down offensive language? Do I or do I not simplify an explanation? Do I or do I not "correct" linguistic and factual mistakes? Do I or do I not translate the joke? Do I or do I not come up with a joke of my own? Do I or do I not convey this piece of information that I know will be totally useless or confusing to my audience?

In the course of a single speech - let alone a single event - an interpreter will be called upon to do more, less and other than "translating", i.e. than saying the same thing or even than conveying the same message. How is he to decide what to do when and how?

This has nothing to do with language and everything to do with communication theory, and it applies to monolingual as well as interlingual communication, whether written or oral; simultaneous or consecutive or dialogic; literary or pragmatic; legal, medical or technical; to the translation of opera librettos for the printed programme or for the little screen above the stage or to be sung in turn, etc. This is what, to my mind, should be taught first and foremost… pity that it very seldom is.

Author’s note: These ideas are developed in my Teoría general de la mediación interlingüe, Publicaciones de la Universidad de Alicante, 2004. If anybody cares to read the English version, I can make it available as a Word file.

Recommended citation format:
Sergio VIAGGIO. "How should we interpret? A counterpoint to Panayotis Mouzourakis' article". May 24, 2005. Accessed July 10, 2020. <>.

Message board

Comments (1)


Jim Nolan


Thank you, Sergio, for your very perceptive comments, particularly about the emotive dimension of language, which is often carried not by the words but by the tone, color and inflection of the speaker's voice, not to mention his manner or demeanour. Due to that emotive dimension, words can sometimes mean something entirely different from what they appear to mean, or what they would mean if read on a printed page instead of being spoken and heard, i.e. conveyed aurally. The most relevant example you give, irony, is the clearest case (another being figures of speech). I must share with you this anecdote: At a conference last year of the Canadian Language Industries Association (AILIA), a team of computational linguists presented a machine translation programme they had developed which actually produced remarkably good translations of straightforward sentences. Indeed, so sophisticated was this programme that it actually produced three plausible English translations of François Villon's famous verse "Mais où sont les neiges d'antan?" ["But where are the snows of yesteryear?"]. Nevertheless, during the question-and-answer period following their presentation, I raised (rather perversely, I admit) the question of what the machine would do when a speaker meant the opposite of what he said (or something quite different from what he said). They seemed stumped, quite at a loss to recognize that such a thing actually happens, though it did occur to one of them that perhaps what I had in mind was "sarcasm". Yes, that is one of the things I had in mind, but there are many others, all too familiar to a practicing interpreter and, apparently, still unfamiliar to computational linguists. Nuances, shades of meaning, emphasis, hues of emotion, degrees of conviction carried by the voice - all are part of the message. Computers, at the present stage of their evolution, can be forgiven for not yet having learned to recognize them and render them.
We, as interpreters - especially if we fancy ourselves to be competent diplomatic interpreters - cannot. That is why, in my training manual, Interpretation: Techniques and Exercises (published this year by Multilingual Matters, UK), I devoted a chapter to humor and tried to distinguish situations in which humor may be translatable from situations in which it is best not even to attempt it. We must never forget that, although we may have learned most of our translation techniques from the printed page, what we are dealing with in the booth are sounds - I would even say, in some cases, a kind of musical composition. Another event last year brought this home to me: The Connecticut Chapter of the Alliance Française sponsored a contest to see who could produce the best French translation of Lincoln's Gettysburg Address. All the entries were good, and one was judged better than the rest, but the contest was ultimately declared a failure. Why? I suspect it was because, no matter how faithfully the words and style were rendered, no one succeeded in reproducing the sheer majesty of that short but flawless composition, in which every phrase flows and builds on the last one to produce a thing of sheer beauty. There's no better example of the fact that when we "translate" a speech, we are not just dealing with words.
