Interpreters are some of the most resilient professionals I know. We have proved so again and again over the last few years – and we are about to prove ourselves once more.
About six or seven years ago, simultaneous interpreting delivery platforms (SIDPs) such as VoiceBoxer and Interactio burst onto the interpreting scene, with global organisations such as Amnesty International adopting location-neutral models for their day-to-day meetings and team sessions. This kicked off a heated debate about the validity and viability of remote online interpreting.
Then, COVID-19 happened… but we all know how that story went.
And now, machine interpreting systems – or ‘interpretators’, as I like to call them – are making their grand global debut at the hands of tech giants and language service providers. According to them, artificial-intelligence tools are (or will be) able to render spoken statements in as many languages as needed, in the form of automatically generated subtitles and/or computer-generated ‘oral’ renditions.
We, the flesh-and-bone professional interpreters of the world, worried for a few minutes but quickly realised that their botched appearances on the global stage – such as at the Grammys – are fortunately giving us a moment to catch our breath and regroup.
Which leads me to the question du jour in the interpreting community: supposing AI-powered language interpretation works, what would differentiate it from actual people-powered interpreting?
Which, in turn, begs the age-old question of what value interpreters truly bring to the table.
In two months, I’ll be celebrating my first 20 years in interpreting, and for at least the last 18 of those I have been certain that, provided the basic requirement of excellent translation/interpreting skills is met, the secret to outstanding interpreting lies in the interpreter’s command of the necessary support skills for each job.
It is through these support skills that we put – and will continue to put – the human in interpreting.
It is our ability to detect and properly decode nuance – to connect the dots and understand the relationship between contexts, situations, and past experiences – that makes the difference.
It is our capacity to ‘read the room’ and understand body language – to embody cultural sensitivity – that helps our clients save face. It is our innate ability to fill in the gaps and meet our speakers halfway that allows us to honour the true communicative intent behind their message.
It is our flexibility and resourcefulness that allow us to respond to and fulfil our clients’, speakers’, and audiences’ expectations.
Each time we sit behind a microphone, we do a lot more than ‘just change’ one language into another. On their own, linguistic codes barely scratch the surface of a message, and any interpreter who thinks of themselves solely as a linguistic code converter is selling themselves short and equating their whole being to a microchip processor.
However, it seems that despite all our might and skills, interpreters around the world will now need to add a new language to our combinations alongside Globish: ChatGPT.
Last week alone, four different colleagues told me they suddenly found themselves interpreting AI-generated speeches and presentations, some of which barely made any sense.
This might very well be a temporary trend but, should it continue, the 21st-century interpreting tech saga will not find human interpreters being replaced by interpretators but instead forced to interact and work with them – to lend them our minds and voices and, by allowing them to benefit from our expertise and carefully crafted manoeuvring, bring them to life – much like in The Terminator, or Pinocchio before it.
Article originally published in the February 2023 edition of the ITI London Regional Group Newsletter under A View From The Booth