Project’s concept: My DA focused on the relationship between AI and human interpreters. With the rise of machines, can humans still make a living? For occupations like translator and interpreter, what advances has AI made, and what challenges remain given the subtle differences among languages? In the short, medium and long term, what role will machines play in this relationship?
Iteration: At first, I had not figured out what DA I really wanted to demonstrate in my first video pitch, until I got feedback from Callum Harvey and Alexander Mastronardi. Callum suggested that I research ‘whether AI would replace human interpreters’, while Alexander gave me the clue that I could focus on ‘how current interpreters are already using emerging technology’. So I did some research as they suggested and found more interesting ideas beyond these. In the end, I chose to present my viewpoint on the aspects of technology and language.
Methodology: I was going to create a website on the topic of the future of interpreting, as I said in my beta presentation. However, it is a pity that I did not manage it: I purchased a domain but could not figure out how to connect it to my WordPress site. So, based on the fast, inexpensive, simple and tiny (FIST) design principles, I wrote a blog post with all the research I had done and the relevant videos, in the hope of demonstrating my ideas clearly. Then I shared it on my Twitter and Facebook accounts, as well as my WeChat Moments, in order to hear different opinions on this topic.
Utility: The aim of my DA is to give others a relatively objective idea of the relationship between AI and human interpreters. One should take a comprehensive attitude when facing the rise of machines. Once you explore and investigate the whole situation, it becomes unreasonable to hold extreme views, such as that AI will fully replace most jobs and that more people will be unemployed in the future. Some repetitive jobs can easily be replaced, but when it comes to sophisticated jobs like simultaneous interpreting, AI still has a long way to go.
Overall trajectory: At first, I had only a rough idea of what I wanted to express. Because my major is English Interpreting, predicting the future of interpreting is a great concern of mine if I want to devote myself to this career. After researching some specific points, I found that it is of great significance to have a proper attitude towards the rise of machines. So I narrowed my research to the relationship between AI and human interpreters, and investigated the language and culture barriers in the process.
Project’s success and limitations: The biggest success of this project is that I now have a clearer idea about the future trend of interpreting, as well as a more profound perspective for investigating things. However, the biggest limitation of my DA is that I have not received enough feedback from social media to have a further discussion about this interesting topic. On reflection, one reason is that I only have a small number of followers on Twitter and Facebook, which limits the spread of my ideas. Another reason is that I published quite late, so I was unable to receive feedback before finishing this essay. The subject has now finished, but I will continue to refresh my ideas on WordPress and share them with people who are interested.
How you approached the future cultures DA challenges: There are always myths about machines taking over from humans in the future; such imaginings can also be seen in some futuristic movies. In my opinion, the smartest choice is to keep a close eye on the trend of AI while, at the same time, learning as much as we can using our own brains. Being irreplaceable and always having a core competitiveness can help us a lot in addressing future challenges.
Nowadays, there is a group of people calling themselves ‘AI refugees’, meaning that their jobs are more and more threatened by artificial intelligence. It is predicted that in less than 5 years, 6% of jobs will be replaced by AI, and that in 20 years, 60% of jobs will be threatened by it. With the rise of machines, can humans still make a living? When it comes to a job like interpreting, different people hold different views. For those unfamiliar with the occupation, interpreting is a translational activity in which one produces a first and final translation on the basis of a one-time exposure to an expression in a source language. Generally speaking, there are two modes of interpreting: consecutive interpreting (CI), done during breaks in the exposure, and simultaneous interpreting (SI), done at the time of exposure to the source language.
When speaking of the future of interpreting, some people believe that AI will replace human interpreters with the development of neural machine translation (NMT), a newer AI technology that captures information about the meaning of phrases, rather than just direct phrase-for-phrase translations. However, others hold the view that although AI can threaten interpreters to some extent, machines still have a long way to go to reach the sophistication of human interpreters. As far as I am concerned, the relationship between machines and human beings is interdependent: human interpreters cannot accomplish their work without the assistance of machines such as headsets, microphones and laptops, whereas machines alone cannot convey complicated cultural connotations, humor, emotions, non-verbal communication and so on. As a result, taking a proper attitude towards the future of interpreting is of great significance.
The advances and challenges of AI in interpreting
As the world’s most popular translation software, Google Translate supports over 100 languages and serves over 500 million people every day using AI technology. In September 2016, Google Translate switched from Phrase-Based Machine Translation (PBMT) to Google Neural Machine Translation (GNMT), which translates ‘whole sentences at a time, rather than just piece by piece’. There is even an algorithm called Bilingual Evaluation Understudy (BLEU) for evaluating the quality of machine-translated text: ‘the closer a machine translation is to a professional human translation, the better it is’.
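The core of BLEU can be shown in a few lines: count how many of the candidate translation’s n-grams also appear in a human reference (clipped so a word cannot be rewarded more times than it occurs in the reference), then penalize candidates that are too short. The following is a minimal toy sketch of that idea, not Google’s or any production implementation, and the smoothing constant is my own simplification:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, candidate, max_n=2):
    """Toy BLEU: clipped n-gram precision (orders 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        total = sum(cand.values())
        if total == 0:                      # candidate shorter than n tokens
            return 0.0
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        precisions.append(overlap / total if overlap else 1e-9)  # crude smoothing
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the closer a machine translation is to a human translation the better".split()
print(bleu(ref, ref))  # identical sentences score 1.0
```

A real evaluation (as in the original BLEU metric) uses multiple references, orders up to 4, and corpus-level statistics, but the scoring intuition is the same.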
Apart from Google Translate, there are a few other machine interpreting solutions on the market, including a telephone-based service launched by the Israeli startup Lexifone in 2013, the Nara Institute of Science and Technology’s translation app VoiceTra, and the Clik earbuds deployed by the UK-based startup Mymanu. All of these efforts represent a marked improvement on previous ways of interpreting. People today can understand foreign languages while traveling, working and communicating simply through an application on their smartphones. Furthermore, researchers from the Nara Institute are understood to be working on a lag-free interpreting system for the 2020 Tokyo Olympics, which will reportedly transpose the games’ Japanese commentary in real time.
Jonathan Rechtman, a Chinese-English conference interpreter with more than 10 years’ experience, believes that AI is probably all upside in 3, 10, or 15 years. He also argues that there is going to be a tremendous increase in efficiency and a lot of value that will be unlocked. However, he points out that the rise of machines, and of the ultra-rich who capture the wealth created by those machines, threatens to dominate the mass of knowledge workers.
This is an extract from a debate between Jonathan Rechtman and Hu Yu, CEO of iFLYTEK, one of China’s most well-established AI companies. The topic is AI’s impact on interpreting. In the video, Jonathan stresses that we are standing at a turning point in history and that the relationship between humans and machines is about to be turned upside down. He also mentions that AI is challenging not only our jobs, but also our dignity.
The picture above, published by The New Yorker in October 2017, suggests that robots are the new white-collar workers of the future, while humans are left as beggars on the street.
Shoshan is the CEO of One Hour Translation. In his view, neural machine technology will carry out more than 50% of the work handled by the $40 billion translation market within one to three years. His optimistic take on neural machines is as follows:
“Today with neural machines, for a growing amount of material and categories, they only need to make a very small number of changes to what a machine outputs, in order to get a human-quality translation.”
Furthermore, Shoshan acknowledges that today, on average, only 10% of a machine-translated document needs to be fine-tuned by humans to meet the standards expected by his company’s Fortune 500 clients; just two years ago, that figure was around 80%. He also warned that if machines can do what you can do, then you have a problem. Many translators and agencies will tell you that certain highly specialized translation services will require a human touch for the foreseeable future, and that may be true. But the bulk, 80% of the material that corporate customers pay to have translated today, will be machine-translatable in the next one to three years. He even stressed: “And importantly, we’re not talking about five to ten years; we’re talking one to three years.”
From what has been discussed above, we can see that people in many different trades are concerned about AI and automation. With the ability of deep learning, some repetitive tasks and even expert jobs are being challenged: AI works 24/7, absorbing information on global trends from all sorts of sources, including social media and financial data. The advances of technology are striking, but at the same time, the challenges, glitches and hilarious moments AI has faced in the field of interpreting should not be neglected.
“How arrogant!” he thundered to the crowd. “How could I be so arrogant?” But the real-time subtitling program valiantly struggled to render his rhetorical device. “How?” the subtitles asked. “Aragon, I looked at myself and i”.
To conclude, before it can meet the basic requirements of real-time communication without hilarious mistakes, AI still has a long way to go to be truly qualified as a bridge between two languages. In the translation industry, AI is now equipped with neural translation, which increases the accuracy and quality of a machine’s output, yet teaching machines to truly understand natural language has been one of the biggest challenges facing computer scientists working on artificial intelligence for decades. Moreover, interpreting can also be regarded as a cultural exchange between two languages, and the huge differences among languages can obstruct the conveyance of meaning.
The language and culture barriers in interpreting
Today there are between 6,000 and 7,000 languages in the world, of which about 1,000 have some economic significance. That is to say, the technology would need to cover all of these languages in order for translation technology to take over from humans. We should not forget that Google Translate supports only around 100 languages, so there is still a very long road ahead. Translation is not only about understanding the meaning behind each word, but also about interpreting context and culture, capturing subtle connotations and nuances in the target language.
From a linguistic and cross-cultural perspective, human interpreters must pay close attention to differences in language and expression, poetic elements, cultural and customary differences, context, and verbal and non-verbal communication when rendering a source language into a target language. This is not easy even for compound bilinguals, individuals who learn both languages in the same environment and context and often use them concurrently or even interchangeably. This situation typically arises when a child is raised by bilingual parents and both languages are used at home, so the two languages are not kept separate and can be switched between at will, even mid-sentence. Yet even though bilingual individuals already command two languages, they are not qualified to work as interpreters before professional training, and the United Nations takes actual language skills fully into consideration before recruiting conference interpreters.
Nowadays, there are more and more critical voices about translation applications and services for their inability to convey meaning accurately. Human interpreters, by contrast, use context to determine the meaning of words and take non-verbal elements into consideration. Because words take on different meanings in different contexts, machine renderings of figurative and metaphorical language are only occasionally accurate. This human instinct cannot easily be imitated by a machine, however advanced the algorithm.
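To make the role of context concrete, here is a deliberately naive disambiguation sketch: given an ambiguous word, it picks the sense whose “signature” context words overlap the sentence most. The example word, sense labels and signature lists are all hypothetical illustrations, not taken from any real translation system, which would use far richer statistical context than simple word overlap:

```python
# Hypothetical mini-lexicon for an ambiguous verb such as Chinese 打 (dǎ),
# which can mean "hit", "play" (a sport) or "make" (a phone call).
SENSES = {
    "hit":  {"fight", "punch", "strike"},
    "play": {"basketball", "game", "ball"},
    "make": {"phone", "call", "dial"},
}

def disambiguate(context_words):
    """Return the sense whose signature words overlap the context most."""
    scores = {sense: len(sig & set(context_words)) for sense, sig in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate(["he", "will", "phone", "call", "her"]))  # → make
```

Even this toy shows why word-by-word translation fails: without the surrounding words, no score separates the senses at all, which is exactly the gap neural systems try to close by encoding whole sentences.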
In a word, AI is threatening most low-end, repetitive work and is likely to replace many of the people doing those jobs. However, the relationship between AI and interpreting will not change much in the short and medium term. That is, until AI technology is mature and smart enough to notice the subtle differences among languages and to break down cultural barriers without making hilarious mistakes, we translators and interpreters are safe. Technology serves as a tool and assistant for human interpreters, while huge amounts of material can be collected, analyzed and deep-learned by machines. This balance should hold until the emergence of a more powerful technological breakthrough in the long term.
My DA addresses the future cultures challenge with regard to the future of interpreting. With the development of technology, more and more people believe that we will not need any human interpreters in the future, so my responsibility is to do some research and draw an objective conclusion. The feedback I received earlier has been taken into consideration, leading me to focus on specific aspects of AI’s impact on interpreting.
The method I am taking is creating a website on the topic of the future of interpreting. The social utility of this topic is to raise people’s awareness of the important role interpreters play in the international arena and to make them ponder the prospects of language barriers.
The research I have done in this area includes two parts. First, from the perspective of official organizations such as the United Nations, how do they view the future of interpreting? Second, from the perspective of privately owned translation companies and services, including Google Translate, what contributions are they making to assist human interpreters by providing emerging technologies?
From week 8 to 12, I grew more used to live-tweeting the film each week, and it is indeed a great example of presentational media. All in all, I gained a lot from this experience, but there are still things I need to improve. The following parts best express my strengths and weaknesses during the whole process.
The following link is the tweet I liked best. It is a sentence I quoted from a film review of ‘Blade Runner 2049’. I got six likes and one retweet, which shows that I found some resonance with my fellow students on deeper issues like gender equality. https://twitter.com/Guo_Jiafeng/status/1131341277939716096
Then there are also some disadvantages, for which I see two main reasons. From the objective perspective, this was my first time using Twitter. So far, I have followed 152 accounts but have only 30 followers; I have sent 273 tweets and given 804 likes. After checking the tweets of some of my classmates, I found that these numbers matter, because my voice had very limited influence. And because English is not my mother tongue, the ideas I expressed could be shallow and the words I used were very simple. As a result, my experience of live tweeting was definitely not the same as my classmates’. Even though I found live tweeting very intriguing, I suspect my classmates enjoyed it more than I did; to be honest, I was a little jealous that they could discuss issues critically and deeply. From the subjective perspective, I found it difficult to connect the ideas and concepts of the weekly lectures to the live-tweeting experience. For instance, week 10’s lecture was about drone stories while the film was Marjorie Prime; the two did not seem very relevant to each other. Also, because those ideas were new to me, I had to look up both the Chinese and English meanings online. For instance, what is a digital artefact? When I typed it into Baidu, the biggest search engine in China, I could not find any useful information, only a poor literal translation. Then I checked Google and got the definition, but still felt I could not understand it 100%. As a result, I only managed once to combine lecture ideas and the film together in my live tweeting, with ‘The Matrix’. The methods I can take to address this problem are repetition and brainstorming. Through repetition, when I do not understand a specific term, I look for its usages in different contexts and consolidate them by reading important passages about it repeatedly; normally, after reading a passage three or four times, I have an idea of what is going on.
Through brainstorming, I think of the important points first and then list them all on a piece of paper. Once I have written things down, the unclear parts become obvious and the connections can be found more easily.
In conclusion, the five weeks of live tweeting made me much clearer about what I am doing. I am also very grateful for the trials I made in the first six weeks. They were not perfect, full of naive viewpoints and silly questions, but I cherish them as the foundation bricks for building a solid house in the future.
The first presentation is from Emily Koletti. It is about evidence that ‘Black Mirror’ theories predict things precisely. Emily did a great job of explaining what #BCM325 is about, what a DA is, what Black Mirror is, and how she made progress after receiving feedback from her peers, all of which is really helpful for someone unfamiliar with Black Mirror like me. I found the structure of her presentation very clear and realized I can learn from her. I am also impressed that Emily joined a Black Mirror community to gain a deeper understanding of the theories, so that she can take in others’ opinions and express her own ideas in a better way.
As for suggestions: since Black Mirror deals with the unanticipated consequences of technology for humanity, I found it interesting that there are some voices against the premise of Black Mirror, believing it places too much emphasis on the dark side of humanity and the bad influence of technology. So I suggested that different ideas should be welcomed, to avoid focusing too much on the disadvantages of technology. Charlie Brooker has a famous comment on Black Mirror: ‘It’s not a technological problem we have, it’s a human one.’ There have also been some negative comments since Netflix purchased the programme in September 2015: some think the plots have become too Hollywood, with denser scenes of sex and violence, making the show less deep than the previous series. Will these elements influence the next season of ‘Black Mirror’?
Furthermore, there is another interesting reflective journal about entertaining reflections on digital technology’s darker effects. I also put forward the question of what we should do once we have realized the potential threats technology has posed and will pose in the future. It is a question every one of us needs to consider carefully.
The second presentation is from Lachlan Smith. It is about the future of AI in the fashion industry. From his presentation, I learned how he used his personal Instagram to interact with his audience, doing research and putting polls on his account. The feedback he got let him narrow down to the technological and environmental factors of AI in the fashion industry. I also noticed that the first reference of ‘Challenges and Opportunities of Artificial Intelligence in the Fashion World’ is the official Alibaba web page, so I did some further research and gave him feedback.
As for suggestions, I focused on two main points: one is the environmental factor, the other is the actualization of the AI concept in the fashion industry.
First, fast fashion can bring pollution and the maltreatment of animals. Nowadays, companies like Zara and H&M have already made changes to make the industry greener and more environmentally friendly.
Second, Alibaba’s AI concept store is a useful example, because Alibaba is a Chinese company and China’s population is large enough to support ideas like online retailing, Alipay and AI concept stores.
The third presentation is from Susan Alderman. It is about the sustainability of the fashion industry, quite similar to Lachlan’s, but some points are different: she focused on elements like a circular industry, young people and the UN SDGs. I felt her presentation was ambitious enough to link those big elements together, and the workshop she ran was very interesting and impressive.
In this beta presentation, the future of translation in the short, medium and long term will be discussed. In April 2006, Google launched a translation service based on Statistical Machine Translation (SMT) and then switched to a Neural Machine Translation (NMT) system in November 2016. In the short term, this technology can become more mature, with higher accuracy.
In the medium term, NMT technology can make much more progress in dealing with translation issues; it will work more like a sophisticated human translator or interpreter. However, how can people put this technology into practice with easier access and lower cost? Can other explorations like the ‘universal translator’, the ‘Babel fish’ and iFLYTEK better address this issue?
In the long term, I am optimistic that there will be no language barrier anymore: by implanting a chip, you could talk to anyone in the world freely. Nevertheless, when it comes to literary translation, there are still efforts to be made.
For this live-tweeting experience, I would like to share three points: my thoughts about this way of teaching, my problems, and my reflections.
This tweet was almost the first tweet I sent in the first class. It was very kind of Brooke to answer my naive questions patiently. Actually, when I looked through my six weeks of tweets, there were lots of naive questions like that, but what surprised me is that the lecturer and my fellow students did not laugh at me. I was grateful for the peer assistance that appeared in the live tweeting. Everyone was busy watching, tweeting and researching, yet some were still willing to help others understand the plot and answer questions. I recall that in ‘2001: A Space Odyssey’, ‘The Blue Danube’ was played many times. The tune felt familiar but I could not remember its name; then I saw someone’s tweet with the name of the piece. I was very glad and even shocked, because this way of teaching and communicating was totally new to me. Twitter provides a platform for #BCM325 to discuss the film and other related topics. There was even a stranger curious about our course who asked me what #BCM325 was, and I was very proud to tell him it is the name of our course! Even though I have no intention of teaching in the future, this way of giving lessons and interacting made me think a lot. As students get older, they seem to become more silent and less willing to interact with teachers and fellow students in class. Live tweeting is a good way to activate them and get them sharing brilliant thoughts with each other. It makes learning more interesting and less isolated. That is why I like this course most among all my courses.
This tweet mainly shows my biggest problem with live tweeting. It did not happen in the first class, because ‘Metropolis’ is a silent film and I had time to look up background knowledge while watching. But ‘2001: A Space Odyssey’ had no subtitles, and if I checked something online, my listening would suffer. The same thing happened while live-tweeting ‘Ghost in the Shell’, though this time it felt fair, because my fellow students could not understand Japanese either! Haha!
Apart from listening problems, I found it difficult to understand some tweets from my fellow students. Josh left a great impression on me. I guess he is very good at tweeting, because he can send deep thoughts (which I do not understand) using many words (which are difficult and unfamiliar to me) in a short period of time! Every time I read his tweets, I just press ‘like’ and dare not say anything. This could be called a barrier, I guess. Even though I am a postgraduate in China majoring in English, and I have been studying the language for many years, I still have many problems understanding local people’s ideas. I grew up in a different background and social system from theirs, but that is no excuse for such poor comprehension! I reflected a lot on myself.
This tweet is actually my favorite one. I got 4 likes and 1 retweet. I felt I could enjoy the film watching and live tweeting while making some contribution to others’ understanding of the film; it felt really good! I also appreciated some good links about the film shared by my fellow students. On reflection, I think my tweets were too skin-deep compared with my peers’; I only talked about my feelings and some fairly shallow insights. I made up my mind to do some research before watching each film, so that I could hopefully keep up with others’ tweets and make some contributions too!
To conclude, I would like to extend my heartfelt gratitude to Chris for his guiding and accompanying role in this course. I would also like to thank my fellow students for their kind help, deep thoughts and useful links. I had never taken a media and communication course before; this course really broadened my horizons and gave me a different perspective for observing the world and the future!