History of Machine Translation

Machine Translation refers to translation carried out by software, such as an online translation portal, that converts text from one language to another, even without human intervention.

1) Early Years in the History of Machine Translation
2) From the ALPAC report to the 1990s
3) The 2000s

The origins of Machine Translation can arguably be traced back as far as the 9th century AD. The Arab scientist and cryptographer Al-Kindi was the first to devote himself to developing rudimentary language-translation techniques, some of which are still partly reflected in automatic translation processes today.

However, the concept of automatic translation took more concrete shape around the 1930s, when the Franco-Armenian Georges Artsrouni and the Russian engineer Peter Troyanskij filed two innovative patent proposals for the first translating machines. These worked in three stages:

  • an editor organized the words of the source language into their logical forms
  • the machine then translated the text into the target language
  • finally, an editor normalized the output

Early Years in the History of Machine Translation

The two proposals remained virtually unknown until the 1950s, when Warren Weaver, a mathematician and researcher at the Rockefeller Foundation, wrote what is now considered the founding document of Machine Translation: the Translation memorandum. Weaver’s proposals, building on the successes of information theory during the Second World War, helped to define the role of the computer in translation and gave a strong impetus to research in the field.

A few years later, in January 1954, an event held at IBM’s New York headquarters sparked the interest of the public: the Georgetown experiment, the first public demonstration of a Machine Translation system. During the experiment, 49 sentences were translated from Russian into English using a system with a dictionary of just 250 words. The system undoubtedly had significant limitations, but it helped to stimulate public interest and research in machine translation worldwide.

In the same period the first operational systems appeared, improving the speed of translation; at the same time, however, the main limits of machine translation became apparent. The mathematician and MIT researcher Yehoshua Bar-Hillel argued that fully automatic translation could only be achieved if one was prepared to accept a low standard of quality in the final result.

He also believed that semantic ambiguity and syntactic complexity were the two main obstacles to achieving Fully Automatic High Quality Translation, and he therefore set out to develop a new, higher-quality model of automatic translator.

From the ALPAC report to the 1990s

Research continued over the following decade, focusing in particular on the Russian-English language pair and on the translation of technical and scientific documents. A turning point came in 1966, however, with the ALPAC report, commissioned by the US government and delivered by the Automatic Language Processing Advisory Committee. The report dampened enthusiasm for research in the field and highlighted the limitations of machine translation.

In particular, it pointed to the lack of progress and to the marked gap in quality compared with human translation. American research slowed down sharply for about ten years, with the exception of a few translation projects developed in North America. In 1977, for example, the METEO system, used to translate weather forecasts from English into French, was installed in Canada.

The globalization process that took hold in the 1970s contributed to a growing demand for low-cost translation systems for technical documents in Canada, Japan and Europe. Throughout the following decade, several companies took advantage of the availability of mainframe machine translation systems, which were particularly popular at the time.

In the 1980s, research focused instead on translation through intermediate linguistic representations involving morphological, syntactic and semantic analysis. New approaches to machine translation also emerged in the same period, and IBM began developing statistical translation methods.

In the 1990s, research shifted to spoken-language translation, and the use of machine translation grew thanks to the advent of powerful, low-cost personal computers.

The 2000s

In 2003, Franz-Josef Och won a machine translation speed competition and soon found himself heading Google’s machine translation group. Around nine years later, Google announced that its Translate service could translate enough text to fill a million books a day. Automatic interpreting combines three AI technologies: speech recognition software, machine translation and speech synthesis software.
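
As a purely illustrative sketch, not any vendor's actual implementation, the Python snippet below chains three hypothetical stub functions (recognize_speech, translate_text and synthesize_speech) to show how the three components of automatic interpreting fit together; in a real system each stub would wrap an actual engine.

```python
# Minimal sketch of an automatic-interpreting pipeline.
# The three functions are hypothetical placeholders, not a real API:
# in practice they would wrap a speech recognition engine, a machine
# translation engine and a speech synthesis engine respectively.

def recognize_speech(audio: bytes) -> str:
    """Speech recognition: turn spoken audio into source-language text."""
    return "Buongiorno a tutti"  # stubbed transcription

def translate_text(text: str, source: str, target: str) -> str:
    """Machine translation: convert source-language text into the target language."""
    return "Good morning, everyone"  # stubbed translation

def synthesize_speech(text: str) -> bytes:
    """Speech synthesis: render the translated text as spoken audio."""
    return text.encode("utf-8")  # stand-in for generated audio

def interpret(audio: bytes, source: str = "it", target: str = "en") -> bytes:
    """Chain the three stages: recognition -> translation -> synthesis."""
    transcript = recognize_speech(audio)
    translation = translate_text(transcript, source, target)
    return synthesize_speech(translation)

if __name__ == "__main__":
    print(interpret(b"<raw audio>"))
```

The stub output itself is meaningless; the point of the sketch is simply the fixed order of the three stages, which is what distinguishes automatic interpreting from text-only machine translation.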

Significant commercial use of machine translation, however, only began in the new millennium, and from around 2017, with the introduction of the first artificial intelligence and Deep Learning systems, it experienced a real boom. The first commercial systems were rule-based machine translation (RBMT) systems: this type of translation relies on a large number of built-in linguistic rules and on just as many bilingual dictionaries, one set for each language combination.
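
As a deliberately toy illustration of this idea, the sketch below combines a tiny invented bilingual dictionary with a single hand-written reordering rule; the vocabulary, the rule and the example sentence are assumptions made up for this example and are not drawn from any real RBMT product.

```python
# Toy rule-based translation: a bilingual dictionary per language pair
# plus hand-written linguistic rules. Everything here is invented for
# illustration only.

BILINGUAL_DICTIONARY = {  # English -> Italian, one of many per language pair
    "the": "la",
    "red": "rossa",
    "car": "macchina",
    "is": "è",
    "fast": "veloce",
}

def apply_reordering_rule(words: list[str]) -> list[str]:
    """Linguistic rule: Italian attributive adjectives usually follow the noun,
    so swap the English adjective-noun order ('red car' -> 'car red')."""
    adjectives = {"red", "fast"}  # toy part-of-speech information
    reordered = []
    i = 0
    while i < len(words):
        if i + 1 < len(words) and words[i] in adjectives and words[i + 1] not in adjectives:
            reordered.extend([words[i + 1], words[i]])  # noun first, then adjective
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    return reordered

def rule_based_translate(sentence: str) -> str:
    """Apply the rule to the source sentence, then look each word up in the dictionary."""
    words = apply_reordering_rule(sentence.lower().split())
    return " ".join(BILINGUAL_DICTIONARY.get(word, word) for word in words)

print(rule_based_translate("The red car is fast"))  # -> "la macchina rossa è veloce"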

While the quality achieved by Neural Machine Translation systems today is remarkably high, human intervention is still always needed. Machine translation workflows in fact end with the Machine Translation Post-Editing (MTPE) stage, in which a professional linguist reviews the translations produced by the Machine Translation engine.


Creative Words