Found in Translation

Agency
  • TheGreenEyl

Client
  • Google

Visualizations show how the machine learning model clusters words from different languages by semantic similarity, and translations are presented typographically and aurally across 24 languages.

The Challenge

On the one hand, the installation should show the magic of Google Translate and how it works today.

On the other hand, it should present recent advances in machine learning research on machine translation: training on data sets from many languages instead of just two actually improves translation quality across all of them.

Found in Translation Video

Photo Credit: Taiyo Watanabe

Design & Execution

The solution was to distribute information across interactivity, time, and spatial composition. TheGreenEyl created a room with 24 screen panels, 24 speakers, and a microphone at its center.

When a visitor enters the room, they are asked a question, which they can answer aloud. As their sentence is translated, a central screen visualizes the entire data set, while each individual screen shows a visualization specific to one language pair. These visualizations then resolve into the translations, set in typography across the different languages and writing systems, as each speaker plays back a voice reading its sentence.

Because viewers watch the underlying data model at work, they get a sense of which words, and which languages, sit closer together than others. They can try out different sentences for further comparison, and a text panel at the back of the gallery explains some of the underlying concepts.
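The clustering shown on the central screen rests on an idea from multilingual machine translation research: words from all languages are mapped into one shared embedding space, where semantic neighbors land close together regardless of language. Below is a minimal sketch of that idea in Python, using invented toy vectors rather than the actual embeddings from Google's model:

    import numpy as np

    # Hypothetical 3-dimensional embeddings, for illustration only.
    # A real multilingual model would use high-dimensional vectors.
    embeddings = {
        ("dog", "en"):   np.array([0.90, 0.10, 0.00]),
        ("Hund", "de"):  np.array([0.88, 0.12, 0.02]),
        ("chien", "fr"): np.array([0.91, 0.09, 0.01]),
        ("tree", "en"):  np.array([0.10, 0.90, 0.10]),
        ("Baum", "de"):  np.array([0.12, 0.88, 0.08]),
    }

    def cosine(a, b):
        # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Compare every pair of words once.
    for (w1, l1), v1 in embeddings.items():
        for (w2, l2), v2 in embeddings.items():
            if (w1, l1) < (w2, l2):
                print(f"{w1} ({l1}) ~ {w2} ({l2}): {cosine(v1, v2):.2f}")

Running it prints near-1.0 scores for "dog"/"Hund"/"chien" and much lower scores against "tree"/"Baum": words cluster by meaning rather than by language, which is the pattern the installation renders at room scale.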

Photo Credit: Taiyo Watanabe
