Mimeisthai – A Spoken-Word Generative Trending Installation [TEDx]

Premiered at TEDx Sydney 2012

“Mimeisthai is a wonderful hybrid of technology, social media and human curiosity.”
– Neville Brody, D&AD President. 2013 D&AD Awards Top 5 President Picks.


Adfest Grand Prix – Interactive
Adfest Grand Prix – Innovation
Adfest Bronze – Best Use of Social Media
One Show Design Bronze – Data Visualisation
One Show Interactive Finalist – Interaction Design
One Show Entertainment Finalist – Experiential
One Show Entertainment Finalist – Branded Apps
New York Festival Silver – Avant Garde
New York Festival Silver – Event Promo
New York Festival Bronze – Ambient
New York Festival Bronze – Digital Design
New York Festival Finalist – Environmental Design
AWARD Silver – Best Use of Digital in a Promo
AWARD Bronze – Emerging Digital
D&AD In book – Spatial Design – Installations
Cannes Lions Finalist – Cyber
London International Awards Silver – Weird Wonderful Work
London International Awards Silver – Innovative Use of Digital
London International Awards Bronze – Environmental Installations/Displays
Digital Asia Gold – Media Innovation
Digital Asia Silver – Best Use of Social Media
Digital Asia Bronze – Online PR
Spikes Silver – Use of Media
Spikes Finalist – Environmental Design
Spikes Finalist – Other Digital Channels
The FWA Website of the Day

“What Twitter Would Look Like, Without A Laptop Or Smartphone” – Fast Company

Created in the spirit of TED’s mission, “ideas worth spreading,” TEDx Sydney is an annual forum designed to give Australian communities, organisations and individuals the opportunity to stimulate dialogue through TED talks.

TEDx wanted their audience to connect with each other and build upon the ideas they had heard in the room. Increasingly, though, they found that come intermission the audience buried themselves in their smartphones and tablets rather than connecting with their peers in the room.

Good ideas don’t come from a lone genius glued to an iPhone, tweeting, nearly as often as they come from interactions between geniuses. So we created something that took away the encumbrance of the hardware and liberated the fluidity of conversation.

“The World’s First Spoken-Word Trending Engine” – SohoHouse

We turned their 1,072-square-metre Carriage Works Sydney theatre into a giant, real-time emerging-topics trending engine. We installed an array of directional and parabolic microphones strategically throughout the theatre. Each line-out connected directly to a dedicated speech-to-text engine, which took snippets of conversation spoken live in the theatre and generated a live visualisation portraying ideas as they spread through the audience.

Think Twitter, except without the need for a smartphone, laptop or tablet. To trend a topic, all you need to do…is speak.

The audience was freed from their devices to converse and build upon the ideas in the room. The visualisation generated a live topic feedback loop on the big screen, and an online hub captured topics for later.

“The Future of Social Media” – HUHMagazine

The taking away of the tangible is happening all around us. In the gaming industry flapping arms and wiggling bums have fast replaced controllers and remotes in home entertainment. This has been described as the era of invisible technology. Whilst gaming is leading the way, Mimeisthai has ensured social is not too far behind. Fast Company recently wrote:

“From mouth to screen, in an instant; no need for a computer or smartphone, the technology is invisible. That might sound a little terrifying to those of us who value that quaint relic of the 20th century called privacy, but…Mimeisthai [is] less of an Orwellian surveillance system than as a way to wed the cold data of social networks to the quick, easy intimacy of face-to-face conversations.

Mimeisthai’s potential lies with social interactions relative to the environment. Whereas the encumbrance of hardware can skew the true nature of free flowing thoughts and ideas, Mimeisthai can be used to disseminate topics in an unbiased format, from visualising the flow of ideas at public forums like TEDx, or reporting on parliamentary legislative discussions, to polling live TV audiences ala [ask the audience on] Who Wants To Be A Millionaire?”

“Prepare to be stopped in your tracks because this is off the hook” – It’s Nice That

“A New Social Era?” – Zé Studio

“Leave it to the Aussies to bring the art of conversation back in to the real world” – Really New Media



About The Technology We Used
Justin James Clayden

The brief was to develop a system that would take sampled conversations from the TEDx Sydney 2012 audience and run them through speech-to-text algorithms, yielding coherent textual statements. These statements were entered into a system that would eventually transfer them to a visualisation, which would display them within a particle swarm.

The back-end for this system was developed in JavaScript on node.js, a highly performant server platform. Server control pages (for example, the pages used to enter the text) were served up using the Express web framework. Higher-bandwidth streams relied on socket.io and used the xhr-polling transport. (The server was hosted on heroku.com, which supports this transport.)

The visualisation was initially written in processing.js,* but it quickly became clear that the performance this afforded was not adequate, so we switched to using ‘native’ Processing.

The visualisation was a particle swarm, using flocking code sourced from openprocessing.org. We implemented a flocking parameter keyframe system that allowed us to control the look of the swarm at any time in the animation.
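A keyframe system of this kind can be sketched as follows. This is a JavaScript sketch (the production code was Processing), and the parameter names `cohesion`, `separation` and `alignment` are the standard flocking weights, used here illustratively rather than taken from the original code:

```javascript
// Sketch of a flocking-parameter keyframe system: each keyframe pins
// the flocking weights at a time (seconds into the animation); between
// keyframes the weights are linearly interpolated, letting an operator
// choreograph how tight or loose the swarm looks over time.
const keyframes = [
  { t: 0,  cohesion: 1.0, separation: 1.5, alignment: 1.0 },
  { t: 10, cohesion: 0.2, separation: 2.5, alignment: 0.1 }, // swarm disperses
  { t: 20, cohesion: 1.2, separation: 1.0, alignment: 1.5 }, // swarm regroups
];

function paramsAt(t) {
  // Clamp to the first/last keyframe outside the keyframed range.
  if (t <= keyframes[0].t) return { ...keyframes[0] };
  const last = keyframes[keyframes.length - 1];
  if (t >= last.t) return { ...last };
  // Find the pair of keyframes bracketing time t.
  let i = 0;
  while (keyframes[i + 1].t < t) i++;
  const a = keyframes[i];
  const b = keyframes[i + 1];
  const u = (t - a.t) / (b.t - a.t); // progress within the segment, 0..1
  const lerp = (x, y) => x + (y - x) * u;
  return {
    t,
    cohesion: lerp(a.cohesion, b.cohesion),
    separation: lerp(a.separation, b.separation),
    alignment: lerp(a.alignment, b.alignment),
  };
}
```

Each frame, the flocking update would call `paramsAt(elapsedSeconds)` and feed the resulting weights into the steering rules.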

As the textual utterances were released, the particles required to display them were ‘borrowed’ from the swarm, cosine-interpolated into their positions, held there for a moment, and then re-interpolated back into the swarm. We offset the delay of this interpolation so that the letters formed and dissolved one after another.
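The cosine ease and per-letter stagger might look like the following JavaScript sketch; the function names, stagger and duration values are illustrative assumptions, not the production values:

```javascript
// Cosine ease: maps linear progress u in [0,1] to an eased value in
// [0,1] that starts slow, speeds up in the middle, and slows to a stop.
function cosineEase(u) {
  const clamped = Math.min(Math.max(u, 0), 1);
  return (1 - Math.cos(Math.PI * clamped)) / 2;
}

// Position of a borrowed particle travelling from its swarm position to
// its slot in a letter. Each letter starts `stagger` seconds after the
// previous one, so letters form (and later dissolve) one after another.
function particlePos(swarmPos, letterPos, t, letterIndex, stagger = 0.1, duration = 1.0) {
  const u = (t - letterIndex * stagger) / duration; // staggered progress
  const e = cosineEase(u);
  return {
    x: swarmPos.x + (letterPos.x - swarmPos.x) * e,
    y: swarmPos.y + (letterPos.y - swarmPos.y) * e,
  };
}
```

The return trip works the same way with the endpoints swapped, which is why letters appear to melt back into the swarm in the same rippling order.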

The visualisation would poll the server at intervals that could be adjusted on the fly (allowing us to bring more or fewer utterances in at certain times) and pull down the next utterance to display.
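That adjustable polling loop can be sketched like so; `fetchNextUtterance` stands in for the real server call and is a hypothetical name, as is the default interval:

```javascript
// Sketch of the client-side polling loop: the visualisation asks the
// server for the next utterance at an interval that can be changed on
// the fly during the show.
function makePoller(fetchNextUtterance, onUtterance) {
  let intervalMs = 5000; // default pace: one poll every 5 s (illustrative)
  let timer = null;

  function tick() {
    const utterance = fetchNextUtterance();
    if (utterance) onUtterance(utterance);
    // Re-arm with the *current* interval, so rate changes take effect
    // on the next tick rather than requiring a restart.
    timer = setTimeout(tick, intervalMs);
  }

  return {
    start() { if (!timer) tick(); },
    stop() { clearTimeout(timer); timer = null; },
    setInterval(ms) { intervalMs = ms; },
  };
}
```

Using `setTimeout` rechained on each tick, rather than a fixed `setInterval`, is what makes the rate adjustable mid-show without tearing the loop down.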

We used 25,000 particles, which was about the limit at which we could still achieve an acceptable frame rate of 30fps.

Agency: Clemenger BBDO

Executive Creative Director: Paul Nagy
Installation & Creative Director: James Théophane
Exec. Producer: Denise Mckeon
Producer: Jonathan Gerard
Copywriters: Rees Steel & Joel Hauer
Programming: Justin James Clayden
Data Visualisation: Small Multiples
Sound Design: Anthony Tiernan
User Experience: Claire Alexander
Online & Animation: Toby Royce
Editor: Lucas Vazquez
Web Development: John Knutsson & Joshua Brown
Infrastructure: Viocorp
Capture: Crystal Rata

Special thanks to all at TEDxSydney and Kyle McDonald for use of “Clouds Are Looming”, an Open Processing engine.

Kyle McDonald works with sounds and codes, exploring translation, contextualization, and similarity. With a background in philosophy and computer science, he strives to integrate intricate processes and structures with accessible, playful realizations that often have a do-it-yourself, open-source aesthetic.

*”Clouds Are Looming” by Kyle McDonald, code used with the permission of the author.
