Posted on Nov 27, 2012

In this project, data from a mold species growing in a petri dish are continuously measured through bioelectric sensors, a digital microscope, and optical recognition software. This real-time input is fed into algorithms that govern the emotional expressiveness of a virtual actor, shown as projected video, affecting both its appearance and its speech. Depending on the degree to which viewers, as read by facial recognition software, appear to “like” or “dislike” it, the virtual human’s facial features distort with increasing intensity, and glossolalia (speaking in tongues) gradually displaces comprehensible speech.
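The control logic described above could be sketched roughly as follows. This is only an illustration of the mapping, not the project's actual code: the function name, the approval score in [-1, 1], and the linear relationship between disapproval, distortion, and glossolalia are all assumptions.

```python
# Hypothetical sketch: a viewer "approval" score (assumed range -1 to 1,
# e.g. derived from facial recognition of the audience) drives both the
# facial distortion intensity and the mix of glossolalia into the speech.

def expression_params(approval: float) -> dict:
    """Map viewer approval (-1 = strong dislike, +1 = strong like)
    to rendering parameters for the virtual actor."""
    approval = max(-1.0, min(1.0, approval))      # clamp to valid range
    disapproval = (1.0 - approval) / 2.0          # 0 when liked, 1 when disliked
    return {
        "distortion": disapproval,            # features warp more as dislike grows
        "glossolalia_mix": disapproval,       # speech fades from words into tongues
        "speech_clarity": 1.0 - disapproval,  # comprehensibility of remaining speech
    }
```

A real system would presumably smooth the score over time and couple it with the live sensor data from the mold, but the basic idea is a continuous mapping from audience reaction to expressive parameters.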