Design Research Area 1

Augmented Reality Interactive Installations

Face-to-Face: Interactive Expressions between Viewer and Virtual Sculpture

This interactive virtual sculpture project explores face-to-face communication between an artist and an audience. The virtual sculpture is aware of the viewer: it captures your facial expression and tries to infer your emotional state. The sculpture then mirrors, or seeks to change, the viewer's emotional expression. The installation's facial expression recognition is supported by convolutional neural networks (Kim, Kum, et al., 2012). These networks can be challenging to train when the amount of available data is limited.

This project aimed to classify facial expressions into distinct emotions such as happy, angry, neutral, and surprised, and to have the sculpture respond appropriately or engage the viewer in facial mirroring.

Image caption: The virtual sculpture recognizes four distinct emotions; this image shows "Surprised".
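The mirror-or-change behavior described above can be sketched as a small response table. This is an invented illustration, not the installation's actual logic: the emotion labels match the four classes named in the text, but the counter-expression mappings and function names are hypothetical.

```python
# Hypothetical response logic: the sculpture either mirrors the viewer's
# detected emotion or counters it to nudge the viewer toward another state.
EMOTIONS = {"happy", "angry", "neutral", "surprised"}

# Invented counter-mappings, e.g. answer anger with happiness.
COUNTER = {
    "angry": "happy",
    "neutral": "surprised",
    "surprised": "neutral",
    "happy": "happy",
}

def sculpture_response(emotion: str, mode: str = "mirror") -> str:
    """Pick the sculpture's expression given the viewer's detected emotion."""
    if emotion not in EMOTIONS:
        return "neutral"  # fall back when the classifier is unsure
    if mode == "mirror":
        return emotion  # reflect the viewer's expression back
    return COUNTER[emotion]  # try to shift the viewer's emotional state
```

In a running installation, `sculpture_response` would be called once per classified video frame, and its output would drive the 3D facial animation.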

Preparation: The facial expression recognition dataset comes in CSV format ("fer2013.csv"), which we convert into a dataset of PNG images for training and testing the model. Dependencies: The interaction logic for the 3D facial data and animations is built with Python 3, OpenCV, and TensorFlow/Keras.
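The preparation step above can be sketched as follows. The fer2013 layout (columns `emotion`, `pixels`, `Usage`, with each image stored as 2304 space-separated grayscale values for a 48x48 frame) is the dataset's standard format, but the helper names here are assumptions; writing the actual PNG file would use OpenCV's `cv2.imwrite` or Pillow, omitted here to keep the sketch dependency-free.

```python
import csv
import io

IMG_SIZE = 48  # fer2013 images are 48x48 grayscale

def parse_fer_row(row: dict) -> tuple[int, list[list[int]]]:
    """Convert one fer2013 CSV row into (emotion label, 48x48 pixel grid)."""
    label = int(row["emotion"])
    values = [int(v) for v in row["pixels"].split()]
    if len(values) != IMG_SIZE * IMG_SIZE:
        raise ValueError("unexpected pixel count in row")
    # Reshape the flat pixel list into 48 rows of 48 values.
    grid = [values[r * IMG_SIZE:(r + 1) * IMG_SIZE] for r in range(IMG_SIZE)]
    return label, grid

# Synthetic one-row sample mimicking the fer2013.csv structure.
sample = "emotion,pixels,Usage\n3," + " ".join(["128"] * IMG_SIZE * IMG_SIZE) + ",Training\n"
rows = list(csv.DictReader(io.StringIO(sample)))
label, grid = parse_fer_row(rows[0])
```

From here, each `grid` would be written out as a PNG into a per-label folder so the training and test splits (the `Usage` column) can be read by a standard Keras image pipeline.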

Exhibited at Robot Virtual Exhibition 2021

Funded: National Science Foundation I-Corps / Littman Library, NJIT

Previous

A.I. Artificial Interruption

Next

Projection Mapped & 3D Interfaces