FOREST is the performative outcome of a National Science Foundation funded project aimed at enhancing trust between humans and robots through sound and gesture. As part of the project, Gil Weinberg and his team trained a deep learning network to generate emotion-carrying sounds to accompany robotic gestures during human-robot interaction. They also developed a rule-based AI system for creating emotional, human-inspired gestures for non-anthropomorphic robots. The performance aims to create trusting connections between human and robotic musicians and dancers, which can lead to novel creative and artistic ideas for both humans and machines. One of the main innovations in establishing human-robot trust was the use of prosody – elements of speech such as pitch, intonation, stress, and rhythm that do not carry linguistic meaning. The robots in FOREST use these prosodic elements to convey emotion. With these prosodic elements, the team studied emotional contagion, the process by which emotions spread spontaneously between humans.
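To make the idea of emotion-carrying prosody concrete, the sketch below maps emotion labels to a few prosodic parameters (pitch, pitch contour, tempo) and renders a short non-verbal tone from them. This is a minimal illustration only: the emotion names, parameter values, and synthesis method are hypothetical assumptions, not the project's actual system, which used a trained deep learning network.

```python
import math
from dataclasses import dataclass

@dataclass
class ProsodyProfile:
    base_pitch_hz: float   # center frequency of the phrase
    pitch_slope: float     # rising (>0) or falling (<0) contour
    tempo: float           # relative speed (1.0 = neutral)

# Hypothetical emotion-to-prosody mapping, for illustration only
PROFILES = {
    "joy":     ProsodyProfile(440.0,  0.4, 1.3),
    "sadness": ProsodyProfile(220.0, -0.3, 0.7),
    "neutral": ProsodyProfile(330.0,  0.0, 1.0),
}

def synthesize_phrase(emotion: str, duration_s: float = 1.0,
                      sample_rate: int = 8000) -> list:
    """Render a pitch-glided sine tone expressing the given emotion."""
    p = PROFILES[emotion]
    n = int(duration_s * sample_rate / p.tempo)  # faster tempo -> shorter phrase
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / n  # normalized time within the phrase
        freq = p.base_pitch_hz * (1.0 + p.pitch_slope * t)  # linear pitch glide
        phase += 2 * math.pi * freq / sample_rate
        samples.append(math.sin(phase))
    return samples
```

In this toy model, a "joyful" phrase is higher-pitched, rising, and quicker than a "sad" one, mirroring how prosodic cues, rather than words, carry the emotional signal.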