Droplets of Sati
Droplets of Sati is an interactive two-person meditation system that investigates liquid as a shared interface for nonverbal communication. By converting each participant’s breathing rhythm into dripping liquid, the system lets each person first attune to their own physical tempo and then gradually come into contact with the other’s rhythm.
Research Team
Lilith Ren
Melo Chen
Raphael Li
Role
Designer
Location
MIT Media Lab
Dates
Fall 2025
Installation
AI System

Problem
Shared memories are rarely symmetrical. Even when two people agree on facts, they often carry divergent emotional tempos. Most memory-sharing technologies assume negotiation must occur through narrative, where experiences are explained and aligned through words.
Solution
Participants sit facing each other across the liquid interface, wearing respiratory sensors that translate breathing into discrete droplet events. In the initial self-attunement phase, each participant hears only their own droplet rhythm, allowing them to ground attention in personal respiratory tempo through synchronized sound and liquid motion.
Interaction
Participants enter the experience with a shared memory and take part in a guided meditation while their narrated memories create a soundscape. Synchronization emerges as a form of nonverbal negotiation between emotional states, conducted by listening to liquid rather than words.



Liquid as Shared Interface for Memory and Communication
We explore liquid as a shared interface for memory by manipulating the water surface through digitized bio-signals. This transforms the water surface into a responsive canvas and explores the technical and experiential potential of water as a visual medium.





Material Behaviour
The interaction concludes in shared silence as participants observe the final state of the liquid surface. Diffused ink and oil form an irreversible material trace shaped by the timing, duration, and convergence of their breathing rhythms. This material residue functions as a shared archive of the interaction, enabling reflection on the negotiated experience through synchronized sensation rather than verbal recollection.

Dynamic physical surface created by overlapping density variations across water, oil, alcohol, and ink.
The patterns contrast two individual participants’ emotional states. The left pattern shows an irregular breathing rate, with varied droplet sizes and tightly grouped scatter spacing across the liquid surface, reflecting a negative emotional state. The right pattern has a more uniform spatial distribution and consistent droplet size, reflecting a calmer, more positive emotional state.


Patterns of two participants’ individual emotional states.

Participants A and B’s dynamic pattern snapshots in the dish.



Pattern transformation process.
System Workflow
The hardware system comprises three fluidic actuation channels orchestrated by an Arduino Nano R4, selected for its compact form factor and high-resolution analog input support. Physiological input from two participants is captured via custom wearable respiratory belts incorporating stretchable conductive rubber elements, whose resistance varies with chest expansion. Signals are sampled at 100 Hz using the microcontroller’s 14-bit ADC, smoothed with an exponential moving average (α = 0.15), and briefly calibrated at startup to establish adaptive baselines and detection thresholds.
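The smoothing and calibration stage described above can be sketched in plain C++. The sampling rate (100 Hz) and EMA coefficient (α = 0.15) come from the text; the droplet-trigger rule (an event fires on each upward crossing of the calibrated baseline plus a margin) is an illustrative assumption, not the project's documented detection logic.

```cpp
#include <cassert>
#include <cmath>

// Exponential moving average smoother, matching the alpha = 0.15
// described for the respiration signal (sampled at 100 Hz).
struct EmaFilter {
    double alpha;
    double value = 0.0;
    bool primed = false;
    double step(double sample) {
        if (!primed) { value = sample; primed = true; }
        else { value = alpha * sample + (1.0 - alpha) * value; }
        return value;
    }
};

// Adaptive-baseline threshold detector (assumed trigger rule:
// a droplet event fires on each upward crossing of baseline + margin).
struct BreathDetector {
    EmaFilter smooth{0.15};
    double baseline;   // established during startup calibration
    double margin;     // detection threshold above baseline
    bool above = false;
    BreathDetector(double calibBaseline, double m)
        : baseline(calibBaseline), margin(m) {}
    // Returns true when an inhalation onset (droplet trigger) is detected.
    bool step(double rawSample) {
        double s = smooth.step(rawSample);
        bool nowAbove = s > baseline + margin;
        bool event = nowAbove && !above;  // edge-triggered, not level
        above = nowAbove;
        return event;
    }
};
```

On the Arduino Nano R4 itself, `rawSample` would come from a 14-bit `analogRead()` of the stretch sensor's voltage divider, with the baseline averaged over the startup calibration window.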





Software Design
The software architecture is split into two coordinated subsystems: a high-level Narrative Engine that handles semantic analysis and audiovisual generation, and a low-level Biofeedback Controller that manages real-time respiration sensing and hydraulic actuation. This division prioritizes deterministic timing for physiological feedback while reserving large-model inference for session initialization.
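The timing rationale behind this split can be sketched as follows. All names here are illustrative stand-ins, not the project's actual classes: the point is only that the expensive inference step is confined to session start, so the 100 Hz control loop does fixed, bounded work per tick.

```cpp
#include <vector>

// Illustrative sketch (names assumed, not from the project):
// heavy model inference happens once, at session initialization.
struct NarrativeEngine {
    std::vector<double> cueTimes;  // precomputed audiovisual cue schedule
    void initSession() {
        // Stand-in for large-model inference producing a cue schedule.
        for (int i = 0; i < 4; ++i) cueTimes.push_back(i * 30.0);  // seconds
    }
};

// The real-time side does only deterministic per-sample work.
struct BiofeedbackController {
    int ticks = 0;
    void tick(double /*respirationSample*/) {
        ++ticks;  // sensing + actuation logic would run here, every 10 ms
    }
};

// Session driver: initialize once, then run the fixed-cadence loop.
inline int runSession(int loopIterations) {
    NarrativeEngine engine;
    engine.initSession();            // inference confined to startup
    BiofeedbackController ctrl;
    for (int i = 0; i < loopIterations; ++i)
        ctrl.tick(0.0);              // never blocks on the engine
    return ctrl.ticks;
}
```

Keeping inference out of the loop means droplet timing depends only on the microcontroller's sampling cadence, not on variable model latency.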

