Interactive Installation

Two tables stand in front of a wall-mounted video projection. The first presents a physical model that users can manipulate directly: its movable, articulated components can be freely rearranged. The object is mounted on a rotating base, and a camera captures its “internal” views; this video feed is transmitted in real time to an AI-based image-generation system.
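
The source does not specify the generation backend. As a minimal sketch of the capture-to-generation loop, assuming a standard webcam and a Stable Diffusion img2img pipeline (via the diffusers library) purely as stand-ins:

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumption: the installation's actual model is unnamed; Stable Diffusion
# img2img serves here only as an illustrative substitute.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

cap = cv2.VideoCapture(0)  # camera aimed at the object's interior
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR arrays; the pipeline expects an RGB PIL image.
    source = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    source = source.resize((512, 512))
    # strength < 1 preserves the object's geometry while restyling it.
    out = pipe(prompt="shifting light, raw material textures",
               image=source, strength=0.55).images[0]
    cv2.imshow("Deep Mirror", cv2.cvtColor(np.array(out), cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

A production build would need a distilled or otherwise latency-optimized model to sustain truly interactive frame rates; the loop above only illustrates the data path from camera to generator to display.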

The adjacent table holds a 32-inch display hosting the system’s control interface, the Deep Mirror app. By reorganizing the object’s components and rotating it in front of the camera, users can watch the system reconstruct and transform the images in real time.

Approaching the screen, users can interact with the system’s primary interface, which exposes a set of adjustable parameters. Its buttons and sliders regulate the application of Operative Atmospheres to the images of the interactive object, transforming them into visualizations characterized by dynamically shifting lighting conditions, textures, materials, and formal configurations.
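
The parameter schema behind these controls is not documented. As a sketch, an Operative Atmosphere could be modeled as a named preset whose slider values fold into the generator’s settings (all field names below are hypothetical):

from dataclasses import dataclass

# Hypothetical schema: the source does not document the fields an
# Operative Atmosphere actually exposes; these names are illustrative.
@dataclass
class OperativeAtmosphere:
    name: str
    lighting: float = 0.5    # 0 = dim and diffuse, 1 = harsh and directional
    texture: float = 0.5     # emphasis on surface detail
    material: str = "stone"  # dominant material rendered onto the object
    strength: float = 0.55   # how far generation departs from the camera feed

    def to_prompt(self) -> str:
        # Fold the slider values into a text prompt for the generator.
        light = ("harsh directional light" if self.lighting > 0.5
                 else "soft diffuse light")
        return f"{light}, {self.material} surfaces, texture weight {self.texture:.2f}"

dusk = OperativeAtmosphere("dusk", lighting=0.2, material="corroded metal")
print(dusk.to_prompt())  # -> soft diffuse light, corroded metal surfaces, ...

Pressing a preset button would then amount to loading one such object and regenerating the frame with its values.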

Users may engage with these transformations in two distinct modes. In the more accessible, surface-level layer (Basic Mode), interaction is streamlined and intuitive. In the deeper layer (Advanced Mode), users gain more granular control over the image-generation process, allowing for a more refined and technically precise modulation of the system’s outputs.

Although Advanced Mode exposes a wide range of parameters (potentially daunting at first encounter), it proves remarkably approachable and open to exploration, even for novice users. Since every element of the interface produces a discernible effect on the generated image, users are naturally prompted to test and recalibrate these variables, fostering a sustained, dynamic exchange with the machine.
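
How the two layers divide the controls is not specified. A minimal sketch, assuming Basic Mode collapses several generator settings into a single macro control while Advanced Mode exposes each one individually (all parameter names hypothetical):

# Hypothetical sketch of the two interaction layers; the actual Deep Mirror
# parameter set is not documented in the source.
ADVANCED_DEFAULTS = {
    "guidance_scale": 7.5,  # how strictly generation follows the prompt
    "strength": 0.55,       # departure from the live camera frame
    "steps": 30,            # denoising iterations per frame
    "texture_weight": 0.5,
}

def basic_mode(intensity: float) -> dict:
    # One slider (0..1) drives every underlying setting at once.
    return {
        "guidance_scale": 4.0 + 8.0 * intensity,
        "strength": 0.3 + 0.5 * intensity,
        "steps": 30,
        "texture_weight": intensity,
    }

def advanced_mode(**overrides) -> dict:
    # Every setting is exposed individually; unset ones keep their defaults.
    return {**ADVANCED_DEFAULTS, **overrides}

print(basic_mode(0.8))
print(advanced_mode(strength=0.7, steps=40))

Because every setting feeds directly into the next generated frame, nudging any control produces a visible change, which is what sustains the test-and-recalibrate loop described above.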