
AquaBlocks
AquaBlocks is an interactive toy that lets children build marine creatures using blocks and explore immersive ocean environments. With a wind-up mechanism to switch scenes, it encourages creative storytelling and learning about ocean myths.
The project aims to showcase the beauty of the ocean, sparking curiosity and a desire to protect it.
Team Member
Lilith Ren
AI Interactive Toy
LLO
Immersive Environment
Location
MIT
Dates
Spring 2025

Keywords
Interactive children's toys, marine creatures, immersive environments, creative storytelling, ocean myths, education, environmental awareness, modular design.
Awards
Muse Design Silver
UX Design Award
IF Product Design Finalist
Context
Most existing tools remain abstract and distant, leaving children disconnected from the cause. Our project creates a journey where a child begins by building marine creatures with blocks, then sees them animated in AI-generated ocean scenes.
Problem
Our users are children aged 5–10. We found that most current educational stories are top-down and predetermined, offering little space for imagination, while many AI story machines deliver finished narratives that replace children's voices. This limits memory, empathy, and lasting engagement.
Target User
By letting children build their own creatures and stories, our project makes learning playful, creates emotional bonds with marine life, and helps them truly care about ocean protection.
Innovation
AI should not be a replacement for human creativity. Instead, it should serve as a powerful tool that helps human imagination bloom. Here, AI becomes a co-creator that supports children's narratives and creativity rather than automating them, offering an inclusive model for educational tools.

Block Design and Gesture Recognition
The core component of AquaBlocks consists of 10 ocean myth creatures, each represented by modular blocks. Each creature is divided into two to three parts connected by magnets, allowing free assembly. This modular design enables children to create varied marine creatures by mixing and matching parts, encouraging creativity and inviting them to invent entirely new creatures.
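The mix-and-match space this opens up can be sketched with a quick back-of-the-envelope count. This is purely illustrative: it assumes every creature splits into three parts (head, body, tail), while the real set mixes two- and three-part creatures.

```python
# Hypothetical count of assemblies the magnetic blocks allow,
# assuming each of the 10 creatures splits into head/body/tail.
from itertools import product

CREATURES = [f"creature_{i}" for i in range(10)]  # 10 myth creatures

heads = bodies = tails = CREATURES
hybrids = list(product(heads, bodies, tails))
# 10 x 10 x 10 = 1000 distinct assemblies from just 10 base creatures.
```

Even under this simplified assumption, 10 base creatures yield 1,000 possible hybrids, which is why free assembly matters more than the size of the creature library.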


Creature Recognition and Scene Mapping
After capturing an image of the assembled blocks, the system uses a machine learning model to identify the shape of the newly created creature. This recognition step lets the system map the identified creature to a corresponding habitat environment prompt pre-configured in the system's database.
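The recognition-to-prompt mapping can be sketched as follows. The creature labels and prompt strings are hypothetical placeholders, and a stub stands in for the trained vision model so the mapping logic is runnable on its own.

```python
# Pre-configured database: creature label -> environment prompt.
# Labels and prompts are illustrative, not the project's actual data.
ENVIRONMENT_PROMPTS = {
    "dragon_turtle": "a sunlit coral shelf wrapped in golden kelp",
    "moon_jelly": "a moonlit abyss dotted with bioluminescent drifts",
}

def classify_creature(image) -> str:
    """Stand-in for the trained vision model: returns a creature label."""
    # The real pipeline would run the captured photo through the model.
    return "moon_jelly"

def map_to_environment(image) -> str:
    """Map a captured block assembly to its habitat prompt."""
    label = classify_creature(image)
    # Fall back to a generic scene for unrecognized assemblies.
    return ENVIRONMENT_PROMPTS.get(label, "an open ocean scene")
```

The fallback prompt matters in practice: freely assembled hybrids will often fall outside the 10 trained classes, and the system still needs a scene to show.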



Opening Animation
System Design
Technical Drawings





LoRA Model Training, Prompt Generation, and State Machine





Environment Generation
After recognizing the creature, the system retrieves the appropriate environment prompt and utilizes the GPT-4 API to generate a detailed description of the creature’s habitat. Several key factors influence the generated environment, including the creature’s mythical origins, the visual attributes of its habitat, and the season in which the scene will be set. The system applies a seasonal tone to the environment, with four predefined seasonal styles: Spring, Summer, Autumn, and Winter. Each seasonal tone affects the colors, lighting, and ambiance of the generated environment, allowing for the creation of varied and dynamic settings for the creatures.
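A minimal sketch of this step, assuming the standard OpenAI Python client. The seasonal palette wording and prompt template below are illustrative, not the project's actual configuration.

```python
# Hypothetical seasonal tones; the real palettes are project-defined.
SEASON_TONES = {
    "spring": "soft pastel light, drifting petals, gentle currents",
    "summer": "bright turquoise water, strong sunbeams, lively schools of fish",
    "autumn": "amber light, warm muted colors, slowly falling sediment",
    "winter": "pale blue light, drifting ice crystals, still water",
}

def build_habitat_prompt(creature: str, origin: str, season: str) -> str:
    """Combine creature, mythical origin, and seasonal tone into one prompt."""
    tone = SEASON_TONES[season]
    return (
        f"Describe the underwater habitat of {creature}, a creature from "
        f"{origin} myth. Render the scene with a {season} mood: {tone}."
    )

def generate_habitat_description(creature: str, origin: str, season: str) -> str:
    # Import kept local so the prompt helper has no hard dependency.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": build_habitat_prompt(creature, origin, season)}],
    )
    return resp.choices[0].message.content
```

Keeping the season tones in a lookup table means the four seasonal variations differ only in the injected palette text, so the rest of the prompt stays stable across scenes.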
Stable Diffusion Image Generation




Following the creation of the environment prompts, the system uses a local Stable Diffusion model to generate images that reflect the four distinct seasonal environments. Stable Diffusion was selected due to its ability to produce high-quality images that align with the mythical and dreamlike aesthetic required for this project. To ensure visual consistency across all generated images, a custom-trained LoRA model was used to fine-tune the Stable Diffusion model. This ensures that the generated environments retain the same artistic style, maintaining a cohesive visual identity throughout the four seasonal scenes.
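The four-season image pass can be sketched with the Hugging Face diffusers API. The base model ID, LoRA weight path, and style trigger word below are placeholders, not the project's actual assets.

```python
SEASONS = ["spring", "summer", "autumn", "winter"]

def seasonal_prompts(base_prompt: str) -> list[str]:
    # A shared trigger word ("aquablocks-style", hypothetical here) keeps
    # all four scenes in the style the custom LoRA was trained on.
    return [f"{base_prompt}, {season} tone, aquablocks-style"
            for season in SEASONS]

def render_seasonal_scenes(base_prompt: str):
    # Heavy imports kept local so the prompt helper runs anywhere.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder base model
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_lora_weights("path/to/aquablocks_lora")  # custom style LoRA
    return [pipe(prompt).images[0]
            for prompt in seasonal_prompts(base_prompt)]
```

Loading the LoRA on top of the base checkpoint, rather than fine-tuning the whole model, is what keeps the style consistent across seasons while leaving the base model's general image quality intact.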
