Experiential Design - Final Project: Completed Experience
Lew Guo Ying / 0365721 / Bachelor of Design in Creative Media
Experiential Design
Final Project: Completed Experience
Index
Instructions
For our final task, we are required to complete both a Final Project and an E-Portfolio, which together account for 40% of our individual grade. The Final Project involves integrating the knowledge from previous tasks to develop a fully functional product prototype. We need to refine the design, enhance the user experience, and deliver a working version of the application. Additionally, we must submit an E-Portfolio reflecting on our teamwork, social competencies, and learning journey throughout the design studio. This portfolio should include online posts, evidence of our reflections, and a video walkthrough to present the final product.
Together with my teammate Winnie Ho, we successfully built the MystiAR MVP prototype in Unity. Below is a breakdown of the features we completed:
Final Enhancements Completed in This Task
Several fine-tuned enhancements were added to improve usability and visual quality:
- Arrow Indicator: A dynamic arrow is now fixed to the screen edge, constantly pointing toward the pet's location. Since the AR pet moves freely, this feature ensures users can always find it quickly.
- Voice Feedback with Text Display: To enhance voice command usability, a text status indicator appears on the left side of the screen when the voice button is activated. It informs users whether the app is listening or if the command failed.
- Lighting Optimization on Startup: The opening scene lighting has been adjusted to better highlight the 3D pet. The new directional white light enhances contrast and makes the pet's colors appear more realistic.
- Resettable Features: Previously, potion crafting was a one-time interaction requiring an app restart to repeat. Now, all features reset smoothly, allowing users to reuse them multiple times without interruption.
Progression of Final Project
Fig 1.1 Wit.ai
For the voice control feature in MystiAR, I integrated Wit.ai, a natural language processing platform developed by Meta. Wit.ai allows developers to build apps that understand voice or text commands and convert them into actions. In our project, Wit.ai serves as the bridge between user speech and AR pet behavior. By processing audio input, Wit.ai identifies user intent and sends structured data back to Unity, enabling our AR pet to respond accordingly.
Fig 1.2 Intent and execution
The workflow of the voice command system is as follows:

1. Intent Setup in Wit.ai
   - In the Wit.ai dashboard, I created an intent called move_pet.
   - Various utterances (e.g., "kiki come here", "come here", and the Chinese "kiki 来这边", meaning "kiki, come over here") were added and trained under this intent to teach the model how to recognize commands.
   - Using the Understanding page, I entered example phrases and used Train and Validate to improve recognition accuracy.
2. Integration with Unity
   - The Wit.ai Server Access Token is required to authenticate API calls.
   - In Unity, I wrote a script named WitTest to handle voice input.
   - The script records the user's speech, converts it to audio data, and sends it to Wit.ai's speech API (https://api.wit.ai/speech).
   - Wit.ai processes the audio, identifies the intent (e.g., move_pet), and returns the result to Unity.
3. Execution of Commands
   - Unity parses the response and executes the corresponding action, such as making the AR pet walk into the scene.
   - This process enables a hands-free and natural way to interact with the AR environment.
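The round trip described above can be sketched in a short Unity script. This is a minimal illustration only, assuming a raw WAV payload and keyword-matching on the JSON response; the token value and the MoveCatToPlayer() handler are placeholders, not the project's actual WitTest code.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: POST recorded WAV bytes to Wit.ai's speech endpoint and react
// to the detected intent. Token and handler names are hypothetical.
public class WitSpeechClient : MonoBehaviour
{
    [SerializeField] private string serverAccessToken = "YOUR_WIT_SERVER_TOKEN";

    public IEnumerator SendToWit(byte[] wavData)
    {
        var request = new UnityWebRequest("https://api.wit.ai/speech", "POST");
        request.uploadHandler = new UploadHandlerRaw(wavData);
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Authorization", "Bearer " + serverAccessToken);
        request.SetRequestHeader("Content-Type", "audio/wav");

        yield return request.SendWebRequest();

        if (request.result == UnityWebRequest.Result.Success)
        {
            // The JSON response names the detected intent, e.g. "move_pet".
            string json = request.downloadHandler.text;
            if (json.Contains("move_pet"))
                MoveCatToPlayer(); // hypothetical handler on the pet controller
        }
        else
        {
            Debug.LogWarning("Wit.ai request failed: " + request.error);
        }
    }

    private void MoveCatToPlayer() { /* walk the AR pet toward the camera */ }
}
```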
Fig 1.3 Voice feedback UI
The next step in implementing the voice command feature was to connect Wit.ai with the in-game Voice Control Button in Unity. This button acts as the user’s gateway to activating voice recognition. When the button is tapped, it immediately turns red, visually signaling to the user that recording has started.
To make the feedback even clearer, I added text indicators that dynamically update during the entire process. Users can now see messages like "Recording…", "Processing voice input…", and the final transcribed command or error feedback directly on the screen. This ensures users always know whether the system is listening, processing, or if an error occurs.
Fig 1.4 Script
The functionality is powered by two main scripts:
- WavUtility: Handles audio data conversion, transforming Unity's recorded AudioClip into the PCM WAV format required by Wit.ai's API.
- VoiceCommandManager: Manages the button's behavior, handles microphone recording, sends audio to Wit.ai, interprets the response, and executes the appropriate in-game action.
Once a voice command is successfully recognized, it not only displays the recognized text back to the user but also triggers the AR pet to return to its spawn point. This feature elegantly addresses the common issue where the pet may wander too far or become hard to locate. The interaction simulates calling your cat in real life, creating a deeper sense of immersion and emotional connection in the AR world.
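The kind of conversion WavUtility performs can be sketched as follows. This is a simplified illustration (mono/stereo float samples to a 16-bit PCM WAV byte array with a standard 44-byte header), not the project's exact implementation.

```csharp
using System.IO;
using System.Text;
using UnityEngine;

// Sketch: convert a recorded AudioClip (float samples in -1..1)
// into 16-bit PCM WAV bytes accepted by Wit.ai's speech endpoint.
public static class WavUtilitySketch
{
    public static byte[] FromAudioClip(AudioClip clip)
    {
        float[] samples = new float[clip.samples * clip.channels];
        clip.GetData(samples, 0);

        // Convert float samples to 16-bit signed integers.
        short[] pcm = new short[samples.Length];
        for (int i = 0; i < samples.Length; i++)
            pcm[i] = (short)(Mathf.Clamp(samples[i], -1f, 1f) * short.MaxValue);

        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            int byteRate = clip.frequency * clip.channels * 2;
            int dataSize = pcm.Length * 2;

            // Standard RIFF/WAVE header.
            writer.Write(Encoding.ASCII.GetBytes("RIFF"));
            writer.Write(36 + dataSize);
            writer.Write(Encoding.ASCII.GetBytes("WAVE"));
            writer.Write(Encoding.ASCII.GetBytes("fmt "));
            writer.Write(16);                         // fmt chunk size
            writer.Write((short)1);                   // PCM format
            writer.Write((short)clip.channels);
            writer.Write(clip.frequency);
            writer.Write(byteRate);
            writer.Write((short)(clip.channels * 2)); // block align
            writer.Write((short)16);                  // bits per sample
            writer.Write(Encoding.ASCII.GetBytes("data"));
            writer.Write(dataSize);
            foreach (short s in pcm) writer.Write(s);

            return stream.ToArray();
        }
    }
}
```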
2. Dynamic Health & Hunger Bars
Fig 1.5 Hierarchy and Asset
When building the final version of MystiAR, one of the critical enhancements I wanted to achieve was creating dynamic, responsive status bars to represent the pet’s Health and Hunger in a visually engaging and functional way. These bars are not just decorative UI elements; they play a central role in the gameplay loop, communicating the pet’s needs to the user and influencing how the user interacts with the AR environment.
During Task 3, I implemented status bars by stacking two rectangular layers—a background and a fill layer—while using a Horizontal Layout Group to handle alignment. Although this method was straightforward, it introduced significant limitations:
- The fill animation was jerky and lacked smoothness, breaking immersion.
- Adjusting the fill dynamically in response to pet behavior was inefficient and difficult to fine-tune.
- The overall UI lacked flexibility when scaling or reusing across different states.
These drawbacks became more apparent as I began connecting the bars to real-time events like feeding, potion crafting, and playing fetch. It became clear that a more refined approach was necessary to ensure both smooth animation and precise control over the bar’s behavior.
Fig 1.6 Health and Hunger Bar
To overcome these issues, I redesigned the bars entirely in Figma, giving them a modern and polished look with rounded corners and subtle gradients. This not only improved aesthetics but also provided a foundation for better technical control.
After exporting the assets, I implemented them in Unity using:
- Sliced backgrounds, ensuring the bar edges remain sharp while allowing the body to scale seamlessly.
- Filled color layers using Unity's Image fill method (Image.fillMethod with Image.fillAmount), which enables smooth, real-time changes as the value adjusts.
This method proved far superior, as it allowed the fill to dynamically scale based on percentage values without distortion. Once I perfected the first bar, I duplicated it to create the second and converted them into prefabs, streamlining adjustments and guaranteeing consistency.
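Driving a Filled-type Image this way can be sketched in a few lines. This is an illustrative pattern, assuming a 0–100 stat value; the field names are my own, not the project's.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: drive a Filled-type UI Image from a stat value,
// easing toward the target so changes feel smooth rather than jerky.
public class StatusBarFill : MonoBehaviour
{
    [SerializeField] private Image fillImage;   // Image Type must be "Filled"
    [SerializeField] private float lerpSpeed = 5f;

    private float targetFill = 1f;              // normalized 0..1

    public void SetValue(float current, float max)
    {
        targetFill = Mathf.Clamp01(current / max);
    }

    private void Update()
    {
        // Interpolate each frame instead of snapping to the new value.
        fillImage.fillAmount = Mathf.Lerp(fillImage.fillAmount, targetFill,
                                          lerpSpeed * Time.deltaTime);
    }
}
```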
Fig 1.7 Status bar manager
To make the bars functional, I developed a dedicated script, StatusBarManager, which controls their behavior dynamically. The script connects directly to the CatController, ensuring that changes in the pet’s state—such as hunger or health loss—are immediately reflected on the UI.
All critical parameters were intentionally kept public, allowing me to tweak values in real time during testing. This flexibility made balancing the game mechanics much smoother.
Core Parameters & Behaviors
- Initial Values: Both Health and Hunger start at 100.
- Hunger Depletion: Every 3 seconds, the hunger bar decreases by 5 points.
- Sleep Trigger: When hunger drops to 20, the pet enters a temporary sleep state, signaling it needs to eat.
- Critical State: If hunger reaches 0, the pet falls into a permanent sleep animation, and at this stage the Health Bar starts depleting—5 points every 5 seconds—to indicate a worsening condition.
- Health Depletion: Only begins when hunger is already at zero, emphasizing the urgency of feeding the pet.
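The decay rules above can be sketched with two simple coroutines. The timing values come from the post itself; the class and method names (including EnterSleepState) are illustrative, not the actual StatusBarManager/CatController API.

```csharp
using System.Collections;
using UnityEngine;

// Sketch: hunger drops 5 points every 3 s; once hunger hits 0,
// health drops 5 points every 5 s. Values match the rules above.
public class StatusDecaySketch : MonoBehaviour
{
    public float health = 100f;
    public float hunger = 100f;

    private void Start()
    {
        StartCoroutine(HungerTick());
        StartCoroutine(HealthTick());
    }

    private IEnumerator HungerTick()
    {
        while (true)
        {
            yield return new WaitForSeconds(3f);
            hunger = Mathf.Max(0f, hunger - 5f);
            if (hunger <= 20f)
                EnterSleepState(permanent: hunger <= 0f);
        }
    }

    private IEnumerator HealthTick()
    {
        while (true)
        {
            yield return new WaitForSeconds(5f);
            if (hunger <= 0f)   // health only decays once the pet is starving
                health = Mathf.Max(0f, health - 5f);
        }
    }

    private void EnterSleepState(bool permanent) { /* play sleep animation */ }
}
```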
To balance the status decay, I implemented three recovery mechanics, each tied to one of our core features:
- Feeding: Restores 50 Hunger and reserves a future-ready slot for eating effects (e.g., particle effects, animations).
- Potion Crafting: Restores 20 Health, reinforcing its magical utility.
- Playing Fetch: Restores 10 Health, rewarding playful interaction.
These recovery actions give players multiple strategies to care for their pet, making the game loop feel dynamic and interactive. The bars update in real-time as these actions are performed, offering instant feedback that feels satisfying.
Fig 1.8 Scripts
Additionally, the bar’s behavior was carefully synchronized with the pet’s animations and states. For example, when the pet is in sleep mode due to low hunger, the bar’s slow decrease reflects this passivity. When the user intervenes with food or potions, the bars jump back up in a way that feels both rewarding and natural.
A. Arrow Indicator
Fig 1.9 Arrow Indicator
At first, I considered implementing a screen-edge glow effect, where soft gradients would appear on the side of the screen pointing toward the cat. Although this idea was visually appealing, it was technically difficult to execute cleanly across different screen sizes and AR scenarios. Therefore, I opted for a more feasible and dynamic solution: a gradient-colored arrow that orbits the edges of the screen and constantly points to the pet’s exact position. This keeps the interface elegant while providing clear guidance.
Fig 1.10 Script
The arrow indicator script:
- Continuously calculates the cat's world position and converts it to screen coordinates.
- Detects when the pet is off-screen and clamps the arrow to the screen edge while pointing in the correct direction.
- Dynamically rotates the arrow so its orientation is intuitive and easy to read.
This approach ensures that the arrow always feels responsive and accurate, no matter where the pet moves. Combined with its gradient design, it adds a subtle yet effective visual cue without overwhelming the screen. Thanks to this feature, players never feel lost or anxious about their pet’s location. Instead, they can relax and enjoy the experience, knowing that they can always rely on this indicator to guide them.
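The three steps above can be sketched in a single Update loop. This is an illustrative version only, assuming the arrow sprite's art points up; the project's actual script and its RectTransform setup may differ.

```csharp
using UnityEngine;

// Sketch: project the pet into screen space, clamp the arrow to the
// screen edge when the pet is out of view, and rotate it toward the pet.
public class PetArrowIndicator : MonoBehaviour
{
    [SerializeField] private Transform pet;
    [SerializeField] private RectTransform arrow;
    [SerializeField] private float edgePadding = 60f;

    private void Update()
    {
        Vector3 screenPos = Camera.main.WorldToScreenPoint(pet.position);

        // Behind the camera: flip so the arrow still points the right way.
        if (screenPos.z < 0f) screenPos *= -1f;

        bool offScreen = screenPos.x < 0f || screenPos.x > Screen.width ||
                         screenPos.y < 0f || screenPos.y > Screen.height;
        arrow.gameObject.SetActive(offScreen);
        if (!offScreen) return;

        // Clamp the arrow to the screen edge with a small margin.
        Vector3 clamped = screenPos;
        clamped.x = Mathf.Clamp(clamped.x, edgePadding, Screen.width - edgePadding);
        clamped.y = Mathf.Clamp(clamped.y, edgePadding, Screen.height - edgePadding);
        arrow.position = clamped;

        // Rotate so the arrow points from its edge position toward the pet.
        Vector3 dir = (screenPos - clamped).normalized;
        float angle = Mathf.Atan2(dir.y, dir.x) * Mathf.Rad2Deg;
        arrow.rotation = Quaternion.Euler(0f, 0f, angle - 90f); // sprite points up
    }
}
```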
B. Voice Feedback with Text Display
Fig 1.11 Voice Feedback
While implementing the voice control system with Wit.ai, I realized that users could easily become confused if there wasn’t clear feedback about what the system was doing. For example, after tapping the voice button, users might wonder if the microphone is recording, whether the system is processing the command, or why nothing seems to be happening. To address this, I added voice feedback with text display to make the entire process transparent and user-friendly.
Fig 1.12 Script
The feedback flow works as follows:
- When the voice button is tapped, it turns red to visually confirm that recording has started.
- At the same time, a text label showing "Recording..." appears on screen, giving the user immediate confirmation.
- Once the audio is sent to Wit.ai for transcription, the text changes to "Processing voice input...", indicating the system is working.
- If the command is successfully recognized, the recognized phrase is displayed back to the user as confirmation.
- If it fails (e.g., unclear speech or a network error), specific messages such as "I didn't catch that. Try speaking clearly." or "Network error. Please try again." appear, telling the user what went wrong and how to fix it.
This feature ensures that players always know what state the system is in—recording, processing, success, or error. By combining visual (button color) and textual (status messages) feedback, the experience becomes more interactive, intuitive, and stress-free.
Fig 1.13 Light improvement
In the final stage of development, I focused on polishing the visual experience and solving usability issues related to feature reusability.
Firstly, I optimized the opening scene lighting. The cat model performs a smooth 3D movement toward the player to create the feeling of it leaping into the user’s arms. However, due to lighting inconsistencies, the cat’s white fur sometimes appeared discolored. To fix this, I added a Directional Light aimed directly at the cat, ensuring proper illumination and restoring its natural white tone.
Fig 1.14 Font
Fig 1.15 Script
Secondly, I reviewed each feature for reusability:
- Feeding and Playing Fetch already worked with reusable spawning mechanisms, so they required no changes.
- Potion Crafting and Ingredient Collection, however, were problematic. Initially, ingredients spawned at random positions, which sometimes caused them to disappear or hide behind objects. I redesigned this so they spawn at fixed elevated positions, requiring players to look up, down, or around to find them, adding light physical engagement while avoiding bugs.
- Previously, these objects became inactive after use and could not be reused without restarting the app. To solve this, I implemented reset functions (ResetPotion() and ResetIngredient()) that reposition objects to their original coordinates and reactivate them when needed. The cauldron also resets and disappears appropriately after potion use, ensuring smooth repeat interactions.
With these adjustments, all features became reusable and player-friendly. The system now allows endless interaction without forcing a restart, while also maintaining visual quality and user immersion.
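The reset pattern behind ResetPotion() and ResetIngredient() can be sketched generically: cache each object's spawn transform once, then restore and reactivate it for repeat interactions. The class name here is my own illustration, not the project's code.

```csharp
using UnityEngine;

// Sketch: remember the original placement, then restore it on demand
// so the interaction can be repeated without restarting the app.
public class ResettableItem : MonoBehaviour
{
    private Vector3 startPosition;
    private Quaternion startRotation;

    private void Awake()
    {
        // Cache the original placement before any interaction moves it.
        startPosition = transform.position;
        startRotation = transform.rotation;
    }

    public void ResetItem()
    {
        transform.SetPositionAndRotation(startPosition, startRotation);
        gameObject.SetActive(true); // reactivate after the previous use hid it
    }
}
```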
Walkthrough Video:
Feedback
Our lecturer highlighted that every planned feature—from core interactions like feeding, playing fetch, and potion crafting, to the enhanced mechanics such as voice command, arrow indicator, and dynamic status bars—was successfully implemented and polished. According to him, the project felt cohesive and immersive, offering exactly the interactive AR pet experience we aimed to create.
Receiving this feedback was extremely rewarding. It validated the countless hours of iteration, debugging, and design refinement we invested throughout the development process. More importantly, it confirmed that our efforts in combining technical features with user-friendly interactions paid off, resulting in an experience that feels alive and engaging.
This moment not only marked the successful completion of our final project but also became a milestone in our learning journey—proving that with persistence, teamwork, and creativity, we can turn an initial concept into a fully realized AR experience.
Reflections
Experience
Observations
Findings