Experiential Design - Task 1: Trending Experience
Index
Lectures
This week, Mr. Razif introduced the module by briefing us on the MIB and the expectations for our upcoming project. He also showcased senior blogs and their previous works — including AR apps for object assembly, relaxation quizzes, and smart furniture setups.
After that, we were asked to form groups to start brainstorming ideas for our own AR application. I chose to work in a pair for easier discussion and faster iteration. Mr. Razif also encouraged us to use AI bots to help us expand and refine our ideas.
[Fig 1.1: Week 2 Exercise Presentation]
This week’s lecture focused on the foundations of experiential design. We learned that XD (Experience Design) is a multidisciplinary practice that designs not just products or interfaces, but complete experiences — including physical, emotional, and cultural aspects.
We were introduced to:
- Empathy Maps: Tools for understanding users’ feelings, thoughts, and behaviors.
- Journey Maps: Diagrams showing the steps a user takes when interacting with a service or system.
Then we did a group exercise where we chose Tokyo Disneyland as our context. We mapped out the full user journey — from arriving at the park to leaving — and identified pain points, gain points, and possible solutions at each stage. Finally, we presented our findings.
This week, we deepened our understanding of AR and how it can be applied to solve real-world problems. Each group selected a scenario and came up with an AR app idea to improve the user experience in that setting.
Our group chose a hospital setting. We proposed an AR app that helps doctors explain medical conditions to patients — by visualizing the affected body parts and symptoms more clearly using AR.
After presenting our ideas, we began our first Unity tutorial, where we learned how to use Vuforia to create a basic marker-based AR experience by scanning an image and placing a virtual object.
We continued our Unity lessons this week. This time, we focused on creating simple animations within the AR environment. We practiced triggering animations when the image target is detected.
After that, we consulted Mr. Razif about our final project topic, where we presented our refined idea and received feedback on feasibility and improvements.
Instructions
- Students explore current market trends by doing exercises to understand new technologies and development platforms. Through research and experiments, they discover the strengths and limitations of these technologies to guide their final project choices.
- Complete all exercises to show understanding of platform fundamentals.
- Write a short reflective blog post based on research findings.
- Propose 3 potential ideas for the final project.
1. Research & Exploration
A. Understanding the difference between MR, VR, XR, AR
[Fig 2.1: Differences between MR, VR, AR]
Before diving into AR experiences, it's important to understand how AR fits into the broader world of immersive technology. The terms AR, VR, MR, and XR are often used interchangeably, but they represent different approaches to blending digital and physical worlds:
- AR (Augmented Reality) adds digital elements (like 3D objects or text) onto the real world through a screen or glasses. Example: Pokémon GO.
- VR (Virtual Reality) places users in a fully digital environment, often using headsets. Example: Beat Saber.
- MR (Mixed Reality) blends digital and real worlds in a way that allows physical and virtual objects to interact in real time. Example: HoloLens experiences.
- XR (Extended Reality) is an umbrella term that includes AR, VR, and MR. It refers to any technology that merges the physical and digital.
B. How AR Works
After understanding the basics of XR, I focused on AR and how it works. AR technology generally falls into two types:
- Marker-based AR uses printed images or QR codes as visual triggers to display virtual content.
- Markerless AR (also called World Tracking) uses plane detection and sensors to place digital objects directly into the real world without the need for a marker.
[Fig 2.2: Marker-Based AR Example]

Museums and cultural institutions have been notable adopters of marker-based AR and related immersive technology. Some examples:
- Natural History Museum (London) – Visions of Nature (Oct 2024): Mixed-reality experience using Microsoft HoloLens 2 to depict futuristic ecosystems, including hybrid species like “narlugas.” It raises awareness about environmental issues.
- De Young Museum (San Francisco) – Fashioning San Francisco: Visitors virtually tried on iconic designer outfits via AR try-on booths developed with Snap Inc.
- Muséum national d’Histoire naturelle (Paris) – REVIVRE: Visitors interacted with life-sized 3D models of extinct animals, bringing vanished species like giant tortoises to life.
- The National Gallery (London): AR app enabled classic and modern artworks to appear in public spaces across the city, expanding the museum’s reach beyond its physical walls.
- National Museum of Singapore – Story of the Forest: Used AR to turn natural history drawings into animated 3D creatures, creating an interactive, educational scavenger hunt.
- Art Gallery of Ontario (Toronto) – ReBlink: Digital artist Alex Mayhew reimagined classic paintings with modern twists (e.g., subjects using phones), encouraging dialogue about technology’s role in daily life.
- Smithsonian Institution (Washington D.C.) – Skin and Bone: Used AR to reconstruct muscles and movements over skeletons, reviving historic exhibits like vampire bats in flight.
- Pérez Art Museum (Miami) – Invasive Species: A fully digital AR installation that transformed museum spaces into speculative futures overtaken by climate-threatened invasive species.
- Kennedy Space Center (Florida) – Heroes and Legends: AR holograms recreated astronaut Gene Cernan’s intense 1966 spacewalk, enhancing historical storytelling with immersive visuals and voice narration.
Unlike marker-based AR, markerless AR relies on spatial tracking (SLAM) to detect flat surfaces and anchor virtual objects. Apps like IKEA Place and Snapchat filters use this method. I tried several AR demos and found it surprisingly intuitive. Being able to move freely around objects added a deeper sense of realism and presence.
[Fig 2.4: SLAM]
SLAM is the technology behind markerless AR. It allows the device to recognize flat surfaces and understand the environment by building a 3D map in real time. This enables stable placement of virtual objects.
[Fig 2.5: IKEA Place App]
IKEA Place is an app that uses markerless AR to let users visualize furniture in their home. You can select a sofa and see exactly how it fits in your room using your phone camera.
Effect: Helps users make better purchase decisions by previewing size and style.

[Fig 2.6: Snapchat AR filters]
Snapchat filters use facial recognition and markerless AR to add effects to faces or surroundings, like dog ears, makeup, or 3D hats. The app tracks the user’s facial features in real time.
Effect: Enhances creativity and expression in social communication.
Through research and testing, I explored AR’s wide range of applications. Here are four that I found especially relevant and inspiring:
- Education & Culture AR – Projects like WarisanXR show how AR makes cultural content more engaging.
- AR Navigation – Like Google Maps Live View, helps users navigate real environments with virtual cues.
- AR Art Installations – Immersive exhibitions like TeamLab turn spaces into interactive digital galleries.
- Body Interaction AR – Tracks movements for AR yoga, dance, or fitness coaching.
AR is increasingly used to turn static cultural content into interactive experiences. I looked into projects like Google Expeditions and WarisanXR’s cultural AR showcase. It was fascinating to see how AR brought stories, tools, and rituals to life. It also helped users better understand abstract cultural elements.
[Fig 2.7: Google Expeditions]
This app allows students to explore historical sites, space, or underwater environments through AR/VR. Using AR, teachers can project volcanoes, skeletons, or planets in the classroom.
Effect: Makes learning visual and memorable.
[Fig 2.8: WarisanXR’s cultural AR showcase]
WarisanXR is a Malaysian AR project that brings cultural artifacts to life. Users can scan a traditional object and see how it was used—e.g., how a weapon is held or a dance is performed.
Effect: Preserves intangible heritage and improves cultural understanding.
AR navigation overlays directional prompts and spatial information onto the real world. I tested AR walk navigation in Google Maps and explored mall directory concepts. It felt futuristic, especially when arrows and pins appeared directly in my environment.
[Fig 2.9: AR walk navigation in Google Maps]

Google Maps’ Live View feature overlays arrows, directions, and landmarks on your camera view during walking navigation.
Effect: Reduces wrong turns and improves spatial awareness.
[Fig 2.10: AR mall navigation]

AR mall directories use ceiling or floor anchors to project routes inside shopping malls. Users can follow a floating path to their destination.
Effect: Improves wayfinding in large, complex indoor spaces.
I explored AR art pieces such as TeamLab's immersive experiences and virtual graffiti overlays. These installations blend physical and digital spaces to create interactive storytelling. Some artworks responded to viewer movement or sound, which added depth to the experience.
[Fig 2.11: TeamLab’s immersive experiences]

TeamLab combines digital art with AR to create immersive installations where visuals react to user movement and touch.
Effect: Turns art into a living, interactive space.
[Fig 2.12: Virtual graffiti]

AR graffiti apps allow artists to paint virtual graffiti in public spaces without damaging property. Viewers can see animated or 3D artworks through their phone.
Effect: Promotes street art in legal and dynamic ways.
Body interaction AR uses gesture tracking and full-body movement as input. I looked into AR yoga guides and fitness apps that respond to posture and timing. It felt personal and responsive, especially when instructions aligned with my actual pose.
[Fig 2.13: AR yoga guides]

These guides use pose detection to correct users’ yoga postures in real time. AR overlays show the ideal form beside or over the user’s body.
Effect: Helps users improve technique at home.
[Fig 2.14: AR Fitness Apps]

Apps like FitXR or Supernatural use AR/VR to gamify workouts. Users follow motion-based instructions, and the app tracks their movement accuracy.
Effect: Makes exercise more engaging and motivating.
This research gave me a new appreciation for how AR blends storytelling, functionality, and design. I was especially surprised by how accessible marker-based AR still is in cultural settings. On the other hand, I found markerless AR more immersive, making it ideal for everyday and personalized use.
I also realized that AR has the power to turn passive viewing into interactive experiences, whether through education, fitness, or even public art. It made me think deeper about how experiential design can be shaped not just by visuals, but by space, motion, and participation.
A good AR tool can enhance our experiences in many areas — not just in exhibitions or games, but also in daily life. For example, many cars overseas already use AR for navigation, making routes clearer. In contrast, in places like Malaysia, confusing road signs and complex junctions often cause people to take wrong turns and waste time. AR could help solve that.
Similarly, museums often rely heavily on text panels. If a visitor doesn't understand the language, they may lose interest quickly. But with AR, visual explanations and animations can bridge that gap. As technology continues to grow, it's clear that we are moving toward simpler, more intuitive ways of living and learning.
2. Weekly Reflections on Class Activities and Exercises
Week 2: Empathy Maps & Journey Maps – Tokyo Disneyland
[Fig 3.1: Week 2 Exercise Presentation]
This week, we were introduced to the concepts of empathy maps and journey maps. Our group selected Tokyo Disneyland as the user scenario for the journey map exercise. During the discussion, our lecturer emphasized the importance of having real-life experience with the location, as UX design should be grounded in actual user behavior — not assumptions.
We mapped out a full theme park experience, from ticket purchase and arrival, to queuing, dining, shopping, and leaving. For each step, we identified pain points, gain points, and proposed solutions. While some issues were solvable (e.g., long queues → mobile app updates), others — like high prices — were simply part of the reality, with no clear solution. It reminded us that not every pain point can be “fixed”, and designers must recognize limitations.
Through this exercise, I learned the value of observing real-world behavior instead of relying on assumptions. In UX design, it’s critical to design with users, not just for users. Empathy mapping helped us identify motivations and barriers that aren’t obvious from the outside.
This also led me to reflect on how AR technology could respond to specific needs in environments like theme parks. For example:
- In crowded areas, AR could help with real-time navigation or estimated queue times.
- In dark settings like evening parades, AR might struggle with marker detection — highlighting the importance of environment-aware design.
- On the other hand, AR can enhance experiences through virtual dressing rooms, selfie filters, or interactive AR shows, helping people create personalized memories.
Week 3: AR for Medical Communication – Hospital Scenario
[Fig 3.2: Week 3 Exercise Presentation]
This week, we were tasked with selecting a scenario and designing an AR app to improve user experience in that context. Our group chose a hospital setting. We focused on a common issue: patients often find it difficult to describe their symptoms or understand the doctor’s explanation, especially if they lack medical knowledge.
We proposed an AR app that allows patients to indicate pain points visually on a 3D body model, while doctors can use the app to show internal anatomy, explain diagnoses, or overlay scan results like MRI or CT images. This would greatly reduce communication barriers and enhance medical understanding.
We also developed several mockups to visualize how the app could function during a consultation.
As someone with a medical background, I’ve always viewed medical consultations as opportunities for knowledge — to clarify doubts and learn more about the body. However, not all patients have that confidence or interest, and some might leave more confused than before.
I believe AR has the potential to bridge that knowledge gap. Seeing internal organs in 3D, or understanding a diagnosis through layered visuals, can transform how people perceive health. It also empowers patients to ask better questions and retain more information.
Moreover, AR’s ability to personalize visual aids could lead to more empathetic and informed care, especially for visual learners or multilingual patients who struggle with traditional explanations.
Unity
1️⃣ Set Up Unity & Import Vuforia

- Start a New Project
  - Open Unity Hub → Click “New” → Choose 3D project.
- Import Vuforia SDK
  - Open your Unity project → Go to Window > Package Manager.
  - Click the + icon → Select Add package from disk….
  - Locate and select the .unitypackage file downloaded from the Vuforia website.
2️⃣ Get a Vuforia License Key
[Fig 3.3: Vuforia]
You’ll also need to download the database and import the package into Unity. Then, go to the License Manager and generate a license key, which will later be used in the Vuforia settings within Unity.
Next, go to the Target Manager to create a new database and upload your target image. Once uploaded, Vuforia will display a rating that indicates how easily the image can be recognized. A higher rating ensures the marker-based AR will function smoothly without recognition errors.
- Register & Log in to Vuforia
  - Go to developer.vuforia.com and create an account.
- Create a License Key
  - After logging in, go to the “Develop” tab → Select “License Manager” → Click Get Basic.
  - Name your app, select “Development” type, agree to terms, and click “Confirm”.
  - Copy the generated License Key.
- Add License Key in Unity
  - In Unity, go to Window > Vuforia Engine > Configuration.
  - Paste your license key into the App License Key field in the Inspector.
3️⃣ Create & Import Target Database
At this step, make sure that when you use the AR Camera, you paste the license key correctly into the configuration field. Then, in the Image Target settings, assign the correct database and select your uploaded image.
After that, create a child object under the image target — for example, a 3D Cube — which will be displayed when the target image is recognized.
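The same parent–child setup can also be sketched in code. A minimal example in C# (the `CubeSpawner` name and `imageTarget` field are illustrative assumptions, not from the tutorial):

```csharp
using UnityEngine;

// Sketch: spawn a cube as a child of the Image Target at runtime, so it
// appears attached to the marker when the target is recognized. Attach
// this to any scene object and drag the Image Target's Transform into
// the Inspector field.
public class CubeSpawner : MonoBehaviour
{
    [SerializeField] private Transform imageTarget; // the Image Target

    void Start()
    {
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.SetParent(imageTarget, false); // child of the target
        cube.transform.localPosition = Vector3.up * 0.5f; // rest on the marker
        cube.transform.localScale = Vector3.one * 0.3f;   // scale to the target
    }
}
```

Parenting in the Hierarchy (as described above) achieves the same result without code; this is only to show what the editor steps correspond to.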
- Create a Database in Vuforia
  - On the Vuforia site, go to Target Manager → Click Add Database.
  - Name your database, choose “Device” type, then click “Create”.
- Add Image Target
  - Inside the database → Click Add Target.
  - Choose “Single Image” → Upload your image → Set width (e.g., 5 units) → Name the target → Click “Add”.
- Download & Import to Unity
  - In the database, select your image target → Click “Download Database”.
  - Choose “Unity Editor” as platform → Download.
  - Back in Unity, double-click the .unitypackage file to import.
4️⃣ Add AR Camera & Image Target in Unity
[Fig 3.5: AR Camera & Image Target]
Once everything is set up, you can test the result using a webcam. If the image is detected correctly, the cube will appear.
- Add AR Camera
  - Delete the default Main Camera from the scene.
  - Go to GameObject > Vuforia Engine > AR Camera.
- Add Image Target
  - Go to GameObject > Vuforia Engine > Image Target.
  - In the Inspector, under the Image Target settings:
    - Database: Select the name of your imported database.
    - Image Target: Select the specific image you added earlier.
- Import Your 3D Model
  - Drag and drop your .fbx or 3D asset into the Unity Assets folder.
- Set Up the Model
  - Drag your 3D model onto the Image Target in the Hierarchy to make it a child object.
  - Adjust the model’s position, scale, and rotation so it displays correctly when recognized.
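Target detection can also be observed from a script. A hedged sketch, assuming the Vuforia 10 Engine API (`ObserverBehaviour.OnTargetStatusChanged`) is available; the class name is an illustrative assumption:

```csharp
using UnityEngine;
using Vuforia;

// Sketch: log when the image target is found or lost, instead of relying
// only on Vuforia's default show/hide behaviour. Attach this script to
// the Image Target GameObject.
public class TargetStatusLogger : MonoBehaviour
{
    void Start()
    {
        var observer = GetComponent<ObserverBehaviour>();
        observer.OnTargetStatusChanged += (behaviour, targetStatus) =>
        {
            bool tracked = targetStatus.Status == Status.TRACKED ||
                           targetStatus.Status == Status.EXTENDED_TRACKED;
            Debug.Log(tracked ? "Target found" : "Target lost");
        };
    }
}
```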
Week 4: UI Buttons & Animations

In Week 4, we continued our Unity AR development and learned how to create simple user interfaces using buttons.
To begin with, we learned how to create a UI button in Unity. Once we added a Canvas, the Event System was automatically generated. Inside the Canvas, we created a button and adjusted its size. For the text label, we used TextMeshPro (TMP Text) to replace the default text component.
[Fig 3.7: Hide and Show]
We then linked the button’s On Click() event to control object visibility. Using the SetActive(bool) method, we created two buttons — one for Show and another for Hide — toggling the visibility of a selected GameObject based on whether its active checkbox is ticked or unticked.
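The Show/Hide wiring described above can be captured in a small helper script (names are illustrative; each button’s On Click() event would call `Show()` or `Hide()`):

```csharp
using UnityEngine;

// Sketch: toggle a GameObject's visibility from UI buttons.
// Drag the object to control into the Inspector field, then wire one
// button's On Click() to Show() and the other's to Hide().
public class VisibilityToggle : MonoBehaviour
{
    [SerializeField] private GameObject target; // object to show/hide

    public void Show() { target.SetActive(true); }
    public void Hide() { target.SetActive(false); }
}
```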
[Fig 3.8: Animation]
Next, we learned how to create basic animations. We selected a cube and opened the Animation panel. By clicking the Record button next to “Preview”, we could manually keyframe the object’s transformations along a timeline, such as rotation or position changes.
[Fig 3.9: Animation Stop and Run]
Finally, we connected the buttons to control animation playback. Using the Animator component and a boolean parameter, we set one button to enable the animation and another to stop it. The control logic was again done through the On Click() event by toggling the animator’s boolean value — one for play, one for stop.
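As a sketch, the play/stop logic can look like this (assuming the Animator Controller defines a bool parameter — here named `isPlaying`, an illustrative assumption — that gates the transition into the animation state):

```csharp
using UnityEngine;

// Sketch: flip an Animator bool from UI buttons to start/stop playback.
// Wire one button's On Click() to Play() and the other's to Stop().
public class AnimationPlaybackToggle : MonoBehaviour
{
    [SerializeField] private Animator animator; // the cube's Animator

    public void Play() { animator.SetBool("isPlaying", true); }
    public void Stop() { animator.SetBool("isPlaying", false); }
}
```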
3. Proposal of 3 AR Project Ideas
My 3 Ideas for AR Project:
Our Final Ideation for AR App:
After discussing with my teammate and Mr. Razif, we decided to go with the Companion Pet AR experience. Attached below is the rough ideation, which we will continue to develop in the next task.
Feedback
Reflections
Experience:
Observation:
Findings:
I realized that effective AR design requires a balance between technical feasibility and user experience. For example, features like voice control or body tracking are exciting, but also complex to implement. Instead of overloading the app, we learned to focus on the core interactive value — what users truly need in the moment.
I also found that AR is not just about wow effects, but about solving small frustrations in a creative way — whether it’s making museum visits easier, helping patients communicate better, or turning pets into digital companions.