Task 1: Trending Experiences


Week 1 

Today was the first day of class, and Mr Razif asked us a (very) basic question that, for some reason, I didn't know the answer to.

What is Experiential Design?

After a few wrong answers, he then explained:
Experiential Design focuses on experiences that deeply engage users across various senses and emotions. Under this umbrella lies Experience Design, which includes more specific areas such as User Experience (UX) Design — which shapes how users interact with products, systems, or services.

For our project, we will be applying Experiential Design through mobile AR (Augmented Reality). This form of experiential design allows users to interact with digital elements layered onto their physical environment, making the experience more immersive and memorable.

He also gave us some keywords and tips, such as:

Minimum Viable Product (MVP):
An MVP is a simple version of the product, such as a wireframe or a prototype, created to test the core functionality. It helps in quickly validating ideas without building a full product.

Vertical Slice:
A vertical slice refers to creating a fully functioning small part of a larger project. Instead of doing a basic mock-up of everything, we pick a small feature and complete it in full detail. For example, in a shopping app, a vertical slice could be animating an item being added to a cart, complete with interactions and feedback.

Exploring Interaction with Visuals:
We learned that simple visual interactions — like animations, color changes, and visual feedback — can dramatically improve the user’s experience. The goal is to create a maximum, memorable impact through minimal, focused interactions.

Facing Difficulties and Moving Forward:
After establishing an idea, the next step is to identify potential challenges early. By doing so, we can adjust, improve, and ensure the project is finished within the timeframe (not like my games development disaster...)


AR in Education: Components for Learning

In the context of educational AR experiences, we explored three major components:

  • Demonstration: Showing a concept or process visually through AR.

  • Practical: Allowing users to actively engage with or manipulate digital elements.

  • Gauging Effectiveness: Building features that check whether users have understood or learned from the experience.

Types of AR interactions we can create include:

  • Scan and Show: Using a marker (like a printed image) that triggers an AR model or animation.

  • Space Creation: Adding virtual elements into real-world environments, both indoors and outdoors, to create new spaces or scenes.

Reflection

At the beginning of the semester, I felt quite nervous about starting this course. Experiential Design sounds exciting, but also a bit intimidating because it combines creativity with technical challenges (also because all of the modules Mr Razif teaches are so difficult TT (not ur fault sir i promise)). However, I was relieved to know that we could form groups for our ongoing project — I’m teaming up with Naura, who has always been my emotional support hehe. 

We have started brainstorming together to develop strong ideas. Our goal is for each of us to come up with two ideas over the weekend. Hopefully, by combining our efforts, we can create an amazing project concept that we're both passionate about.



Week 2 

Fig 1. A straightforward diagram showing different fields of design.

In this week’s Experiential Design class, we explored the subtle but significant differences between customer experience (CX) and user experience (UX). While CX focuses on the broader impression a person has of a brand—often influenced by their purchase or service experience—UX dives into the actual interaction with the product or service.

Mr Razif then reminded us how essential it is for companies to align their brand identity with the way they present and design their digital or physical experiences.


Fig 2. A Venn diagram showing the comparison between UX and XD design.

We also revisited important foundations such as user personas (which we learned from Mr Yusri), which group users by common traits and behaviors, and information architecture, the hierarchy or structure of information in an interface, which is crucial to making the experiences we design intuitive.

A standout part of today’s session was learning about empathy maps—a tool that helps teams gain deeper insight into what users say, think, feel, and do. This tool supports decision-making by aligning team perspectives around real user experiences.

Fig 3. Our class activity User Journey Map.

Journey Mapping Activity: Taylor’s Campus Edition

Our class activity involved creating a user journey map of a typical day on Taylor’s campus. This included the journey to class, finding food or toilets, and post-class relaxation spots like the library or lakeside. I realized how many pain points we subconsciously tolerate—like long elevator waits or crowded student centers. Yet, there are also thoughtful gain points, like AR face recognition or the lakeside’s calming presence.

One big takeaway was how customizable journey maps can be. There's no fixed format — they can blend with storyboards or be enhanced with emotion icons or mood indicators, depending on what we want to communicate. In this case, Mr Razif did mention that if we used emojis or visuals to represent the gain points, pain points, and solutions, it would be more appealing.


Trending Experiences Ideas

I also discussed two of my current ideas with Naura and Mr Razif. One was an AR Home Designer, where I wanted an affordable AR home interior planning tool that lets users scan and recreate their rooms using LiDAR (or standard depth scanning) and then rearrange furniture virtually using physical QR markers or chip tags. Unfortunately, this idea was already busted, because Mr Razif said there's already an existing AR app called IKEA Place. (Clearly I didn't do enough research before, and I guess it's hard to have an original thought in this world...)

Another setback would be choosing what to focus on in the app, such as whether I wanted to make it about room scanning or furniture placement, as I would need to focus on one and not the other due to time constraints and the learning resources needed. Lastly, even if I did create something like IKEA Place, what would make my application different from theirs?

Fig 4. IKEA Place in simulation.

My second idea was a Gym Workout Buddy, an AR experience where a virtual trainer avatar demonstrates gym equipment usage or bodyweight exercises beside the user in real time. The avatar can be selected based on gender or fitness style, and would overlay correct posture/form in the physical space. I thought of this idea because I had visited the gym before on my own, and not only was it full of testosterone (guys.. everywhere..), the equipment also looked confusing. Even when watching YouTube videos, it was difficult to translate the same understanding due to only seeing one perspective on the screen.

When I told Naura about this, she consulted our ChatGPT TDS assistant, and it mentioned it would be difficult to portray correct posture due to tedious rigging and other technical issues with the animation. Because of this, I decided to change my idea so that the mascot would demonstrate how to use the machine, but correct posture/form would not be included. Instead, the workout buddy would be able to do floor exercises or anything with simple animations (planks, jumping rope) that would still serve its purpose as a workout buddy.

Mr Razif seemed quite okay with the idea so far, and I went back home to continue researching.

Personal Reflection

I found this activity really enjoyable and insightful. It made me more aware of the small frustrations I’ve gotten used to on campus and how they could be solved. Hearing my groupmates’ different perspectives also reminded me that even within a shared environment, user experience varies greatly. Everyone’s interaction with the same space can differ based on their priorities, habits, or even mobility.

This week's lecture also helped me realize how experience design is about empathy and systems thinking. It’s not just about problem-solving, but it’s about understanding how people move through environments and how subtle improvements can make a big difference.

Also, it's very difficult to have an original idea in 2025. 

Here is our future journey map that we did below:

Fig 5. Our future user journey map for Taylor's students.


Week 3

This week, we were introduced to Extended Reality (XR), which includes Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). These technologies all aim to enhance user experience, but each does so in unique ways.

Fig 6. The differences between AR, VR, and MR.

  • Augmented Reality (AR)
    Example: IKEA Place
    AR overlays virtual content onto the real world. It enhances physical space without replacing it — for example, placing a digital sofa in your living room through your phone.

  • Mixed Reality (MR)
    Example: Smart Home Systems
    MR allows users to interact with and manipulate virtual objects in real time, while also respecting real-world physics. It’s like AR but with deeper interaction — virtual switches that respond to touch or voice.

  • Virtual Reality (VR)
    Example: Minecraft VR
    VR replaces the real world entirely with a digital environment. It offers full immersion and a strong sense of “presence,” where users feel like they’re truly somewhere else.


What AR can do, MR can also do. But what MR can do, AR can only sometimes do.

Fig 7. A screenshot of the student's artwork in class.

Mr Razif let us try AR projects done by previous students. This one in particular is by the Fashion Design students, who scanned Barbie dolls and made clothes for them, allowing the outfits to be displayed much more easily in AR. This was demonstrated to us as a Mobile AR experience; it felt personal and customized, as different parts of the model's clothes came into view as you moved your phone.

Types of AR Experiences:

  • Mobile AR (Personal)
    The most accessible AR. It involves interacting with digital content through smartphones, e.g., changing object colors or animations as you move the phone. This creates a two-layer interaction between user and environment.

  • Projection AR (Shared)
    AR is projected into the physical world — think museum installations or digital graffiti that several people can see and interact with together.

  • Head Mounted Displays (HMD)
    These include smart glasses or VR headsets that integrate SFX, VFX, touch/voice activation to create more immersive AR experiences.

There are two main approaches to activating AR content:

  • Marker-Based AR
    Requires a visual trigger (like an image or QR code). Common in print or packaging — scan the marker to see content appear.

  • Marker-less AR
    Uses sensors, GPS, or LiDAR to anchor AR content in the environment. This method doesn’t require physical prompts to activate content.

We also learned that designing effective AR isn’t just about visuals — it blends UX design, usability, and interaction design. We also touched on:

  • Design Thinking: Keeping the user at the center of the problem-solving process.

  • Human-Computer Interaction (HCI): Understanding how users interact with technology to design more intuitive experiences.

Class Activity (I forgot what this one was specifically called)

After our lecture, we had a class activity where we had to choose a place and then propose an AR solution for it. (Mr Razif complained about everyone choosing the gym, so we couldn't reuse my workout buddy idea T_T) Crystal had the idea of using the Hair Salon as our designated place, and surprisingly we all could relate to the Hair Salon giving us problems.

Even when customers bring reference photos to a salon, they often struggle to accurately visualize how a chosen hairstyle or hair color would look on their own face and hair texture. Also, hairstylists may have their own vision when it comes to your haircut, or they may simply forget. This can result in disappointment when the outcome doesn’t align with expectations.

We decided to propose an AR-powered hair salon mirror that uses facial tracking and 3D modeling to:

  • Let customers preview hairstyles and colors in real time, accurately mapped to their face shape and hair type.

  • Show live treatment progress updates — like how long until the next wash, or a visual timeline of each stage.

  • Include “style limits” overlays, where clients can draw or set boundaries (e.g., desired hair length), and stylists can reference them throughout the session.

This idea combines marker-less AR and real-time visualization, creating a more transparent and reassuring salon experience.


Fig 8. My chosen image to put into the database. 2D, and has contrast.

Unity & Vuforia

Lastly, today we started on using Unity and adding Vuforia inside as our package (AAAAAAAAA I can feel my future self stressing right now.)

Firstly, we had to download Vuforia and add it into Unity. After that, Mr Razif explained some of the features of the AR Camera in the Vuforia package, which appeared when we added it to our scene. By using the serial key that we had generated and importing our database, we were able to get the camera to recognize our chosen image. In turn, we got a cube.

Fig 9. Downloading Vuforia and uploading the image into the database (I got 5 stars!)


Fig 10. Yay my cube worked!

Mr Razif then told us to place a 3D model instead of the cube so that it feels like a more realistic experience (do it before you go home!). So I found a 3D fish that's free on the internet and aligned it with my fish image. This was an easier process than I expected, but I'm still going to be cautious with Unity because it traumatized me a bit too well in Sem 3...

Fig 11. My free 3D fish working :)

Personal Reflection

Mr. Razif’s lectures are honestly quite nice. There's a nice mix of theory, visuals, and practical demos that makes the long class hours digestible, and he has a way of explaining things that makes sense.
The class activity was genuinely fun — it reminded me how good design starts with empathy and shared experience. Starting Unity again was scary (no thanks to my past trauma), but the small win of seeing my 3D fish appear has motivated me more than I expected.
I hope I can carry this momentum into the real project. If my fish can swim, maybe I can too.

Week 4

Fig 12. Adding Buttons in Unity

This week we had no lecture, and focused solely on Unity and Vuforia. Today was a continuation of our activity from last week. 

We learnt how to incorporate buttons without coding for our future UI ideas. Here, we created START and STOP buttons using Canvas > TextMeshPro and also resized our screen to an iOS dimension (I chose iPhone 11 Pro). Beforehand, we also created an animation of our model using the Animator attached to the item and the animation timeline.

By creating a list of actions on the button and referencing the fish model in it, we utilized the GameObject.SetActive bool, which takes a true or false value. By ticking or unticking the box, you set the state the button puts the object in. When testing, pressing the button toggles it to the opposite state: pressing 'STOP' will stop the animation and pressing 'START' will start it again.
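For my own reference, the same toggle behaviour could also be done with a tiny script instead of the Inspector's OnClick list. This is just a sketch of mine, not what we did in class, and the field names are my own:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper: mirrors the no-code OnClick > GameObject.SetActive setup.
// SetActive(false) disables the whole object (including its Animator), which is
// why the animation stops when STOP is pressed.
public class AnimationToggle : MonoBehaviour
{
    public GameObject targetModel; // e.g. the fish, assigned in the Inspector
    public Button startButton;
    public Button stopButton;

    void Start()
    {
        startButton.onClick.AddListener(() => targetModel.SetActive(true));
        stopButton.onClick.AddListener(() => targetModel.SetActive(false));
    }
}
```

If I only wanted to pause the animation (not hide the model), toggling the Animator component's enabled flag instead of the whole GameObject would probably be the cleaner option.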

Fig 13. Adding Buttons and Controlled Animations

We then also applied this same tactic to include another animation, this time with a sphere running, by adding another state to the list that included the sphere as well.

Fig 14. Changing colors when pressed (impacts user experience)

Mr Razif also explored the options for customizing our buttons, such as changing the color when pressed or hovered, changing the font, and so on.

Fig 15. Video of my Fish animation controlled with the Buttons

Mr Razif then explained to us that if we put the two START and STOP buttons in the same center alignment, it would look like they were the same button. This was a 'hack' to make it look like our UI was coded and (I guess) cooler? It would also make the user experience more intuitive than just adding the pressed colors.
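The 'same spot' trick can also be pushed a little further so each press swaps which button is visible, making it read as one toggling button. A sketch of my own (assuming both buttons overlap at the same anchor, with these methods hooked to their OnClick events):

```csharp
using UnityEngine;

// Hypothetical: START and STOP sit at the same anchor; each press hides
// itself and reveals the other, so the pair behaves like a single button.
public class ButtonSwap : MonoBehaviour
{
    public GameObject startButton;
    public GameObject stopButton;

    // Hook this to START's OnClick in the Inspector.
    public void OnStartPressed()
    {
        startButton.SetActive(false);
        stopButton.SetActive(true);
    }

    // Hook this to STOP's OnClick.
    public void OnStopPressed()
    {
        stopButton.SetActive(false);
        startButton.SetActive(true);
    }
}
```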

I also had a little bit of a problem when my sphere animation wouldn't play (as you can see in the video), and I suspected it was because I made a previous animation called Sphere_Bounce and changed my mind, deleted it, and made Sphere_Run instead. Because of my previous Unity experience in Games Development, I went to the Animator, and found that the system had tagged Sphere_Bounce as the default animation despite me deleting the file in the Sources folder already. Therefore, I deleted it in the animator and attached Sphere_Run as the default, and my problem was solved. 

Fig 16. The Final Product (starts at 0:13 seconds)

Lastly, Mr Razif challenged us to try to create a separate START and STOP button for the sphere. (luckily my animation was working now) I duplicated the previous START and STOP button, and instead only tagged the sphere and removed the fish model, and vice versa. (I also improved the fish animation just in case it wasn't obvious enough)

Personal Reflection

Today's class was very fun and intuitive in showing how we can incorporate UI into our AR application without coding. As someone who isn't very good at coding, I think this saves time and stress when it comes to my actual AR project in the future. By utilising all of Unity's features, like the color-changing pressed button, changing the font, alignment in the canvas, and so on, this promotes the 'work smarter, not harder' ideology that I really resonate with (xD). I can foresee myself using this in my future AR project, so that there is less for me to code and get confused with.

While there was no theory today, I think that this practical hands-on lesson was very impactful, and I appreciate Mr Razif's understanding in ending the class early (due to my situation :'D   ).


3 Potential Ideas

Here are my 3 potential ideas that I have come up with for the past 3 weeks :)


I also researched what other applications were similar to RoomSketch, and whether or not it was unique in the marketplace. I found a few, which I have listed below.

  • IKEA Place: Lets users place IKEA furniture in AR, to scale

  • Magicplan / RoomScan Pro: Scans rooms to create floor plans, but not for adding decorative architecture

  • Houzz / Planner 5D: Interior design tools with some AR preview, but complex and brand/product focused

  • Morpholio AR Sketchwalk: For architects to walk through 3D models, not accessible for general users

Thankfully they're not all too similar, with some points of difference being:
  • Changing the room’s structure (wainscotting, arches, doors) and not adding furniture.
  • RoomSketch is for people who want to visualize renovations or changing their space architecturally.
  • Could be stylized with different themes (e.g., “modern,” “vintage,” “cottagecore”)

Software/Technicalities:

  • Unity
  • AR Foundation (Vuforia's ground-plane detection only works on horizontal flat surfaces, not vertical ones, which is what I need for putting elements on the wall, etc.)
  • Figma
  • Blender
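
Since vertical plane detection is the main reason for picking AR Foundation here, a minimal sketch of how it might be enabled (assuming AR Foundation's ARPlaneManager; this is from my own reading, not something we covered in class):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical setup script: restricts plane detection to vertical
// surfaces (walls) so doors/arches can be anchored onto them.
public class WallDetectionSetup : MonoBehaviour
{
    public ARPlaneManager planeManager; // lives on the AR Session Origin

    void Start()
    {
        // Vertical finds walls; Horizontal would find floors and tables instead.
        planeManager.requestedDetectionMode = PlaneDetectionMode.Vertical;
    }
}
```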
I put my idea into the ChatGPT XD assistant, and it mentioned some challenges that I might face if I decided to pursue this idea:

Potential Challenges 

  • Getting overlays to “stick” on walls convincingly
  • Avoiding floating/incorrectly placed elements
  • Keeping UI clear for toggling designs

Mockup (with Canva)

Fig 17. Room Sketch mockup (Adding a door)

This mockup represents an AR tool that lets users preview architectural changes to their room, in this case it's a door.

In this scenario, the user scans a chip, or a QR code they had placed on the wall. Once scanned, a 3D door model appears in correct perspective on the vertical surface, allowing the user to visualize how it fits within their real room dimensions.

Personal Thoughts

Overall, I think that this is a very safe idea for me to experiment with, as there are many similar existing applications, YouTube tutorials, and the expertise of Mr Razif that would make this project very doable. Mr Razif himself also gave me his thoughts during our consultation on the 13th of May, saying this idea would be considered 'Medium Tier' and still great for exploration. Since I was younger, I have loved the idea of interior design and decorating my room, and while I might not have a lot of budget to actually do so, an application like this would have enhanced my creativity to many levels.



Similarly to RoomSketch, I found apps which were, unfortunately, very similar to, if not even better than, Workout Buddy.

Similar Applications and Uses

  • Vi Trainer: AI-based real-time feedback on form (not AR, but visual coaching)

  • AR Runner: Uses AR to create movement challenges on the ground; more gamified cardio

  • FitnessAR: Renders running routes in AR for visualizing terrain, not indoor exercises

  • Fiture / Freeletics: Fitness training with videos, not AR overlays

The points of difference were very slim, and mostly down to the limits of my capabilities, like:

    Differentiation

    • Focusing on gym + home bodyweight exercises (because rigging an animation with correct body posture would be difficult).

    • Uses an AR trainer avatar performing beside you, not just a video (because our assignment is to create an AR Application TT)

    • Has no dependency on AI, making it Unity/Vuforia-feasible (because I have no idea how to implement that)

    Software/Technicalities:

    • Unity
    • Vuforia Ground Plane 
    • Figma
    • Blender
    Again, I put my idea into the ChatGPT XD assistant, and it mentioned some challenges that I might face if I decided to pursue this idea:

    Potential Challenges 

    • Avoiding avatar clipping or incorrect scale
    • Matching animation timing with exercise logic
    • Creating a visually clear and motivational interface

    Mockups (with Canva)

      Fig 18. Workout Buddy mockup (Floor Exercise)
       

      This screen shows the user doing a plank exercise with an AR avatar alongside them, offering verbal encouragement (the speech bubble or through voice support). A real-time countdown timer runs at the top.

      • Users can restart or skip the current routine using clear buttons.
      • The AR scene is grounded using the Vuforia plane for added realism.


      Fig 19. Workout Buddy mockup (Explanation of Equipment)


      This screen displays a 3D animated trainer avatar performing a bench press alongside the user in AR. The avatar offers friendly and motivational tips.

      •  The interface displays exercise stats: set count, rep duration, and weight.
      •  A “Read More” button leads to detailed form breakdowns in a written format with pictures

      Personal Thoughts

While the idea sounds great, I would have preferred to work on something more unique and at least innovative, rather than creating a mimicking type of AR application for the sake of needing to. To add on, the challenges seem pretty plausible, especially the difficulty of having to rig a 3D trainer avatar. Also, the user may feel like they are just watching a 3D YouTube video, as there is not much room for interactivity. This idea may be hard for me overall, especially because I don't go to the gym.. xD



Repeating the process, I also researched what other applications were similar to StyleSwap, and whether or not it was unique in the marketplace. I found a few, which I have listed below.

  • Wanna Kicks / AR Shoe apps: Show specific product overlays on the foot (e.g., Nike, Adidas)

  • Snapchat / Instagram filters: Some cultural costume filters, but they're cartoonish or novelty-based

  • Zara AR app: Shows models walking on products in-store (promo-based, not personal try-on)

  • YouCam Makeup: For makeup try-on, not full outfits

  • Zero10 AR Fashion: Experimental AR clothing try-on (fashion-tech runway level)

I saw a YouTube video of how Zero10's AR Fashion is quite innovative, as the fabric moves along with the user realistically. However, I think the differences between StyleSwap and Zero10 AR Fashion come down to the cultural and identity goals.

      Difference:

      • Focused on identity exploration for mixed-race users

      • Offers cultural education + reflection prompts, not just dress-up

      • Not commercial—designed for self-discovery and empathy

      Again, I put my idea into the ChatGPT XD assistant, and it mentioned some challenges that I might face if I decided to pursue this idea:

      Challenges

      • Getting overlays to align correctly on the body
      • Managing cultural sensitivity (presentation and respect)
      • Front camera AR can be harder (consider mirror target as fallback)

      Mockup (with Canva)

      Fig 20. StyleSwap UI Try On (Chinese Clothing)

      This mockup shows how users can try on traditional cultural attire using AR. The outfit in this example is a traditional Chinese dress, which appears over the user’s real-time camera as a layer of their body. Informative text bubbles explain the symbolism of the design elements (e.g., the red color and jujube fruit patterns).

      • Swipe gesture at the bottom encourages users to explore other cultural styles.
      • Icons on the left allow users to switch countries or zoom for more details.


      Personal Thoughts

      I honestly love this idea much more than RoomSketch, as it is unique, problem-solving, and very empathetic to a certain target audience. However, Mr Razif had some realistic points for me to consider, such as:

      "So, if you want to figure out whether your Blender clothing skills are up to scratch, you need to actually detect the body and accurately tag the clothes onto it — which is a more advanced tech. Since you already know how to use Blender, we can build on that.

      The key is to use AR Foundation, because Vuforia won't work here. With AR Foundation, we can create a mannequin or even a 3D model of yourself, which will serve as a virtual body. The idea is to overlay the clothes onto this model, matching the body's movements and shape.

      Basically, you’d scan or model your body, then use AR to project the clothing onto the model in real-time or for visualization. The challenge is making sure everything aligns perfectly — the clothes should fit naturally on the model."

I considered this and decided that, as much as I would love this idea, it would be tough to pull off, especially since if I wanted variety, I would have to create multiple cultural outfits. This would mean time spent researching culturally accurate outfits, creating layers of clothes for the models, and even scanning the chosen model itself. While I am familiar with Blender, I'm not exactly the best at it, so this may be difficult to pull off in itself. This is unfortunate, as I personally really enjoy exploring cultural fashion.


      Final Decision

After exploring all three concepts (RoomSketch, Workout Buddy, and StyleSwap AR), I’ve decided to proceed with RoomSketch AR as my final idea.
      Overall, I think this is a safe and achievable concept for experimentation, especially with the support of online resources, existing applications, and Mr. Razif’s expertise. I also personally like the idea, having loved interior design since I was younger. By creating an application like this, maybe in the future I'll be able to develop it even further and market it to people who are broke (like me) but love redecorating houses (also me).

      While the Workout Buddy idea was interesting on the surface, I felt it lacked the uniqueness I was looking for. It leaned a bit too closely to existing solutions and seemed to involve unnecessary complexity for something I’m not personally passionate about (the last time I went to the gym was 3 years ago). The challenges like rigging a trainer avatar and creating believable animations to interact and impact the users experience made it feel more like an overcomplicated assignment than something fun.

      For StyleSwap AR, I genuinely loved the idea. It felt the most unique, empathetic, and culturally meaningful, particularly for people with mixed heritage like my best friend and myself. However, after considering Mr. Razif’s advice, I realized the technical execution would be quite demanding. The use of AR Foundation, accurate body tracking, custom 3D clothing models, and cultural research made it too ambitious for this semester, especially since I'm still building confidence in Blender. It’s an idea I would love to revisit one day, but for now, RoomSketch offers the best balance of feasibility, creativity, personal connection, and interactiveness for users.



