Weekly Posts/Reflections

Week 1 

Today was the first day of class, and Mr Razif asked us a (very) basic question that, for some reason, I didn't know the answer to.

What is Experiential Design?

After a few wrong answers, he then explained:
Experiential Design focuses on experiences that deeply engage users across various senses and emotions. Under this umbrella lies Experience Design, which includes more specific areas such as User Experience (UX) Design — which shapes how users interact with products, systems, or services.

For our project, we will be applying Experiential Design through mobile AR (Augmented Reality). This form of experiential design allows users to interact with digital elements layered onto their physical environment, making the experience more immersive and memorable.

He also gave us some keywords and tips, such as:

Minimum Viable Product (MVP):
An MVP is a simple version of the product, such as a wireframe or a prototype, created to test the core functionality. It helps in quickly validating ideas without building a full product.

Vertical Slice:
A vertical slice refers to creating a fully functioning small part of a larger project. Instead of doing a basic mock-up of everything, we pick a small feature and complete it in full detail. For example, in a shopping app, a vertical slice could be animating an item being added to a cart, complete with interactions and feedback.

Exploring Interaction with Visuals:
We learned that simple visual interactions — like animations, color changes, and visual feedback — can dramatically improve the user’s experience. The goal is to create a maximum, memorable impact through minimal, focused interactions.

Facing Difficulties and Moving Forward:
After establishing an idea, the next step is to identify potential challenges early. By doing so, we can adjust, improve, and ensure the project is finished within the timeframe (not like my games development disaster...)



AR in Education: Components for Learning

In the context of educational AR experiences, we explored three major components:

  • Demonstration: Showing a concept or process visually through AR.

  • Practical: Allowing users to actively engage with or manipulate digital elements.

  • Gauging Effectiveness: Building features that check whether users have understood or learned from the experience.

Types of AR interactions we can create include:

  • Scan and Show: Using a marker (like a printed image) that triggers an AR model or animation.

  • Space Creation: Adding virtual elements into real-world environments, both indoors and outdoors, to create new spaces or scenes.

Reflection

At the beginning of the semester, I felt quite nervous about starting this course. Experiential Design sounds exciting, but also a bit intimidating because it combines creativity with technical challenges (also because all of the modules Mr Razif teaches are so difficult TT (not ur fault sir i promise)). However, I was relieved to know that we could form groups for our ongoing project — I’m teaming up with Naura, who has always been my emotional support hehe. 

We have started brainstorming together to develop strong ideas. Our goal is for each of us to come up with two ideas over the weekend. Hopefully, by combining our efforts, we can create an amazing project concept that we're both passionate about.



Week 2 

Fig 1. A straightforward diagram showing different fields of design.

In this week’s Experiential Design class, we explored the subtle but significant differences between customer experience (CX) and user experience (UX). While CX focuses on the broader impression a person has of a brand—often influenced by their purchase or service experience—UX dives into the actual interaction with the product or service.

Mr Razif then reminded us how essential it is for companies to align their brand identity with the way they present and design their digital or physical experiences.


Fig 2. A Venn diagram showing the comparisons between UX and XD design.

We also revisited important foundations such as user personas (which we learned from Mr Yusri), which group users by common traits and behaviors, and information architecture, the hierarchy or structure of information in an interface, which is crucial to making the experiences we design intuitive.

A standout part of today’s session was learning about empathy maps—a tool that helps teams gain deeper insight into what users say, think, feel, and do. This tool supports decision-making by aligning team perspectives around real user experiences.

Fig 3. Our class activity User Journey Map.

Journey Mapping Activity: Taylor’s Campus Edition

Our class activity involved creating a user journey map of a typical day on Taylor’s campus. This included the journey to class, finding food or toilets, and post-class relaxation spots like the library or lakeside. I realized how many pain points we subconsciously tolerate—like long elevator waits or crowded student centers. Yet, there are also thoughtful gain points, like AR face recognition or the lakeside’s calming presence.

One big takeaway was how customizable journey maps can be. There's no fixed format — they can blend with storyboards or be enhanced with emotion icons or mood indicators, depending on what we want to communicate. However, in this case, Mr Razif did mention that if we used emojis or visuals to represent the gain points, pain points, and solutions, it'd be more appealing.


Trending Experiences Ideas

I also discussed two of my current ideas with Naura and Mr Razif. One was an AR Home Designer: an affordable AR home interior planning tool that lets users scan and recreate their rooms using LiDAR (or standard depth scanning) and then rearrange furniture virtually using physical QR markers or chip tags. Unfortunately, this idea was already busted, because Mr Razif said there's already an existing AR app called IKEA Place. (Clearly I didn't do enough research before, and I guess it's hard to have an original thought in this world...)

Another setback would be choosing what to focus on in the app, such as whether I wanted to make it about room scanning or furniture placement; I would need to focus on one and not the other, due to time constraints and the learning resources needed. Lastly, even if I did create something like IKEA Place, what would make my application different from it?

Fig 4. IKEA Place in simulation.

My second idea was a Gym Workout Buddy, an AR experience where a virtual trainer avatar demonstrates gym equipment usage or bodyweight exercises beside the user in real time. The avatar could be selected based on gender or fitness style, and would overlay correct posture/form in the physical space. I thought of this idea because I had visited the gym on my own before, and not only was it full of testosterone (guys.. everywhere..), the equipment also looked confusing. Even when watching YouTube videos, it was difficult to translate the same understanding, due to only seeing one perspective on the screen.

When I told Naura about this, she consulted our ChatGPT TDS assistant, and it mentioned it would be difficult to portray correct posture due to tedious rigging and other technical issues with the animation. Because of this, I decided to change my idea: the mascot would demonstrate how to use the machine, but correct posture/form would not be included. Instead, the workout buddy would be able to do floor exercises or anything with simple animation (planks, jumping rope) that still serves its purpose as a workout buddy.

Mr Razif seemed quite okay with the idea so far, and I went back home to continue researching.

Personal Reflection

I found this activity really enjoyable and insightful. It made me more aware of the small frustrations I’ve gotten used to on campus and how they could be solved. Hearing my groupmates’ different perspectives also reminded me that even within a shared environment, user experience varies greatly. Everyone’s interaction with the same space can differ based on their priorities, habits, or even mobility.

This week's lecture also helped me realize how experience design is about empathy and systems thinking. It's not just about problem-solving; it's about understanding how people move through environments and how subtle improvements can make a big difference.

Also, it's very difficult to have an original idea in 2025. 

Here is the future user journey map that we did:

Fig 5. Our future user journey map for Taylors students.


Week 3

This week, we were introduced to Extended Reality (XR), which includes Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). These technologies all aim to enhance user experience, but each does so in unique ways.

Fig 6. The differences between AR, VR, and MR.

  • Augmented Reality (AR)
    Example: IKEA Place
    AR overlays virtual content onto the real world. It enhances physical space without replacing it — for example, placing a digital sofa in your living room through your phone.

  • Mixed Reality (MR)
    Example: Smart Home Systems
    MR allows users to interact with and manipulate virtual objects in real time, while also respecting real-world physics. It’s like AR but with deeper interaction — virtual switches that respond to touch or voice.

  • Virtual Reality (VR)
    Example: Minecraft VR
    VR replaces the real world entirely with a digital environment. It offers full immersion and a strong sense of “presence,” where users feel like they’re truly somewhere else.


What AR can do, MR can do too. But AR can only sometimes do what MR can do.

Fig 7. A screenshot of the students' artwork in class.

Mr Razif let us try AR projects done by previous students. This one in particular was by the Fashion Design students, who scanned Barbie dolls and made clothes for them, allowing the outfits to be displayed much more easily in AR. This was demonstrated to us as a Mobile AR experience, and it felt personal and customized, as different parts of the model's clothes were revealed as you moved your phone.

Types of AR Experiences:

  • Mobile AR (Personal)
    The most accessible AR. It involves interacting with digital content through smartphones, e.g., changing object colors or animations as you move the phone. This creates a two-layer interaction between user and environment.

  • Projection AR (Shared)
    AR is projected into the physical world — think museum installations or digital graffiti that several people can see and interact with together.

  • Head Mounted Displays (HMD)
    These include smart glasses or VR headsets that integrate SFX, VFX, and touch/voice activation to create more immersive AR experiences.

There are two main approaches to activating AR content:

  • Marker-Based AR
    Requires a visual trigger (like an image or QR code). Common in print or packaging — scan the marker to see content appear.

  • Marker-less AR
    Uses sensors, GPS, or LiDAR to anchor AR content in the environment. This method doesn’t require physical prompts to activate content.

We also learned that designing effective AR isn't just about visuals — it blends UX design, usability, and interaction design. We also touched on:

  • Design Thinking: Keeping the user at the center of the problem-solving process.

  • Human-Computer Interaction (HCI): Understanding how users interact with technology to design more intuitive experiences.

Class Activity (I forgot what this one was specifically called)

After our lecture, we had a class activity where we had to choose a place and then give it an AR solution. (Mr Razif complained about everyone choosing the gym, so we couldn't reuse my workout buddy idea T_T) Crystal had the idea of using the Hair Salon as our designated place, and surprisingly we could all relate to the Hair Salon giving us problems.

Even when customers bring reference photos to a salon, they often struggle to accurately visualize how a chosen hairstyle or hair color would look on their own face and hair texture. Also, hairstylists may have their own vision when it comes to your haircut, or they may simply forget details. This can result in disappointment when the outcome doesn't align with the customer's expectations.

We decided to propose an AR-powered hair salon mirror that uses facial tracking and 3D modeling to:

  • Let customers preview hairstyles and colors in real time, accurately mapped to their face shape and hair type.

  • Show live treatment progress updates — like how long until the next wash, or a visual timeline of each stage.

  • Include “style limits” overlays, where clients can draw or set boundaries (e.g., desired hair length), and stylists can reference them throughout the session.

This idea combines marker-less AR and real-time visualization, creating a more transparent and reassuring salon experience.


Fig 8. My chosen image to put into the database. 2D, and has contrast.

Unity & Vuforia

Lastly, today we started using Unity and adding Vuforia inside as our package (AAAAAAAAA I can feel my future self stressing right now.)

Firstly, we had to download Vuforia and add it into Unity. After that, Mr Razif explained some of the features of the AR Camera in the Vuforia package, which appeared when we added it to our scene. By using the serial key we had generated and importing our database, we were able to get the camera to recognize our chosen image. In turn, we got a cube.

Fig 9. Downloading Vuforia and uploading the image into the database (I got 5 stars!)


Fig 10. Yay my cube worked!

Mr Razif then told us to place a 3D model instead of the cube so that it feels like a more realistic experience (do it before you go home!). So I found a free 3D fish on the internet and aligned it with my fish image. This was an easier process than I expected, but I'm still going to be cautious with Unity because it traumatized me a bit too well in Sem 3...

Fig 11. My free 3D fish working :)

Personal Reflection

Mr Razif's lectures are honestly quite enjoyable. There's a nice mix of theory, visuals, and practical demos that makes the long class hours digestible, and he has a way of explaining things that makes sense.
The class activity was genuinely fun — it reminded me how good design starts with empathy and shared experience. Starting Unity again was scary (no thanks to my past trauma), but the small win of seeing my 3D fish appear has motivated me more than I expected.
I hope I can carry this momentum into the real project. If my fish can swim, maybe I can too.

Week 4

Fig 12. Adding Buttons in Unity

This week we had no lecture, and focused solely on Unity and Vuforia. Today was a continuation of our activity from last week. 

We learnt how to incorporate buttons without coding for our future UI ideas. Here, we created START and STOP buttons using Canvas and TextMeshPro, and also resized our screen to an iOS dimension (I chose iPhone 11 Pro). Beforehand, we also created an animation of our model using the Animator attached to the item and the animation timeline.

By creating a list of OnClick actions and dragging the fish model into it, we utilized GameObject.SetActive, which takes a true or false value. By ticking or unticking the box, you set the state the button puts the object in; pressing the button during testing switches it to the opposite. Pressing 'STOP' will stop the animation and pressing 'START' will start it again.

Fig 13. Adding Buttons and Controlled Animations
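
Out of curiosity, here's roughly what that button wiring would look like as a script instead of the Inspector checkboxes. This is a sketch only; the FishAnimationToggle name and the Animator reference are my assumptions, not what we actually set up in class:

    using UnityEngine;

    // Sketch only: the same START/STOP behaviour as the Inspector's
    // OnClick checkbox, but driven from a script instead.
    // "fishAnimator" is an assumed reference dragged in via the Inspector.
    public class FishAnimationToggle : MonoBehaviour
    {
        public Animator fishAnimator;

        // Hook this to the START button's OnClick event.
        public void StartAnimation()
        {
            fishAnimator.enabled = true;
        }

        // Hook this to the STOP button's OnClick event.
        public void StopAnimation()
        {
            fishAnimator.enabled = false;
        }
    }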

We then applied this same tactic to include another animation, this time with a sphere running, by adding another state to the list that includes the sphere as well.

Fig 14. Changing colors when pressed (impacts user experience)

Mr Razif also explored the options for customizing our buttons, such as changing the color when pressed or hovered, changing the font, etc.

Fig 15. Video of my Fish animation controlled with the Buttons

Mr Razif then explained to us that if we put the START and STOP buttons in the same center alignment, it would look like they were the same button. This was a 'hack' to make it look like our UI was coded and was (I guess) cooler? It would also impact the user experience in a more intuitive way than just adding the pressed colors.

I also had a little bit of a problem when my sphere animation wouldn't play (as you can see in the video). I suspected it was because I had previously made an animation called Sphere_Bounce, changed my mind, deleted it, and made Sphere_Run instead. Because of my previous Unity experience in Games Development, I went to the Animator and found that the system had tagged Sphere_Bounce as the default animation, despite me having already deleted the file in the Sources folder. So I deleted it in the Animator, set Sphere_Run as the default, and my problem was solved.

Fig 16. The Final Product (starts at 0:13 seconds)

Lastly, Mr Razif challenged us to create a separate START and STOP button for the sphere. (Luckily my animation was working now.) I duplicated the previous START and STOP buttons, tagged only the sphere and removed the fish model, and vice versa. (I also improved the fish animation, just in case it wasn't obvious enough.)

Personal Reflection

Today's class was very fun, and it was intuitive to see how we can incorporate UI into our AR application without coding. As someone who isn't very good at coding, I think this saves time and stress when it comes to my actual AR project in the future. Utilising all of Unity's features, like the color-changing pressed button, changing the font, alignment in the canvas, etc., promotes the 'work smarter, not harder' ideology that I really resonate with (xD). I can foresee myself using this in my future AR project, so that there is less for me to code and also get confused with.

While there was no theory today, I think that this practical hands-on lesson was very impactful, and I appreciate Mr Razif's understanding in ending the class early (due to my situation :'D   ).

Week 5

This week Mr Razif introduced putting videos into our canvas. He talked about some of the fundamentals of setting up before we start, like how in Canvas we have to make sure to choose Overlay, since it should always be on top of cameras, models, etc., and how in World Space we can rescale our canvas to the phone dimensions that we want.

Fig 17. Making sure our canvas is on top.


Fig 18. Making the video pause and play within the same button



Fig 19. Making a KABOOM VFX play within 3 clicks of playing and pausing
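
Since most of my Week 5 notes ended up as screenshots, here's a rough sketch from memory of what the pause/play button and the 3-click KABOOM boil down to. The videoPlayer and kaboomVfx names are my own assumptions; ours were wired up through the Inspector:

    using UnityEngine;
    using UnityEngine.Video;

    // Sketch of Figs 18-19: one button toggles the video, and a
    // "KABOOM" VFX fires on the 3rd click. Both references are
    // assumed to be assigned in the Inspector.
    public class VideoToggle : MonoBehaviour
    {
        public VideoPlayer videoPlayer;
        public ParticleSystem kaboomVfx;

        private int clickCount = 0;

        // Hook this to the button's OnClick event.
        public void TogglePlayPause()
        {
            if (videoPlayer.isPlaying)
                videoPlayer.Pause();
            else
                videoPlayer.Play();

            clickCount++;
            if (clickCount == 3)
                kaboomVfx.Play();
        }
    }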


Week 6

We imported PNG buttons into Unity (since for our actual project we can use PNG buttons, or just the background with text added on top as the button) and changed them to Sprite (2D and UI) with Sprite Mode set to Single.

We then linked the buttons to different scenes by registering the scenes under Build Profiles and coding in VS using UnityEngine.SceneManagement.
Fig 20. Making sure our Scene List is numerically correct as it will determine the start of our application pages.


He showed us two methods of accessing the scene manager: one by coding it like this (top), and another method at the bottom where we are able to write the scene name we want into a text box.

Fig 21. Two different codes that access SceneManager


Fig 22. Additional step that goes hand in hand with the 2nd code option
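
For my own future reference, here's a reconstruction of roughly what those two methods look like (the method and variable names are my own guesses, not necessarily what we wrote in class):

    using UnityEngine;
    using UnityEngine.SceneManagement;

    public class SceneChanger : MonoBehaviour
    {
        // Method 1 (top): the scene name is hard-coded in the script.
        public void GoToMenu()
        {
            SceneManager.LoadScene("Menu");
        }

        // Method 2 (bottom): a public field, so the scene name can be
        // typed into a text box in the Inspector (the step in Fig 22).
        public string sceneName;

        public void GoToScene()
        {
            SceneManager.LoadScene(sceneName);
        }
    }

Either way, the scene has to be registered in the Scene List from Fig 20 first, or LoadScene won't be able to find it.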

Initially I had a problem where the public GameObject was not appearing back in my Unity Inspector, so I had to delete my fishControl script and make a new one again (this time it worked!).
Fig 23. I had a problem.



To end the class, Mr Razif asked us to make all scenes accessible, and for the panel to have an X button so it can open and close.

It took me a while, but I used all my braincells and linked it back to the info fish panel: on the event that the fish panel's X button is clicked, the panel is SetActive(false).

Fig 24. Full flow of Scenes working and demonstrating popup that happens upon On Mouse Down interaction + adding an X button to make the pop up disappear.
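
Written out, I think the popup flow amounts to something like this sketch (infoFishPanel is my assumed name for the panel reference, and OnMouseDown only fires if the fish has a Collider):

    using UnityEngine;

    // Sketch of the popup flow: tapping the fish (which needs a
    // Collider for OnMouseDown to fire) opens the info panel, and
    // the panel's X button closes it again with SetActive(false).
    public class FishPopup : MonoBehaviour
    {
        public GameObject infoFishPanel;

        // Called by Unity when this object's collider is clicked/tapped.
        private void OnMouseDown()
        {
            infoFishPanel.SetActive(true);
        }

        // Hook this to the X button's OnClick event.
        public void ClosePanel()
        {
            infoFishPanel.SetActive(false);
        }
    }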

Week 7

We spent the entire class trying to set up our phones with our borrowed school MacBooks (because Naura and I both use iOS, which meant that our computer also had to be macOS to build the profile onto the phone and test the AR).

I almost skipped the entire thing because I forgot my Apple account password and it locked down my phone, which meant I couldn't connect to the Mac. Thankfully, I managed to remember it (after 1 hour) and it worked!

Also, I HATE MAC BECAUSE THE TOUCHPAD IS SO WEIRD AND DIFFERENT FROM WINDOWS, AND MY MOUSE IS SPOILT AND CAN'T CONNECT TO THE MAC because it's not an Apple mouse and has no Bluetooth.








Week 8

This week was slightly better and not so hectic, as I managed to do some things on my own WITHOUT asking Mr Razif for help TT.
We tried to incorporate the Ground Plane and UI together; a rough sketch of the tap-to-spawn idea is below the figure.


Fig 25. Testing the Ground Plane function on iOS by tapping to add red cubes.
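
For the record, the actual ground detection came from Vuforia's Ground Plane setup, which I won't pretend to remember exactly. As a stand-in, here's the general tap-to-spawn idea with a plain physics raycast faking the hit test; every name here is my own assumption:

    using UnityEngine;

    // Stand-in sketch for the "tap to add red cubes" test. In class the
    // ground detection came from Vuforia's Ground Plane; here a plain
    // physics raycast against any ground collider fakes that part.
    public class CubeSpawner : MonoBehaviour
    {
        void Update()
        {
            // GetMouseButtonDown(0) also fires on a touchscreen tap.
            if (Input.GetMouseButtonDown(0))
            {
                Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                if (Physics.Raycast(ray, out RaycastHit hit))
                {
                    GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
                    cube.transform.position = hit.point;
                    cube.transform.localScale = Vector3.one * 0.1f;
                    cube.GetComponent<Renderer>().material.color = Color.red;
                }
            }
        }
    }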







