Final Project: ARchify Application

 

DELIVERABLES:

Google Drive Folder + Walkthrough Video

Naura's Blog


WALKTHROUGH

TASK DELEGATION:

NATANIA:
Handling AR + AR UI

NAURA:
Unity App UI + AR UI

Most of this development happened well before we managed to debug our AR Scene, which kept showing the constant error "development console exception in callback failed to create ImageTargetObserver: DATABASE_LOAD_ERROR" whenever we built to the phone, so most screenshots and testing were done on the Windows platform.

This error meant Vuforia was having trouble loading its database, and any attempt to even scan our ImageTarget did not work. (Surprisingly, almost no one else on the internet seems to have this problem, except for a few who never got solid answers as to why it happens, and the only one who managed to solve his issue was this guy - who didn't help me...)

Move, Rotate, Scale & Reset

Considering our absolute fail in the previous Prototype stage, Naura and I were pretty stumped on how to move forward. Deciding that we would just stick to horizontal planes, we switched back to Vuforia and implemented Image Tracking to determine where our model would spawn (as per the proposal idea).

With the help of Renee, we were able to apply the same method she used in her AR app to Move, Rotate, and Scale her model.

Fig 1. A screenshot of part of the Draggable code
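For context, a stripped-down sketch of what a Draggable-style script does (move by dragging, rotate with a right-click drag, scale with the scroll wheel) might look like this. This is just my reconstruction of the idea, not Renee's exact code from Fig 1, and the names and values are placeholders:

```csharp
using UnityEngine;

// A minimal sketch of a Draggable-style script: move by dragging,
// rotate with a right-click drag, scale with the scroll wheel.
// This is a reconstruction of the idea, not the exact code in Fig 1.
public class Draggable : MonoBehaviour
{
    [SerializeField] private float rotateSpeed = 5f;   // degrees per unit of mouse movement
    [SerializeField] private float scaleSpeed = 0.5f;  // scale change per scroll step

    // OnMouseDrag only fires if the model has a Collider (our Box Collider).
    void OnMouseDrag()
    {
        // Project the pointer onto a horizontal plane at the model's height.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        Plane ground = new Plane(Vector3.up, transform.position);
        if (ground.Raycast(ray, out float distance))
            transform.position = ray.GetPoint(distance);
    }

    void Update()
    {
        // Rotate: hold the right mouse button and drag horizontally.
        if (Input.GetMouseButton(1))
            transform.Rotate(Vector3.up, -Input.GetAxis("Mouse X") * rotateSpeed);

        // Scale: scroll wheel grows or shrinks the model.
        float scroll = Input.GetAxis("Mouse ScrollWheel");
        if (scroll != 0f)
            transform.localScale *= 1f + scroll * scaleSpeed;
    }
}
```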

After implementing this code, I also added a feature called TargetTracker, where the UI only appears IF the image is tracked. This way, the user knows they are supposed to scan the room and set up the image target first before accessing the models, as instructed in our Tutorial Page.
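A rough sketch of how TargetTracker works, using Vuforia's ObserverBehaviour status events (the field names here are my assumptions, not our exact code):

```csharp
using UnityEngine;
using Vuforia;

// Sketch of the TargetTracker idea: the AR UI is only visible while
// the Image Target is actually being tracked.
public class TargetTracker : MonoBehaviour
{
    [SerializeField] private ObserverBehaviour imageTarget; // the Image Target
    [SerializeField] private GameObject arUI;               // catalog + control buttons

    void OnEnable()
    {
        imageTarget.OnTargetStatusChanged += OnStatusChanged;
        arUI.SetActive(false); // hidden until the image is found
    }

    void OnDisable()
    {
        imageTarget.OnTargetStatusChanged -= OnStatusChanged;
    }

    private void OnStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
    {
        bool tracked = status.Status == Status.TRACKED ||
                       status.Status == Status.EXTENDED_TRACKED;
        arUI.SetActive(tracked); // the UI only appears IF the image is tracked
    }
}
```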

To make the feature more interactive for users, I implemented a ModelSpawner script on an empty GameObject named ModelManager. Another empty GameObject named SpawnPoint, at coordinates 0,0,0 (the center of the ImageTarget), lets each model spawn exactly where we want it. This solved our problem from the previous MVP prototype, where models would spawn in random places and at random, inconsistent scales.
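The core of the ModelSpawner idea is tiny: every catalog button hands over its prefab, and the script instantiates it at SpawnPoint. Here's a sketch (again, my reconstruction rather than the exact code):

```csharp
using UnityEngine;

// Sketch of ModelSpawner, sitting on the ModelManager GameObject.
// SpawnPoint is the empty GameObject at 0,0,0 (the center of the ImageTarget).
public class ModelSpawner : MonoBehaviour
{
    [SerializeField] private Transform spawnPoint;

    // Hooked up to each catalog button's On Click event with its own prefab.
    public void Spawn(GameObject modelPrefab)
    {
        // Parenting to the spawn point keeps position and scale consistent,
        // which is what fixed the random placement from the MVP prototype.
        Instantiate(modelPrefab, spawnPoint.position, spawnPoint.rotation, spawnPoint);
    }
}
```

Each catalog button then just calls Spawn with its own prefab through On Click (presumably what the event details in Fig 5 are doing).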

Next, I created a ModelManager script that handles the models on screen once they are spawned. I linked it to the Reset button's On Click event, so that if the user wants to, they can reset the room (clearing all the models using the Destroy(model) function).
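The Reset itself can be as simple as this sketch. I'm assuming here that it finds the models via the ARModel tag mentioned further down:

```csharp
using UnityEngine;

// Sketch of the ModelManager Reset: wired to the Reset button's On Click,
// it destroys every spawned model in the scene.
public class ModelManager : MonoBehaviour
{
    public void ResetRoom()
    {
        foreach (GameObject model in GameObject.FindGameObjectsWithTag("ARModel"))
            Destroy(model);
    }
}
```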

Fig 2. AR Tutorial Page (on startup but other UI is hidden)

AR Catalog UI

We wanted to follow the Figma UI we created in the previous assignment, so I built the Tutorial Page (using a Panel) and wrote two scripts: CatalogToggle and CatalogCategoryManager. For CatalogToggle, clicking the slider button toggles the panel and expands it to the height we wanted, while Update() eases it back to the untoggled version.
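In sketch form, CatalogToggle looks something like this (the heights and speed are made-up values, not our real ones):

```csharp
using UnityEngine;

// Sketch of CatalogToggle: the slider button flips a flag, and Update()
// eases the panel's height toward the expanded or collapsed value.
public class CatalogToggle : MonoBehaviour
{
    [SerializeField] private RectTransform panel;
    [SerializeField] private float expandedHeight = 600f;
    [SerializeField] private float collapsedHeight = 80f;
    [SerializeField] private float speed = 8f;

    private bool expanded;

    // Hooked to the slider button's On Click.
    public void Toggle() => expanded = !expanded;

    void Update()
    {
        float target = expanded ? expandedHeight : collapsedHeight;
        Vector2 size = panel.sizeDelta;
        size.y = Mathf.Lerp(size.y, target, speed * Time.deltaTime);
        panel.sizeDelta = size;
    }
}
```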

Fig 3. Catalog Manager Inspector

Next was the content of each category. While replicating the UI, I wondered how to have subpages within the Catalog. In the end I created CatalogCategoryManager, where I decided to index each button's content panel containing the element catalogues. I did this by creating an empty GameObject called CatalogManager with the index system and the ability to assign a GameObject to each number. This way, the script knows which panel to SetActive.

Fig 4. Part of the code (most crucial!!)

Adding this makes each subpage easy to call, and when one is called the other pages are "hidden"/unchecked in visibility in the hierarchy (with, of course, the first one, PartitionContent, being the exception, so that when the user toggles the category it's the first page they see).
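The index system boils down to something like this sketch (a simplified version of the kind of code in Fig 4, not the exact script):

```csharp
using UnityEngine;

// Sketch of CatalogCategoryManager: each category button passes its index,
// and only that content panel stays active while the rest are hidden.
// PartitionContent sits at index 0, so it's the default page.
public class CatalogCategoryManager : MonoBehaviour
{
    [SerializeField] private GameObject[] contentPanels; // one per category, assigned in the Inspector

    void OnEnable()
    {
        ShowCategory(0); // show PartitionContent first
    }

    // Hooked to each category button's On Click with its index.
    public void ShowCategory(int index)
    {
        for (int i = 0; i < contentPanels.Length; i++)
            contentPanels[i].SetActive(i == index);
    }
}
```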

Fig 5. Inside an Element's Inspector (shows OnClick event details)

Models

Next, we hooked up the models for each button in each category's content. This meant each button had to call the model Prefab we made for it, and the model had to be at the right scale and in the right place. Previously, in earlier attempts for our MVP Proposal, we had trouble getting the scale of the models right as well as the spawning point (thankfully solved by the SpawnPoint I mentioned earlier). Naura also had some trouble importing the models, because they would either be a) too small to see, b) slightly transparent or just missing, or c) not in the texture she wanted
(you can read more about this on her blog)

Even before that, we had a lot of trial and error importing models and finding out that some worked and some didn't. When I realized we had to edit the already-existing Model Prefabs (but couldn't, because we had just stuck the original model into a folder and made it a Prefab, which was dumb), we had to reimport all of them again (and Naura managed to get the textures in too! <3)

But after many tries, she managed to work it out, and I helped her turn them into model prefabs by placing each one individually in the hierarchy, setting all the transform values to 0, making sure it scaled to fit the width of the ImageTarget (exactly), adding the Draggable code, an ARModel tag (so it can be called), and a Box Collider for touch points and mouse gestures.

Individual Control

Next, I modified the Draggable code, since previously you could only add one model each time you clicked a button. It took me a day or two to figure out, but eventually I modified the script to change how the models were instantiated. Originally, the code replaced the previous model every time a button was clicked, which meant only one model could exist at a time.
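The gist of the change, as a sketch (my reconstruction, not the exact before/after):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Before, spawning destroyed the previous model so only one could exist.
// After, every click adds a new instance and remembers it in a list.
public class ModelSpawner : MonoBehaviour
{
    [SerializeField] private Transform spawnPoint;
    private readonly List<GameObject> spawnedModels = new List<GameObject>();

    public void Spawn(GameObject modelPrefab)
    {
        // BEFORE (one model at a time):
        //   if (currentModel != null) Destroy(currentModel);
        //   currentModel = Instantiate(modelPrefab, ...);

        // AFTER (models accumulate, nothing is replaced):
        GameObject model = Instantiate(modelPrefab, spawnPoint.position, spawnPoint.rotation, spawnPoint);
        spawnedModels.Add(model);
    }
}
```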

Next, I wanted individual control over each model. While I had multiple models in one scene, they were somehow all controlled by the same gestures at once.

Fig 6. Evidence of a Confused Natania

That's when I realized that before this, I had just been clicking anywhere on the screen to control the models. Now that I had multiple, that obviously wouldn't work anymore. By making each model detect input only when the model itself is clicked (via a Raycast against its Box Collider), I was able to control them individually. Now it works!! So that's great (but I don't think it ended up showing in our final.. for some reason TT)
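In sketch form, the fix looks like this: on every press, raycast from the camera through the pointer, and only let the model respond if the ray hits its own collider (the names here are placeholders, not our exact script):

```csharp
using UnityEngine;

// Sketch of the individual-control fix: gestures only apply to the model
// whose Box Collider the click/touch ray actually hits.
public class IndividualControl : MonoBehaviour
{
    private bool selected;

    void Update()
    {
        // On press, raycast from the camera through the pointer position.
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            selected = Physics.Raycast(ray, out RaycastHit hit) &&
                       hit.collider.gameObject == gameObject;
        }
        if (Input.GetMouseButtonUp(0))
            selected = false;

        if (!selected) return;

        // ...apply the Draggable move/rotate/scale gestures here, so each
        // model only reacts while it is the one that was clicked.
    }
}
```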


Fig 7. Playing around with the AR (making a wall of Partitions spawn with Individual Control - STILL on my laptop, because we were still figuring out the build error)


Fig 8. Evidence of our wonderful teamwork

Collaboration + Solution

We used Unity Version Control to share and update files while working on the same Unity project, and I think this really helped us a lot. In the beginning stages of our MVP Prototype, Naura and I spent like, one whole day trying to get GitHub collaboration to work or find any other alternative, because our file was just TOO big (we hadn't even started the AR at that point). Thankfully, Mr Razif introduced us to Unity's Version Control, which made everything a lot easier; even when there were conflicting files or others that needed to be updated, just pressing "Update workspace" relieved the both of us of a lot of stress (because honestly we were crashing out at this point).

Mr Razif asked us to tackle our Image Database Error by trying everything in a new file, which we REALLY didn't want to do because we were scared of losing our progress or having to restart. But honestly, I had been trying to fix this error for over a month, and I think the history from our previous attempts may have just corrupted the entire file altogether. Luckily, Naura found a method on YouTube where we could import all the dependencies and still retain our current progress, code, and links without redoing anything (yay!).


Reflection

We did get the occasional error where, on Naura's side, a file would be missing or something would be there that wasn't supposed to be. It even prevented her from opening the Unity file at all, but luckily I could delete it from my side using Version Control, and she could open the file again. At times like these, it's crucial to have a reliable and close teammate, because we show up for each other when the other can't, and we are each other's moral support (she is more of mine, to be honest HAHAHAHA). I think without her I would be struggling so much overall, and she really is the key ingredient that makes our app so high quality, with all its interactive features, visuals, and models (SHE DID ALL 24 MODELS?? Honestly, her Blender skills are so much better than mine).

Through this project, I learned a lot about the real struggles that developers and coders go through: how even the tiniest difference in how you build your app can determine whether it runs or crashes. For example, setting Unity's Active Input Handling to "Both" worked for me during testing, even though in previous classes it was recommended to choose only one. It just goes to show that there is no one-size-fits-all solution when it comes to development.

I also realized how important it is to keep up with the most recent versions of software and plugins. If even one version is out of sync (like the Vuforia or Unity packages), things might not work at all, or worse, crash your project halfway through.

Most importantly, I learned that there are a thousand ways to reach the same goal, and just because one method didn’t work for someone else, doesn’t mean it won’t work for you. You just have to keep trying, keep tweaking, and be ready to problem-solve your way through it.

In fact, we tried around 4–5 different approaches before we even got our final app working:

  1. AR Foundation

  2. AR Magic Bar Lite (from our MVP proposal)

  3. Lean Touch Toolkit

  4. Manual screen-based prefab placement

  5. And finally, Vuforia!!! (the one that actually allowed us to do everything we wanted)

Here's a rundown of everything we managed to achieve in the AR Scene:

  • Marker-based model placement using Vuforia Image Targets.
  • Multiple models can be instantiated from the UI catalog.
  • Each model is interactable through individual control scripts.
  • Drag (move) functionality using mouse or touch.
  • Rotate (mouse right-click or pinch gesture).
  • Scale (mouse scroll or pinch zoom).
  • Tap-to-delete function with raycast verification.
  • Reset button to remove all placed models at once.
  • UI catalog with expandable/collapsible animation.
  • Category-based filtering for catalog panels.
  • Sound effects integrated for toggle and reset actions.
  • Clean and modular script organization for maintainability.

Honestly, this journey has been chaotic, tiring, and technical, but also one of the most satisfying and collaborative experiences I’ve had so far, because of the amazing partner I had and our shared vision for the project. The fact that our AR app works, runs smoothly, and has such strong interaction features is something we’re genuinely proud of.
