How improving agent tracking slashed onboarding times by 83%
Scalelabs - Aspire (client) | 5-minute read

Company Overview: End to End Solutions
Founded in 2021, Scalelabs builds end-to-end solutions for partners across a range of industries, serving large clients such as Aspire Insurance in the US. Projects typically span several months to a year and cover the full product development process, from market analysis and design through development.
The objective of this project was to improve a solution previously developed by a consultancy to streamline the release process for Aspire's insurance agents. In the US, the insurance industry operates similarly to a multi-level marketing (MLM) structure: when an agent was released incorrectly, their commissions flowed to their previous agency rather than to Aspire, even though the agents themselves always received the same commission amounts.
The information in this case study is my own and does not necessarily reflect the views of Scalelabs or Aspire.
My design goal was to provide clear visibility of system status and deliver actionable information to admins, reducing agent release times and the frequency of errors.
As Senior Product Designer, I achieved this through multiple cross-functional workshops, where we gathered expertise from both Aspire admins and our in-house developers to refine the solution.
- 19 days: reduction in release times
- 6.29%: increase in MRR for commissions
- 127%: increase in released agents per week
The Inherited Design Challenge
The initial product from the previous design agency provided some visibility into agent status but lacked key information. It focused only on registration progress, ignoring critical blockers outside the form that were required for release. Views also failed to explain why a carrier was blocked, showing only that it hadn't been submitted.

Usability testing showed this solution was actually less effective than the traditional paper-based method.
Given our limited background knowledge and reference material, I decided to start by validating assumptions. This approach enabled us to progressively build a solid knowledge base that would inform the design of the UI and workflows.
After each assumption was validated or invalidated, I identified the next steps to move the platform closer to our end goal of enhancing system status visibility and reducing errors.

If you're not familiar with assumption testing, it's an incremental process of confirming (or disproving) beliefs about a product, workflow, or users, one small test at a time.
Initial Designs: A Notification System
Our first assumption test revealed that Aspire admins lacked sufficient information to unblock agents in the release process. They struggled both to identify the specific blocker and to determine, from their records, how the agent had become blocked.
These insights led us to our initial design: a notification system that was direct and actionable, providing admins with clear guidance on how to follow up and resolve the issue.

To test, we pushed "fake" notifications to Aspire admins directly on the platform. While these notifications weren’t automatically generated, they included real agent data and actionable tasks for the admins.
Also, yes, I accidentally blurred my own face in the only picture I had (as an image, not a frame).
Unfortunately, even with notifications, we noticed blocked agents slipping through the cracks, which inspired the next assumption tests.
Eureka: Immediate Action & Mental Model Disconnect
Our next assumption test revealed that Aspire admins preferred to address blockers as soon as they arose. For example, if they noticed a required document was missing for an agent, they would call or email them immediately.
So, we asked ourselves: why wasn't the same true for notifications on the digital platform?
After conducting additional continuous discovery interviews, we discovered that Aspire admins experienced a mental disconnect when comparing the digital process to the traditional paper process. The platform notifications lacked a clear call to action, which meant the information often went in one ear and out the other for admins.
Based on the previous agency's work, the entire team and I mistakenly assumed that the process couldn't be fully digitized, only tracked digitally.
Thankfully, this wasn’t the case. It turned out that only some aspects of the process couldn't be digitized, so our next steps were to map out all possible processes and prioritize them based on frequency and severity of errors.

This workshop was conducted with both internal and external teams. The internal team consisted mostly of developers, who checked the feasibility of digitizing specific processes.
From there, my next steps were to compile and synthesize our learnings to revitalize the Aspire admin platform.
Design Principles
- Actionable and descriptive information: admins should know exactly what to do to unblock an agent
- Visibility of agent status at all stages of the release process
- Tracking for offline (non-digitized) tasks to avoid duplicate work
The Initial "Final" Solution
Based on the previously stated design principles and goals, this was our solution.

Admins could instantly address blockers (though not necessarily resolve them) as soon as they came up, via the summary tab on the Agents page.
Even as an initial solution, it performed much better than any of the previous iterations in usability testing. We just needed a few more things to make it perfect.
Offline Task Tracking
As requested by admins, we added a way to track offline (off-portal) tasks and avoid duplicate work. This way, if another admin attempts to take action, they will see that the status has already been updated.
No Agent Left Behind
Although this feature wasn't requested, we treated it as an addition to the MVP. Admins would be greeted with this screen upon logging in, ensuring that no agents slipped through the cracks or remained blocked indefinitely.
Results
Just one month after our redesign, we saw significant results. Most notably, Aspire admin productivity for releasing agents improved greatly, thanks to enhanced tracking and the ability to take immediate action.
- 19 days: reduction in release times
- 6.29%: increase in MRR for commissions
- 127%: increase in released agents per week
Key Takeaway: Validating Assumptions for Smarter Decisions
I picked up this method from a Staff Designer at Grammarly, and it’s become a useful tool in my process. Whether you’re starting a new role or refining an existing product, assumption testing helps quickly uncover insights and guide decisions. Running small, focused tests allows us to move faster, learn quicker, and make adjustments along the way.
In the future, if time is tight, I’ll aim to keep these tests even shorter—sometimes, just showing a prototype to a user can tell us a lot.