Building on the resume optimization feature at Winterview
Winterview | 4 minute read

Company Overview: Get Jobs Faster
Winterview is a SaaS startup offering B2B services to recruiting agencies and headhunters, with the goal of improving their candidates' (B2C) application materials and interview skills. The platform provides AI-driven resume enhancements, interview preparation, and cover letter services.
The information in this case study is my own and does not necessarily reflect the views of Winterview.
Summary & Results
The end goal of this project was to increase feature adoption for the resume booster.
As a Product Designer, I identified and addressed a critical gap in our resume optimization feature, specifically targeting users whose resumes were missing key information. Over a five-month period, I experimented with approaches ranging from a guided resume builder to targeted clarifying questions.
- 34% increase in feature adoption
- 40% increase in CSAT scores
- 6x increase in interview invites
Why We Redesigned: Incompatible Resumes, Subverted Expectations
Imagine you're contacted by a recruiter about a position, and they tell you about Winterview. You eagerly upload your resume to their optimization feature; however, you hardly notice any improvements. "It's not my fault; I did everything I should have," you think.
In reality, your resume didn't give the optimizer enough to work with. This was the case for 30% of surveyed users, whose resumes lacked key elements like metrics or didn't follow best practices like the STAR format (situation, task, action, result). This led to decreased adoption and fewer interview invites.

For some users, this was their experience with Winterview's resume optimization feature.
Guiding Users to STAR
Our initial strategy was to improve the building blocks, so to speak. We introduced a bullet point builder, prompting users to detail their experiences in the STAR format; our AI would then summarize and paraphrase their writing.
This was our initial solution for improving inputs to the booster.
Cognitive Overload, Skepticism, & Laziness
We conducted usability testing with defined success criteria: each STAR field had to be completed in under 2 minutes with an "appropriate" response.
Usability testing uncovered 3 key issues:
- Cognitive overload: users struggled with the STAR fields on first use, taking as long as 4 minutes on a single field. There were two causes: comprehending the instructions and recalling relevant experiences. Even then, users often filled out the fields incorrectly.
- Skepticism: users didn't believe their original resumes were to blame for subpar outputs, making them unlikely to use the resume builder.
- Laziness: the average time to complete a single STAR bullet point was 10 minutes. Extrapolate that to 9 bullet points per resume, and users would spend 90 minutes building a resume. I could barely convince users to diligently fill out the STAR fields once, which did not inspire confidence.

During usability testing, completing a task didn't necessarily equate to success. To ensure features were easy to learn, I established time limits as part of the testing criteria, determining how quickly a task should be completed to be deemed user-friendly.
Design Principle: Minimize Errors, Mitigate Consequences
For this feature, our aim was to guide users and decrease cognitive load (minimize errors) while softening the impact of any steps completed incorrectly (mitigate consequences).
Clarifying Questions: Filling in Our Blanks
Our solution was to add a section of clarifying questions to the resume booster, sidestepping both the cognitive overload of the resume builder and the problem of subpar resume inputs.

For instance, if a user failed to include a metric in the result field, our AI would prompt the user with a targeted question such as, "By what percentage did you increase Click Through Rate (CTR)?"
This approach elicits a straightforward percentage (X%) response, supplying the data necessary to create a compelling bullet point. Direct queries like this encourage users to quickly recall specific pieces of information, in contrast to broad questions that demand more time and cognitive effort to answer.
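To make this concrete, here's a minimal sketch of the trigger logic. The names and the rule-based metric check are my own simplifications for illustration (the production feature relied on an AI model), but the shape is the same: detect a result with no number, then ask one narrow question.

```ts
// Hypothetical sketch of a clarifying-question trigger. In production this
// was AI-driven; here a simple heuristic stands in for the detection step.

interface BulletPoint {
  action: string; // e.g. "Rewrote ad copy for the spring campaign"
  result: string; // e.g. "increased Click Through Rate (CTR)"
}

// Simplified heuristic: any digit in the result counts as a metric.
const HAS_METRIC = /\d/;

function clarifyingQuestion(bullet: BulletPoint): string | null {
  if (HAS_METRIC.test(bullet.result)) return null; // nothing to ask
  // A narrow question lets the user recall one number instead of
  // rewriting the whole bullet point.
  return `By what percentage did you improve this result: "${bullet.result}"?`;
}

// A result with no metric triggers exactly one targeted question.
console.log(
  clarifyingQuestion({
    action: "Rewrote ad copy for the spring campaign",
    result: "increased Click Through Rate (CTR)",
  })
);
```

The important design choice is the return shape: at most one question per missing value, so the effort asked of users stays proportional to what's actually missing.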

Feedback was positive overall; however, surveyed users identified several possible quality-of-life improvements.
Adding Key Supporting Functionality
Through additional testing and feedback, it became clear that several supporting features were necessary to enhance the overall functionality of clarifying questions.
- Memory recall: users were occasionally stumped by clarifying questions that weren't about metrics.
- Uncertainty: new users didn't immediately know what an "appropriate" response was, or even what a "clarifying question" meant.
- Still lazy: for power users (3+ resumes a day), filling out clarifying questions was tedious. They understood the benefits, but it took time away from other tasks.
Addressing Power User Concerns
Directly editable placeholder text showcased optimal sample answers and allowed for easy editing. Users could drop in their own information, which was especially helpful for power users.
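As a rough sketch of the pattern, assuming a React front end (the component and prop names here are hypothetical): the sample answer is real, pre-filled text rather than a grey hint that disappears on focus, so it can be accepted as-is, tweaked, or overwritten.

```tsx
import { useState } from "react";

// "Directly editable placeholder text": the optimal sample answer is the
// input's initial value, not a placeholder attribute that vanishes when
// the field gains focus. Submitting without edits still yields a usable
// response, which is what made this fast for power users.
function ClarifyingAnswer({ sampleAnswer }: { sampleAnswer: string }) {
  const [answer, setAnswer] = useState(sampleAnswer);
  return (
    <input
      value={answer}
      onChange={(e) => setAnswer(e.target.value)}
      aria-label="Answer to clarifying question"
    />
  );
}
```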
Digestible For New Users
Definitions were also introduced to provide context on the purpose and importance of clarifying questions, which increased the number of users who enabled them.
Recap & Results
After incorporating the supporting features, we tested clarifying questions once more, noting a 40% increase in CSAT scores compared to the original resume booster, along with a sixfold increase in interview invites in our original focus group. In the six months following implementation, we also saw a 34% increase in feature adoption across all users.
- 34% increase in feature adoption
- 40% increase in CSAT scores
- 6x increase in interview invites
Things to Try Next Time
While the project was a huge success, here are some things that I would like to apply next time:
1. Ask Users Directly: lengthy instructions often go unread or are misunderstood, as our usability tests showed. If you need specific information, ask for it in a straightforward, separate way so it isn't overlooked amid other details.
2. Mitigate Error Consequences: similar to the flexibility I incorporated in Dyne's meetup feature redesign, I would focus on giving users multiple opportunities to submit key information. For instance, if a user's original resume omitted metrics, clarifying questions gave them a second chance to provide that information.