Paul Kim • Product Designer

Velocity Insights

Peregrine Hero.png

OVERVIEW

Problem

Creating, testing, and deploying code can be an arduous process for an engineer. It’s not only time-intensive and cognitively demanding but can also involve a variety of bottlenecks that stall progress and ruin momentum.

In 2019, Indeed set out to double the velocity of product development while reducing delivery lead time by 10 percent by the end of the first quarter.

How might we find a way to showcase velocity based on performance and provide a detailed view of insights and reasons for certain patterns?

Outcome

The team designed and validated a web application that helps the senior leadership team and engineering managers facilitate velocity management for a project or team through visual insights. At a glance, users can see a retrospective of delivery lead time and throughput, including A/B tests, production bugs, and issues pushed into deployment. The app also proactively identifies the top in-flight tickets that may be bottlenecked and stuck in a particular status in the engineering workflow.
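
To make the bottleneck surfacing concrete, here is a minimal sketch of the kind of logic involved. The Ticket fields, statuses, and per-status thresholds are hypothetical illustrations, not the product’s actual data model or rules:

  # Sketch of surfacing "stuck" in-flight tickets (hypothetical schema and thresholds).
  from dataclasses import dataclass
  from datetime import datetime, timezone

  @dataclass
  class Ticket:
      key: str
      status: str
      entered_status_at: datetime  # timezone-aware: when the ticket entered its current status

  # Hypothetical thresholds (in days) for how long a ticket may sit in each status.
  STUCK_THRESHOLDS_DAYS = {"In Review": 3, "In QA": 5, "Blocked": 1}

  def top_bottlenecks(tickets, limit=5):
      """Return the in-flight tickets that have most exceeded their status threshold."""
      now = datetime.now(timezone.utc)
      flagged = []
      for ticket in tickets:
          threshold = STUCK_THRESHOLDS_DAYS.get(ticket.status)
          if threshold is None:
              continue  # not a status we monitor for bottlenecks
          days_in_status = (now - ticket.entered_status_at).total_seconds() / 86400
          if days_in_status > threshold:
              flagged.append((ticket, days_in_status - threshold))
      # Most overdue first, capped at the number of insights surfaced at once.
      return sorted(flagged, key=lambda pair: pair[1], reverse=True)[:limit]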

This experience has allowed the organization to ship products to customers 28% faster while maintaining product quality. So it is not just fast delivery, but quality delivery, faster.

TEAM

Peregrine Team.png

TOOLS

Peregrine tools.png
 

Approach

Process.png
 

Design Considerations

Design Considerations.png

Investigate

Understanding the users through discovery research

Before we could develop an effective solution that supports velocity management, we needed to understand how engineering managers work and the pain points they experience. The research team facilitated individual 30-minute interviews with five engineering managers, covering how they define and track velocity, how important velocity is to them, and what they expect from a velocity management tool.

Things we learned through these conversations

  • Engineering managers didn’t track velocity in a formalized way
    Some managers said they weren’t sure at what point to start tracking velocity, and another acknowledged that projects are variable, making them difficult to track. Measuring velocity was also too difficult for those not using agile workflows with time metrics.

  • Most managers saw value in tracking velocity
    The majority of those interviewed saw upsides to tracking velocity, including the added value of standardizing best practices, more straightforward performance management, and identifying process effectiveness over time.

  • All participants noted some level of risk
    Defining metrics can be difficult since projects vary greatly, morale could suffer due to comparison, and getting team buy-in could be a challenge.

  • Managers had similar expectations of a velocity management tool
    Managers interviewed were interested in a configurable tool showing top areas of concern and offering ideas or recommendations for next steps. They also wanted the tool to normalize data across teams based on variability and show how accurately teams estimated story points and other measurements. 

Identifying the project focus

Creating and using standardized visual insights for velocity metrics was new to Indeed, so there wasn’t an existing process for the team to explore. Instead, we were tasked with developing one from scratch, based on the following problem statement: 

Problem statement.png

This was a high-risk, high-reward concept. If the tool worked as desired, managers could proactively improve velocity at Indeed. In the worst-case scenario, it could impede velocity management by providing incorrect or confusing data. 

For this reason, the UX team persuaded stakeholders to host a design sprint: an intensive four-day workshop that uses cross-functional collaboration and creative exercises to answer critical business questions and conceive of a viable, desirable product.

The design sprint involved a mix of specialties within two groups: technical stakeholders (director, senior project manager, and software engineers) and UX professionals supporting the Internal Tools team (research, content strategy, design, and design technology). Drawing on their individual areas of expertise, the sprint participants worked together to envision an MVP.

Mapping the user journey

During the sprint, participants mapped the journeys of the experience’s two main user groups: engineering managers and members of the senior leadership team (SLT). Next, they plotted within this map the pain points they identified during the earlier research read-out and discussions with subject matter experts. The high concentration of pain points around “understand insights” and “configure insights” indicated to the team that these were the areas of the journey that the MVP should focus on. 

User+flow+1.jpg
User flow 2.JPG
 

Define

With the sprint focus narrowed to “understand insights” and “configure insights,” participants sketched their solution-oriented ideas. The sprint facilitator presented these anonymous, low-fidelity drawings to the group, after which each participant voted on their favorite solutions. The team’s key stakeholder served as the final decider, determining which sketched features to include in the prototype and which to backlog for possible iterations in future quarters.

Solution sketches from the sprint, with dots representing votes for a feature or entire solution flow.


Solution 2.JPG

Key features identified for inclusion in the prototype:

  • Monthly email that provides a velocity report in summary form

  • Team view 

  • Insights by month/current view this month

  • Recommended actions with link(s) to engineering ticket(s)

  • All on one page

  • Card view with metrics

Ideas added to backlog for future iterations:

  • Adjustable time frame

  • Compare quarter to quarter?

  • Engineering ticket details, including ticket status

 

Storyboard

To align on the experience of the chosen solution, we run a two-part storyboarding process: first a user test flow, then an eight-panel storyboard.

User Test Flow.JPG

A user test flow has the team focus on the solution and map out the ideal flow, with one deliberate limitation: it must fit in six steps.

We communicate to the team that this limit isn’t there to constrain them; it drives people to keep the experience simple. Without it, members of the group could add an endless number of steps to what should be a simple solution.

The team aligns on and votes for the flow that best reflects the goals of the problem, then moves on to mapping the selected flow into an eight-panel storyboard.

Peregrine Design Sprint Storyboard full.JPG

Next, we parsed the experience into an eight-panel storyboard. This process helps the team understand the full end-to-end experience and gives members the opportunity to contribute to the individual flows: the content strategy, information architecture, and interactions that stitch the prototype together.

If you’re following along, you may notice that six steps became an eight-panel storyboard. That’s because we’ve learned that some steps in the user test flow expand into more than one story panel, so we build that extra room into the process.

Lastly, we ensure that every panel has clear and thorough documentation around content strategy, interactions, information architecture, and preliminary visual language to enable a smooth prototyping process.

We don’t leave the room without annotating key elements of the storyboard to help prototyping go smoothly.

 

Prototype

With alignment around the defined solution, I worked with the other UX team members to stitch together an experience that we would test with users that same week. Below is a collection of key flows within the experience:

1.1 | Email.png

Email velocity report

Engineering managers would receive a monthly email reflecting their teams’ collective cycle time and quality changes. An included link would take users to their Peregrine dashboard for a closer look at the data.

2.1 | Dashboard landing.png

Dashboard overview

Users would log into the dashboard using their credentials, which would inform the “My team” information. On the dashboard, users would see visual representations of month-over-month data, including cycle time, quality, and throughput, as well as recommendations (explained below). 

Recommendations

The “Recommendations” section would provide issue-level insight into subpar metrics to help managers proactively improve team velocity. Each insight would include links to associated engineering tickets that may need review or action.

 

Test

The UX researcher presented the resulting prototype to a subset of users to validate the content and usability. Did this prototype add value, and could it help drive velocity metric improvements across the org and at an individual level?

The five engineering and QA managers provided the following feedback in one-on-one discussion sessions with the researcher:

Email velocity report

  • Most users felt the email made them want to learn more.

  • All users wanted the email to include more information so it was actionable.

  • Most were skeptical about data credibility, especially for the quality metric.

Dashboard

  • Most users agreed that the dashboard was a good start but thought it needed more information to be trusted and used.

  • All users wanted to be able to customize the dashboard.

  • Most users accepted the number of bugs that escaped to production as a baseline for quality.

  • Most wanted the ability to compare themselves to other teams.

  • Most users wanted more granular data and said they were fine digging into it themselves.

“Recommendations” section

  • Users wanted more information about how recommendations were determined. 

  • Most users expected to go to other tools to get more information. 

Some key learnings for the product included:

Understanding how velocity impacts engineering teams

The research gave us a deep dive into how specific data points and content pieces carried weight in the decisions engineering teams make. Managers wanted to guard the engineering culture against being measured too directly on velocity, while still seeing the positive velocity trends that let their teams get products to end customers faster. Quality assurance, code reviews, and bug resolution consistently came up as areas of opportunity. Lastly, identifying bottlenecks, meaning which engineering efforts were taking the longest and holding work up, was another opportunity to optimize velocity.

Transparency in data source and measurement

Because measuring velocity is complex and varied, engineering managers are hesitant to trust metrics without understanding where they’re coming from and how they’re calculated.
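
As an illustration of the kind of transparency managers asked for, here is a minimal sketch of how delivery lead time and throughput could be derived from raw ticket timestamps. The field names and schema are hypothetical, not the product’s actual data model:

  # Sketch: deriving velocity metrics from ticket timestamps (hypothetical schema).
  from statistics import mean, median

  def lead_times_in_days(tickets):
      """Delivery lead time per ticket: work started -> deployed to production."""
      return [
          (t["deployed_at"] - t["started_at"]).total_seconds() / 86400
          for t in tickets
          if t.get("deployed_at") and t.get("started_at")
      ]

  def monthly_summary(tickets):
      """Summarize one month of shipped work; every number traces back to ticket events."""
      days = lead_times_in_days(tickets)
      if not days:
          return {"throughput": 0, "mean_lead_time_days": None, "median_lead_time_days": None}
      return {
          "throughput": len(days),                # tickets shipped this month
          "mean_lead_time_days": mean(days),      # sensitive to outlier tickets
          "median_lead_time_days": median(days),  # robust to outliers
      }

Exposing both the mean and the median echoes the V1 decision below to rework the delivery lead time visualization around those measures.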

 

Deliver

Version 1

We developed V1 with the help of qualitative data from the validation sessions, ongoing collaboration with stakeholders and the SLT, and backlog ideas from the sprint. Key changes:

  • Add content (tooltips, callouts) that provides context on each data point, including what it is and how it’s measured.

  • Rework visualization of delivery lead time (stacked mean, stacked median, …)

  • Provide a clear CTA for each engineering ticket in the “Dynamic Bottlenecks” section to streamline the check-in process for managers

  • Email summaries for managers so they can see their team velocity metrics at a glance and keep them top of mind

 

RESULTS

Indeed exists to help people find jobs, so getting quality products out efficiently and effectively to our customers, during what is arguably one of the most stressful times in a person’s life, is paramount in the hiring tech space. Thanks to this solution, our end customers received products 28% faster: delivery lead time dropped from 14 days to 10, a reduction of 4/14, roughly 28.6%. This was experienced across the entire organization, all while maintaining the quality of our products.

Furthermore, we were able to ensure that:

  • Product and engineering had a centralized source of truth for tracking velocity

  • Velocity metrics carried no weight in team members’ performance reviews

 

Evolution

Since its inception, the product has continued to evolve and open up collaboration opportunities across the organization. Today it is considered the approach for helping the organization achieve a goal called “Concept to Cash”, which can be restated as: “How can we measure the quality and velocity of the products we deliver to our customers?” Because of this, the tool has adopted viewpoints outside of engineering and now tracks the full flow: from the moment an idea exists to how fast the organization can take that idea through production and return value to both customers and the business.

The product has therefore had to evolve both its front-end and back-end logic to house all of these complexities. Numerous research studies, information architecture efforts, design system evolutions, and technical innovations have shaped the experience as it looks today, shown below:

 

Information Architecture

With the introduction of multiple back ends and duplicative content, we set out to make sure the content was organized, discoverable, and presented in a way that would shape the optimal experience for making velocity decisions in the organization.

We took a five-step approach:

Similarity matrix to help identify how to categorize information


  1. Content audit - Google Sheet
    Goal: Gather all of the current and future content that will live within the product across all integrating tools

  2. Open Card Sort - Optimal Sort
    Goal: Understand how internal users group and label content pieces identified in the audit
    Methodology: 15 internal users, remote unmoderated

  3. Closed Card Sort - Optimal Sort
    Goal: Validate and iterate on the content categories from the open card sort
    Methodology: 15 internal users, remote unmoderated

  4. Treejack Study - Optimal Workshop, Tree test
    Goal: Evaluate the discoverability of content within the proposed categories
    Methodology: 15 internal users, remote unmoderated

  5. Usability qualitative interviews
    Goal: Validate and iterate on the proposed experience’s categories and hierarchy
    Methodology: 7 internal users, remote moderated

Dendrogram visualization to understand the level of agreement on the prospective categories

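For readers curious how a dendrogram like this is produced, here is a minimal sketch using SciPy’s hierarchical clustering on a card-sort similarity matrix. The cards and similarity values below are made up for illustration:

  # Sketch: turning a card-sort similarity matrix into a dendrogram with SciPy.
  import numpy as np
  import matplotlib.pyplot as plt
  from scipy.cluster.hierarchy import dendrogram, linkage
  from scipy.spatial.distance import squareform

  cards = ["Cycle time", "Throughput", "Quality", "Bottlenecks", "Team settings"]

  # similarity[i][j] = share of participants who sorted cards i and j into the same group.
  similarity = np.array([
      [1.0, 0.8, 0.4, 0.6, 0.1],
      [0.8, 1.0, 0.5, 0.7, 0.2],
      [0.4, 0.5, 1.0, 0.3, 0.2],
      [0.6, 0.7, 0.3, 1.0, 0.1],
      [0.1, 0.2, 0.2, 0.1, 1.0],
  ])

  # Convert similarity to distance (diagonal becomes zero), condense, and cluster.
  linkage_matrix = linkage(squareform(1.0 - similarity), method="average")

  # The dendrogram's merge heights show the level of agreement on each grouping.
  dendrogram(linkage_matrix, labels=cards)
  plt.tight_layout()
  plt.show()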

Learnings from the information architecture research included:

  • Identified 6 distinct groupings of content

  • Evolved the product’s framework to a tabbed design on the front end to help users navigate the information, and separated the information through three layers of hierarchy

    • Primary navigation (left hand navigation)

    • Secondary navigation (top navigation bar with tabs and controls)

    • Tertiary navigation (within specific tabs to navigate more micro-content)

  • Identified a requirement for the navigation to support three dimensions, based on navigating across and within business areas (sketched after this list)

    • X-axis: moving across your own functional area within a business area

    • Y-axis: spanning the entire business unit at every level

    • Z-axis: a visual interaction that lets a user easily navigate across the organization to find relevant content
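
As a rough illustration of how the three hierarchy layers and three axes could map onto a navigation model, here is a sketch with hypothetical labels; the real product’s areas and tabs aren’t reproduced here:

  # Hypothetical navigation model: three layers of hierarchy per business area.
  NAVIGATION = {
      "Engineering": {                                              # primary: left-hand nav
          "Velocity": ["Cycle time", "Throughput", "Bottlenecks"],  # secondary tab -> tertiary
          "Quality": ["Escaped bugs", "A/B tests"],
      },
      "Product": {
          "Concept to Cash": ["Ideas", "In production", "Value returned"],
      },
  }

  def x_axis(area, tab):
      """X-axis: move across tabs within the same business area."""
      return [t for t in NAVIGATION[area] if t != tab]

  def z_axis(area):
      """Z-axis: jump across the organization to other business areas."""
      return [a for a in NAVIGATION if a != area]

  # The Y-axis, spanning the business unit at every level, would walk all
  # three layers of this structure at once.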

 

Prototype

To conceptualize the complex three-dimensional experience, we crafted a quick prototype that also included the reframed front-end design language for the content.

Recorded: Prototype in action built in Principle