Alyshia Olsen


Experiment Health Monitor

Designed new indicators to help SigOpt users quickly assess the health of their experiments.

Summary

The SigOpt product allowed AI engineers to quickly conduct experiments on their algorithms to identify the most appropriate hyperparameters.

One limitation was that experiments could fail or stall for a variety of reasons, some on the user's end and others on the server's end. The existing user experience made these problems hard to spot and wasted our users' time.

SigOpt users had to monitor their work in a long table of run details to understand the health of their experiment as a whole and decide whether to take action.

WE HAD AN OPPORTUNITY TO HELP OUR USERS UNDERSTAND THE HEALTH OF THEIR EXPERIMENTS AT A GLANCE AND MORE EASILY CONSERVE COMPUTE RESOURCES.

Team

SigOpt Core

Deliverables

Visualization Iteration
Wireframes
Finalized Designs

Responsibilities

UX/UI Design
Visual Design
Requirements Gathering

Timeline

April 2022 - August 2022

My Role

I proposed two new UX surfaces and incorporated them into the existing design at different levels of the product's hierarchy to help users understand the status of their experiments. A Status Bar let users gauge the overall health of all experiments from a single page, while a runtime tracking visualization helped them identify problems within an experiment and decide whether to pause or cancel it if needed.

Shipped Designs

The Status Bar gave users at-a-glance insight into the status of their experiment so that they could dive into the details only when needed.

Additionally, even a running experiment could be unhealthy, consuming resources without making progress; in those cases, users typically want to cancel the affected runs as soon as they notice the issue. The new runtime graph gave users detailed information about each run, helping them identify which runs were problematic and decide whether to continue the experiment or cancel it.