Platform: Web
Scoop is a mobile carpooling app that enables colleagues to easily arrange carpools to and from work. At Scoop, we partner with employers to provide employees with carpooling benefits that solve parking and transportation challenges, increase employee retention, and promote sustainability. Employees who ride in carpools get discounted or free trips and drivers are paid by employers for providing the rides.
In 2018, Scoop's billing and reporting was delivered to customers through our customer success team, who manually put together quarterly, monthly, and sometimes weekly program performance summaries.
We set out to create Scoop's very first web product: a reporting dashboard for our employer customers. Those customers included eleven Fortune 100 companies. The dashboard would provide visibility into their transportation program performance.
Our high level goals were to:
1. Provide customers with visibility into their employees' usage of the transportation program and enable them to substantiate their Scoop invoice.
2. Enable customers to remove program access for former employees.
3. Show customers how Scoop carpools are helping them meet their goals.
4. Reduce the customer success team's overhead associated with generating and sending frequent reporting to customers.
This was a two-year project that spanned from the fall of 2018 to the summer of 2020. I was the lead designer on the project, with support from two other designers and our design team manager.
I worked closely with a product manager, our customer success team, and UX researchers to understand user needs. Research included generative research to establish a base understanding of expectations and priorities, and usability testing of low-fidelity concepts and high-fidelity designs. I worked with engineers to make sure that UX needs were met by both backend data systems and frontend UI. I collaborated with our marketing and customer success teams to launch the features and collect customer feedback.
Before I started working on designs, our UX researcher interviewed contacts at eleven of our existing customers to understand what they would need and expect from a Scoop reporting dashboard, both immediately and long term. She identified 19 themes among customer needs, including visibility into program usage, tools for program expansion, and blocking former employees from using the service.
She also developed two personas based on these conversations. Throughout the process we considered the needs of two main types of users:
Sophisticated stakeholders
Customer contacts who are specialists in transportation. They have data science savvy and the time to dedicate to manual analysis.
Seeking stakeholders
Customer contacts for whom transportation is a small piece of their work. They manage a variety of benefits, facilities, or real estate programs within their role.
Early on in the process I developed two sets of designs for two different internal uses.
First, I developed a set of designs to unblock our backend engineers, who would need lead time to build the systems that provide data to the dashboard. We focused on the first feature that required complex custom backend work: user management, specifically the ability to remove terminated employees' access to carpool benefits paid for by their former employer. We prioritized this feature because several pending contracts required it before customers would sign with Scoop.
These designs were not representative of the visual style that the UI would have, but detailed the functionality of user management. I met with the engineering team to go over the designs, answer questions, and talk through areas that needed more clarity.
The second set of designs we made was a future-facing plan for the layout of the site. I reviewed publicly available eye-tracking research that focused on website users' expectations for where to find a page's home link, navigation, system information, and content title. I also organized the feature insights from our generative research into categories, so that everything we planned on building long term would fit together into a cohesive system.
With limited resources we had to aggressively prioritize the user needs we wanted to build for. The foundational features we set out to build that had the most user and business value were:
User management
Give customer administrators visibility into which of their employees are utilizing carpooling benefits and remove program access for former employees.
Performance metrics
Provide customers with both a high-level overview and more detailed graphs that show employees' usage of their transportation program and highlight how Scoop carpools were driving impact towards customer goals.
Reports
Enable customers to substantiate their Scoop invoice, see where carpooling was working well for employees, and export information for customer analysis in other business intelligence tools.
To speed up the process of design and development, we used the Bootstrap CSS framework for development, plus Bootsketch, a purchased Sketch library of Bootstrap components for design work. Using Bootstrap allowed us not only to speed up implementation, but also utilize built-in features like responsiveness and WAI-ARIA accessibility.
We customized the Bootsketch library with Scoop colors from our mobile component library and documented custom components as needed. For example, I created custom sidebar, top bar, metric summary cards, and date picker components.
For our engineering team we documented the interactions of all components that we used, both Bootstrap components and our custom components.
For the sake of this case study I'm going to focus mainly on the features within our dashboard that addressed the unique needs of carpool program administrators. That said, here's a brief overview of the foundational features we built to support a secure web app.
The experiences included account creation, secure password guidance, two-factor authentication, sign in, and internal tools to support dashboard creation and maintenance.
Our first feature was user management. I started by collaborating with a product manager and our chief product officer in a whiteboarding session, where we explored what we needed the feature to do.
After that I designed a low-fidelity prototype, covering the core user management experiences, for us to concept test with some of our customer contacts.
Our primary goal was to validate usability of the experience we had designed. I observed and acted as a notetaker for all of the interviews that our UX researcher conducted with customers.
Squiggly lines and minimal color in the concept test screens allowed users to focus on the functionality of the interaction rather than the visuals. We were able to put the screens together quickly because they didn't require pixel perfection.
Lots of screens went into the prototype.
In usability testing, our participants were able to complete the tasks and gave feedback that the interface was easy to use. For this reason, we didn't make any significant changes to the functionality based on testing. One thing we did learn was that all customers asked for program usage information, which was an excellent validation of the next feature we had already planned on building.
We validated our hypotheses about discoverability of the bulk deactivation feature and email addresses being an acceptable way to identify carpoolers. This was important because in most cases, we didn't have full names for our carpoolers since the app was designed with last name set as optional.
Our second concept test covered the performance metrics and reports sections of the dashboard. We recruited contacts from our existing customers to give feedback on the designs. These designs were more high-fidelity because our visual design and component library was more developed at this point. We presented the prototype in greyscale because we didn't want to spend valuable feedback time talking about color choices.
The concept test covered two features:
Performance metrics
A high level overview of carpool program performance and detailed graphs of performance over time.
Reports
Downloadable reports of trip-by-trip information and program usage by carpooler.
For the performance metrics feature, we designed two versions. The first combined the high-level overview of program performance and the more detailed graph views of each metric on the same screen.
The second version had the overview on its own, with the graphs on separate screens accessed by clicking on each of the cards or the sidebar.
For the performance metrics, we learned that our customer contacts preferred having the graphs separated out on their own pages, because the sidebar served as a table of contents and gave them a quick understanding of the information available in the dashboard.
We also learned that separating metric information by location was very important. Aggregating data across two or more locations didn't tell the whole story.
For the reports feature, we designed two versions. The first had tables that were customizable in the web app, plus the ability to download the information as a CSV.
The second version was a step-by-step report builder where the user answered questions one at a time about what information they wanted in their report before downloading a CSV.
Each version of this feature was favored by different test participants for different reasons. Users preferred being able to preview report information in the browser in the table version, but also appreciated the simplicity and focus of the build-a-report version.
In this test we asked our customer contacts how often they might want to access information in the performance metrics and reports section and they unanimously answered less than once a week, and not on mobile. Based on this information we decided not to optimize or QA for mobile responsiveness.
We made some design changes to incorporate what we learned in the concept test. For the performance metrics screen we went with the version that had graphs on a separate page.
In order to get to launch a bit faster we took out some of the comparison details on the high level overview and moved them to a later phase of the project as a nice-to-have.
We also launched without the location selection and moved it to a later phase, because there was significant backend engineering required to categorize trips by location.
For the reports feature we combined the best of both versions we tested. For simplicity we provided a list of pre-prepared reports that could be downloaded to CSV and filtered and sorted in other tools. For clarity we provided a sample report for viewing in the browser window so that users could see the exact column headers and type of data in each report before downloading. We also provided monthly and yearly reports since users had clear use cases for both monthly reporting and end-of-year summaries.
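As a rough illustration of the usage report, here is how per-carpooler data could be serialized to CSV. The row shape and column names are hypothetical; the real reports exposed their exact headers through the in-browser sample:

```typescript
// Sketch of serializing a per-carpooler usage report to CSV.
// Field and column names are illustrative, not Scoop's actual schema.
interface UsageRow {
  email: string;        // carpoolers were identified by email address
  tripsAsRider: number;
  tripsAsDriver: number;
}

function toCsv(rows: UsageRow[]): string {
  const header = "Email,Trips as rider,Trips as driver";
  const lines = rows.map((r) =>
    [r.email, r.tripsAsRider, r.tripsAsDriver].join(",")
  );
  return [header, ...lines].join("\n");
}
```

Because the columns are fixed per report type, customers could open the file directly in their own business intelligence tools without reconfiguring anything.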
I collaborated closely with the engineering team to select a charting library for our metric graphs. I started by providing the functionality requirements and nice-to-haves. The engineers compared several charting libraries and also weighed the pros and cons of building graphs from scratch. We chose Recharts because it had all the functionality we needed and was customizable, well documented, and actively maintained. We customized the visual design of the charts as well as the behavior of the axis labels and hover states.
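Recharts consumes a plain array of objects as its `data` prop, so much of the charting work reduces to shaping trip records into a series. A minimal sketch of that kind of transform, with illustrative field names rather than our actual data model:

```typescript
// Roll raw trip records up into the month-by-month series a Recharts
// <LineChart data={series}> would consume. Shapes are illustrative.
interface Trip {
  date: string; // ISO date, e.g. "2019-06-14"
}

function toMonthlySeries(trips: Trip[]): { month: string; trips: number }[] {
  const counts = new Map<string, number>();
  for (const t of trips) {
    const month = t.date.slice(0, 7); // "YYYY-MM"
    counts.set(month, (counts.get(month) ?? 0) + 1);
  }
  // Sort chronologically so the x-axis reads left to right in time.
  return [...counts.entries()]
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([month, trips]) => ({ month, trips }));
}
```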
For all of our projects we prepare designs for our engineering team using a method based on Scott Hurff's UI stack. We used the same approach for this project.
For every screen we prepared up to seven types of states: ideal, empty, partial, loading, error, success, and responsive breakpoints. Sometimes there was more than one state of a given type, and sometimes a screen didn't need every one, but we considered each of these states for every screen.
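In code, these screen states map naturally onto a discriminated union. This is one way a frontend could model them, as a sketch rather than our actual implementation; responsive breakpoints live in CSS rather than in this type:

```typescript
// One screen shows one state at a time. Tagged union per Scott Hurff's
// UI stack; the payload shapes here are illustrative.
type ScreenState<T> =
  | { kind: "loading" }
  | { kind: "empty" }
  | { kind: "error"; message: string }
  | { kind: "partial"; data: T; note: string } // some data still processing
  | { kind: "ideal"; data: T }
  | { kind: "success"; data: T; confirmation: string };

// Exhaustive switch: the compiler flags any state left unhandled.
function describe<T>(state: ScreenState<T>): string {
  switch (state.kind) {
    case "loading": return "Loading…";
    case "empty": return "No data yet";
    case "error": return `Error: ${state.message}`;
    case "partial": return state.note;
    case "ideal": return "Showing data";
    case "success": return state.confirmation;
  }
}
```

Modeling states this way makes the "grid of states" design deliverable a direct checklist for engineers: each union member corresponds to one column in the grid.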
Screens were arranged in two ways: first by state, in a grid; second, as flows of the same screens with click zones, interactions, and the order of interactions documented.
Here's an example of the performance overview screen arranged by state:
And here's a flow for the trip reports section of the dashboard:
We regularly met with the engineers for feedback and guidance long before designs were completed. This helped guard against getting something fully designed and then having to start over based on engineering constraints. Once we reached the state above, where all the screens and flows were finished, we had a walkthrough meeting for each feature where I went through the designs and answered any questions. After implementation started I encouraged questions from the team and made small design updates based on their feedback. I also participated in QA and gave guidance on what was not working as designed.
For accessibility one of the other designers on my team put together a checklist for all of the accessibility considerations we needed to address. Prior to this project, we worked with an accessibility consultant to audit our mobile app and learned a lot in the process.
A few highlights from our accessibility considerations:
By necessity the carpooler app collects a lot of personal information from people: home and work locations, the time that they want to go to those places, and their phone number so other carpoolers can contact them.
In creating a reporting dashboard for the employers paying for those carpools, we faced a challenge: how could we provide information about carpools with enough detail to substantiate billing, without letting employers see where each employee was going every day?
The good news was that no customer had told us they had a business need to know where an individual was going on a given day, and even if they had, we wouldn't have allowed it. But we did talk to people who wanted to see information for each carpool so they could validate their bill, map where carpooling was working for their employees, and do custom analysis. We set out to meet the most common needs without exposing private user data.
The way we accomplished this was by splitting up reporting into two separate sections. One set of reports had information about individual trips: the where and when. Another section had information about carpooler usage, how many trips they had taken as a rider or a driver in any given month or year.
In the user management section we ordered carpoolers by how recently they had taken a carpool, but only gave week-level specificity to protect their privacy. The column headers were "Week of last trip" and "Week of first trip."
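The week-level rounding is simple to sketch: collapse each trip date to the first day of its week before it reaches the dashboard. This assumes a Monday week start, which is an illustrative convention rather than Scoop's documented one:

```typescript
// Round a trip date down to the Monday of its week so the dashboard
// exposes only week-level recency, never an exact trip date.
// The Monday week-start convention is an assumption.
function weekOf(date: Date): Date {
  const d = new Date(date.getTime());
  const daysSinceMonday = (d.getDay() + 6) % 7; // getDay(): 0 = Sunday
  d.setDate(d.getDate() - daysSinceMonday);
  d.setHours(0, 0, 0, 0);
  return d;
}
```

Applying this on the backend, rather than in the UI, means precise trip dates never leave Scoop's systems at all.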
We rolled out our feature set gradually, starting with user management to a few customers who had been waiting on that feature to sign their contracts.
After that came the performance metrics overview, reporting, and performance metrics graphs features which went out to all of our customers.
In almost every launch meeting, customers were surprised not to see a metric on the performance overview for pounds of CO2 saved. This surprised us because in our initial discovery research customers had not seemed very interested in this metric. With the new information, we quickly designed, built, and launched an update that included pounds of CO2 saved.
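The CO2 metric itself is straightforward arithmetic. A hedged sketch, assuming the EPA's figure of roughly 404 grams of CO2 per passenger-vehicle mile and assuming each rider mile replaces a solo-driver mile; Scoop's actual formula may have differed:

```typescript
// Assumed EPA average CO2 emissions for a passenger vehicle (grams/mile).
const GRAMS_CO2_PER_MILE = 404;
const GRAMS_PER_POUND = 453.592;

// Pounds of CO2 saved, assuming every rider mile removes a car mile
// from the road. A sketch, not Scoop's production calculation.
function poundsCo2Saved(riderMiles: number): number {
  return (riderMiles * GRAMS_CO2_PER_MILE) / GRAMS_PER_POUND;
}
```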
At the same time, we added a metric for total registered carpoolers, which was something we wanted to have early on but it was delayed until we had backend support.
The customer administrators we built the dashboard for were people we knew well at Scoop. They were about 200 contacts that had high-touch relationships with our customer success team and after all of our user testing I knew many of them on a first-name basis.
Discussions we had at launch meetings and shortly after indicated that customers were thrilled with the visibility they had into their transportation program performance and the ability to have the information on-demand.
"I'm almost tearing up from happiness."
- A customer admin, Senior Manager of Transportation, in response to being able to download a CSV of detailed carpool trip information.
We heard that admins had no problems removing former employees from their program. Even our own internal admin was impressed by how easy it was, and sent us a surprised and delighted message after using the interface.
Information about how many cars were being removed from parking lots was especially valuable, encouraging customers to renew their Scoop contracts and, in some cases, agree to a contract up-sell. One customer was able to rent out empty parking spaces, so in addition to the program incentives offered through Scoop, they started paying their employees directly to use the service.
Our customer success team was very relieved. Each employee was able to take back hours of their week for other endeavors.
Unfortunately, most of these features went live right around March 2020. As you might imagine, due to COVID-19 lockdowns, the summer of 2020 was not a great time for carpooling. Scoop saw a 95% decrease in carpools, and those numbers have yet to bounce back. For this reason it has been difficult to collect much user feedback or data on the dashboard's impact.
A clear lesson: it's possible to make no errors and still not succeed at your goal.
Scoop spent the fall and winter of 2020 pivoting its product focus and now builds tools that support remote work.