Overview
I interned on the Limited Scope team, which focuses on products that help consumers find the legal answers they need. Within Limited Scope, I worked with the Activation & Retention scrum team, which focuses on getting prospects to register on Avvo and continue using Avvo's services. For 5 weeks, I owned a project to improve the high-traffic Ask a Question (AAQ) page on Avvo's Q&A product.
Project Length
5 weeks
Tools
Paper & Pencil, Sketch, InVision, Google Analytics, Crazy Egg, Optimizely, Avvo's Usability Lab
Process
Problem Identification, Heuristic Evaluation, Hypothesis, Competitive Analysis, Sketches, Prototypes, Usability & A/B Testing
Deliverables
Sketches, 4 hi-fi prototypes, 2 usability tests, 2 A/B tests
'Ask a Question' Page
The AAQ page was the focal point of my internship project
The Ask a Question (AAQ) page is a space for users to ask anonymous questions on a forum and get free advice from multiple lawyers in their area. Askers receive notifications when a lawyer responds, at which point they can ask clarifying questions to understand their options.
Summary of the Ask Path:
Problem Statement
Many users who land directly on the AAQ page are bouncing and therefore not entering the ask path. This is a missed opportunity to register users and collect their questions for Avvo's knowledge base. How might we improve the AAQ page experience for landing traffic to increase ask path throughput?
The following data observations support the problem statement:
- Direct landers make up a significant percentage of visitors to the AAQ page; however, they have the highest exit rate of all paths to the page
- A majority of the total AAQ page traffic is from mobile users
- Mobile users have a higher bounce rate than non-mobile users
- More users preview their question than actually submit them
This project was an important opportunity for Avvo: improving the AAQ landing experience would both register more users and grow the Q&A knowledge base.
Heuristic Evaluation
I used Jakob Nielsen’s 10 heuristics for user-interface design as a guide to quickly assess the usability of the AAQ page.
The heuristic evaluation revealed three major areas of weakness:
- Unclear value props
- Unclear expectations
- Lack of trust signals
Original landing experience of the AAQ page on mobile
Hypothesis
Based on the heuristic evaluation, I hypothesized that users are not following through the ask path because the existing landing page has unclear value props, unclear expectations, and a lack of trust signals.
Summary of Key Metrics:
- If bounce rate decreases, then users are progressing through the ask path
- If the number of questions asked increases, then people are registering and submitting their questions to the Q&A database
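To make these metrics concrete, here is a minimal sketch of how they can be computed from Google Analytics-style session counts. The numbers are hypothetical placeholders, not Avvo's actual figures.

```python
# Hypothetical GA-style counts for the AAQ page (illustrative only)
landing_sessions = 20_000   # sessions that entered the site on the AAQ page
bounced_sessions = 13_000   # landed and left without any further interaction
questions_asked = 900       # sessions that completed the Ask Path and asked a question

bounce_rate = bounced_sessions / landing_sessions  # share of landers who left immediately
ask_rate = questions_asked / landing_sessions      # share of landers who asked a question
print(f"bounce rate: {bounce_rate:.1%}, ask rate: {ask_rate:.1%}")  # 65.0%, 4.5%
```

The experiments below aim to move the first number down and the second number up.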
Competitive Analysis
A selection of products I conducted a competitive analysis on
Internal pages gave me a sense of Avvo's existing design patterns. This kept design solutions consistent with the existing brand identity and eased development by providing existing UI patterns to implement in A/B test experiments. External competitors gave me a sense of in-market standards and design inspiration.
Recurring findings from the competitive analysis, targeting the three main areas of weakness identified in the heuristic evaluation:
- Unclear Value Props: Sparklers attract and introduce users to a product or service
- Unclear Expectations: A how-to guide demonstrates how to use a product and tells users what to expect afterward (i.e., what happens after completing a transaction)
- Lack of Trust: A hero image in a landing experience eases the audience into the context and provides visual appeal
Sketches
I designed mobile first since most users were entering through mobile and it's a good rule of thumb to design the hardest interface first. The solution was responsive.
Using findings from the competitive analysis for inspiration and direction, I created 4 sketches. To narrow my solutions, I led a design review to gather feedback on information hierarchy, features, and copy.
Key takeaways from the design review were:
- The new experience can look completely different from the existing one
- Highlight social impact and community within Avvo Q&A - one way could be to use sparklers or testimonials
- Keep in mind that if someone sees a successful scenario, they’re more likely to try it out themselves
- Remove the search bar because data observations show that users end up using the global search instead of submitting their own question to the database
- The Q&A page is one of the only Avvo pages that doesn't have a formal landing experience
- v4 was the most well-received
Prototype #1 / Usability Test #1
I further ideated on v4 of my sketches to design a hi-fi prototype. Since the AAQ page is one of the only pages without a formal landing experience and so many users bounce upon landing on it, I created a landing experience for the AAQ page.
To test if the landing page conveyed value props effectively, I put the prototype through an in-person, moderated usability study with 3 participants. Observations from usability test:
Recommendations from usability test:
Prototype #2 / A/B Test #1
Given that 2 of 3 participants immediately clicked the CTA that links to the Q&A form, I kept the form on the landing page. Since all 3 participants successfully found "How it works" in the original design, I kept that as is. I retained the hero image and added a bulleted list of value props: the hero image provides a landing experience, and a bulleted list is easy to skim.
The A/B test determined whether my design solution would actually move the key metrics: lower bounce rate and more questions asked. Observations from A/B test:
Recommendations from A/B test:
Both tests failed. Now what?
At first, I was embarrassed and didn't know how to respond. But my manager, Puja, and my colleague, Kaitlyn Schirmer, flipped the scenario around 180 degrees by asking me, "Ok, so the test didn't go as planned. Now what are you going to do?"
Bounce right back.
I asked myself if it made more sense to persevere or pivot from my design solutions. Ultimately, I pivoted because a closer look at internal data revealed an opportunity.
Key observations from usability and internal metrics:
- Users who arrive at the Ask Path from a Q&A page are twice as likely to progress through it as those who land directly on the AAQ page
- Users are inclined to click on the search bar to navigate the website
- Users like to see examples of Q&A as it provides comfort
- Users like to know what to ask based on existing questions. For example, they may want to build off of an existing question or dig deeper on one of their own
These key data points informed a new hypothesis, which gave direction to a second usability and A/B test to experiment with the idea.
To bounce back from the failed tests, I came up with a new hypothesis and a strategic plan to pivot the project.
In summary, the next strategy was to:
Give users what they want: the ability to search existing Q&A so they can learn what to expect from the Q&A service. In this experience, users would arrive at the AAQ page with a search box modal overlaying the AAQ form. The hope was that after seeing an example, users would be more likely to ask their own question, essentially completing the Ask Path.
Prototype #3 / Usability Test #2
Testing the new path in a usability study allowed me to create screens for each step in the new flow, guaranteeing that users could navigate through the modal without ever leaving the funnel. That isn't possible in an A/B test, since I can't control a live experience. The purpose of this usability test was to learn how the new flow affects the way people feel about, understand, and move through the Ask Path after seeing existing questions.
Findings from usability test:
- The flow was beneficial in helping users understand what they were doing, but their perspectives on what they were getting out of it were scattered
- Even after going through the modal, every participant clicked on "How it Works"
- Overall: Seeing the search box modal and sample Q&A beforehand did not convey a better understanding of what happens in the Ask Path
Recommendations from usability test:
- Continue the model of keeping the user in one place; no one was confused about the flow as designed
- Use plain language to describe what is going on. We learned from a card sort that people care less about stats and more about the benefit to them. Instead of saying "ask your question," consider "ask your free question"
Prototype #4 / A/B Test #2
Clicking on "Search" in the modal leads to the results page which means that the user leaves the AAQ page. Users can only return to the AAQ page if they click "back" on their web browser.
The purpose of the A/B test was to determine whether people actually use the modal, and whether they're less likely to bounce and more likely to ask questions after being exposed to a Q&A beforehand.
Data observations from A/B test:
- No significant effect on lowering bounce rate or exit rate
- More than half of the landers clicked "No thanks" to exit the modal
- About a quarter of users visited the global search page, and about half of them clicked on a search result
- High statistical confidence that global search usage increased significantly
- Number of questions asked dropped significantly
- Overall: This test failed since our ultimate goals of decreasing bounce rate and increasing number of questions asked were not met. However, it was clear that the search box is a highly used feature.
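As an illustration of the kind of significance test behind these calls, below is a minimal two-proportion z-test sketch in Python. The conversion and session counts are hypothetical placeholders, not Avvo's actual numbers.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of two A/B variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal CDF via erf
    return z, p_value

# Hypothetical counts: questions asked per AAQ lander, control vs. search-modal variant
z, p = two_proportion_ztest(conv_a=540, n_a=10_000, conv_b=450, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would mark the drop as significant
```

Running the same test on bounce counts instead of question counts is how the "no significant effect" call above would be made.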
Recommendations
This project taught me that losses lead to gains. Between these two failed experiments, I eliminated solutions that didn't meet objectives and gathered information useful for future improvements.
Given more time to work on this project, I would:
- Use plain language to communicate benefits and instructions (e.g., "Q&A is free and anonymous" instead of "10.5 million searchable Q&A," and "Get Free Answers" instead of "Search")
- Create an exit survey asking users why they bounce. This information would help us design a targeted solution instead of hypothesizing ones that may not address users' pain points and behaviors
- Continue exploring how to use global search to decrease bounce rate. It's clear that users use global search, but because our experiment led to the results page, users may have gotten lost along the path
- Continue exploring the idea of keeping users in one place because no participants were confused about the new flow
Learnings
Through this project, I learned how to:
- Bounce back from a failed test, particularly on deciding when to persevere or pivot
- Interpret data on Google Analytics and run significance tests on experiments
- Write a concise hypothesis that is testable
- Determine what goes through a usability lab vs. A/B testing
To read more about my internship experience and how I made the most of it, check out my Medium post here.