QEESI

Identifying the usability issues of a validated online questionnaire.

The Brief

QEESI is a validated questionnaire used for assessing Chemical Intolerance. It is also a self-assessment tool that patients can bring to their doctors to explain their chemical exposures and symptoms.

This is a group project in collaboration with UT Health San Antonio. Our goal is to identify the usability issues of QEESI and provide our client with feasible recommendations to improve the design of the questionnaire.

Role: UX Researcher

Duration: 3 months

Key Skills: Competitive Analysis | Usability Testing | Analyzing Test Results

Type of Project: Group project | Usability course project collaborating with UT Health San Antonio

Research Questions

Though the QEESI is a validated questionnaire, the current UI design can keep users from responding to the questions accurately and from sharing their results with their doctors. We’d like to help our client make better design decisions to solve these problems, so we wanted to find out:

  1. How do users currently take the QEESI, and what issues do they encounter while taking it?
  2. What are the greatest obstacles preventing users from sharing their results with their doctors?
  3. What feasible solutions could QEESI adopt?

We planned to use heuristic evaluation, competitive analysis, and usability testing to answer these questions.

Process

[Image: Project timeline]

Heuristic Evaluation

We conducted a heuristic evaluation to identify problems in the QEESI interface based on Jakob Nielsen’s 10 usability heuristics. Compared to usability testing, it was a much quicker way for us to recognize usability issues from an expert perspective. The evaluation gave us a holistic understanding of QEESI and its potential problems, which helped us anticipate what obstacles users might encounter during usability testing.

[Image: Heuristic evaluation findings]

We also gave each issue a severity score from 0 to 4 to determine which issues were the most serious and should be addressed first. We selected the following issues as high priority:

  1. Wordy instructions and lack of visual hierarchy

  2. Easy to lose sight of the scale while scrolling

  3. Two differently colored lines in the Symptom Star do not match users’ expectations

  4. No salient way to remind users to save their results

  5. Poor error reporting

Competitive Analysis

Having identified the potential problems of QEESI, we wanted to evaluate competitors to see how others solve the same problems, what could make QEESI stand out, and which design features could become best practices for QEESI. We divided the competitors into three categories: direct competitors, health-related indirect competitors, and non-health-related indirect competitors.

[Image: Competitor categories]

*Direct competitors - the most direct rivals in the market

*Indirect competitors - products from different industries that can provide insight into how others deliver similar concepts

The Takeaways

  1. Making good use of font size, color, and spacing to create a visual hierarchy can help users read effortlessly. Using icons and images to communicate can also help users digest the information.

  2. Using labels, color, or size on scale points helps users identify them easily and makes the scale more engaging.

  3. Providing suggestions with the results - steps to take going forward, who to contact, additional materials to read, etc.

  4. Spreading questions across a few pages and providing progress indicators gives users a greater sense of control and shows them where they are.

  5. Speaking the users’ language helps them understand their results and communicate with their doctors.

  6. Allowing users to track changes over time by creating a profile is a way for QEESI to stand out.

Usability Testing

To identify the real problems users might have, we conducted nine 45-60 minute usability testing sessions. We used BREESI, a three-question screening tool that determines whether an individual should take the QEESI, as a screener to recruit participants. Our goals were to learn how users currently take the questionnaire and what obstacles prevent them from responding to questions accurately, interpreting the results, and sharing the results with their doctors.

Participants

[Image: Participant demographics]

To engage participants with the task scenarios, we made the tasks realistic by incorporating information from the TiLT website and from our clients, including when people are typically exposed to chemical substances and experience symptoms.

Task 1

Goal: Take the questionnaire and finish the Symptom Star

You recently moved to a new apartment and experienced some symptoms, like dizziness and skin rashes. Your doctor has recommended that you take the QEESI before your appointment and wants to talk through your results at the appointment.

Task 2

Goal: Save the whole questionnaire to your computer

Now that you've finished the questionnaire, you want to share it with your doctor for further discussion at your appointment. How would you do that?

Findings

5/9 participants had trouble with the instructions, which were so wordy and long that they tended to ignore them.

“Would be better if they were broken up and not doctor, patient, researcher all together.”

“Lots of text at the beginning, would skip it, mentioning the doctor doesn’t seem relevant.”

“I wouldn’t read all the stuff before the chart.”

6/9 participants thought the 10-point scale was too much.

4/9 participants needed to scroll back up to see the labels.

“The choices are so subjective; what’s an 8 compared to a 6?”

“There are too many choices. Five would be better.”

“The numbers don’t follow you, so I don’t know which bubble is which. I would end up guessing rather than scrolling back to check.”

5/9 participants commented that the assessment was very long and that they had to keep scrolling on the QEESI.

“It’s quite long. I wish they had explained how long it would take at the beginning.”

“Oh, Lord. This is so long.”

“This page is so long, my hands got tired.”

Participants struggled to interpret the Symptom Star, and 4/9 of them thought that the example Symptom Star was their own result.

“Does that mean high tolerance or is it bad for me? Is it good or bad to have a high score?”

“I'm not really sure how to read this.”