Ontario Digital Service Project | 1 month | Remote Research

Methodologies and Artefacts:

Interviews

Study Design

Usability tests

2 Minute Summary

I completed this project while working as a User Experience Designer for the Ontario Digital Service. We were tasked with usability testing a portal that Ontarians use to apply for various awards provided by the provincial government. Our role included narrowing the scope of the testing, identifying relevant user groups and related flows, recruiting participants for usability tests, writing a facilitation guide and running the usability tests to improve the platform. This case study focuses specifically on my work conducting usability tests. You can find a similar project that focuses more on interviews as a form of research here.

Introduction

The Ministry of Heritage, Sport, Tourism and Culture Industries (MHSTCI) came to us to test the usability of a portal used to apply for 17 different awards. They also wanted to test the usability of a portal used to review applications for these awards. Each award had its own target user group and application process through an already developed online form. Some of the target user groups for award application flows were broad (e.g., the general Ontario public) and some were very specific (e.g., parents of Indigenous teens). Given the diverse target audience for this platform and the short timeline for this project (one month), we started by narrowing the scope of the project.

My Role

Some of the aspects of my role included narrowing the scope of the testing, identifying relevant user groups and flows, writing a facilitation guide, recruiting participants, running the usability tests and synthesizing the findings.

Narrowing the Scope of the Project

In order to narrow the scope of this project, we had to start by understanding the extent of the prototype we had been given and what our clients expected from us by the end of the engagement. Understanding the prototype included talking about how users were accessing the portal, who the users were, what problems they had run into when submitting applications in the past, how applications had been submitted previously, and so on. All of this information came from talking to subject matter experts ahead of talking to actual users during the usability tests. Our subject matter experts were the program administrators who ran all of the awards programs: they had experience reviewing applications, running the end-to-end awards process and helping applicants with any issues they came up against. The program administrators were also the clients for this project. They helped us gain the solid background knowledge needed to come up with questions that made sense during our usability tests. Some key takeaways were that:

The Miro board used during project kickoff. We used this board to get an understanding of our clients' ideas for the project.

Choosing User Groups and Flows

To choose which flows to test, we consulted our subject matter experts and grouped the 17 awards into smaller segments. Our groupings were based on similarity or complexity in application flow, similarity in user groups, and statistical data identifying the most commonly applied-for awards. We wanted to test with the fewest awards possible while still having a high impact.

Grouped flows by complexity, similarity and whether individuals or organizations were applying.

Based on these groupings, we were able to narrow our research to focus on 7 flows.

Writing a Facilitation Guide

In order to write a facilitation guide to use while usability testing, we first had to understand all of the flows we were going to test at a granular level. Our usability testing focused on the end-to-end user experience of applying for an award through the portal. This meant we looked at content, interactivity, accessibility and other important aspects of user interfaces that have an impact on UX.
We began by walking through and mapping all of the flows in Miro.

Flow maps for each award.

Then we broke down what made each flow unique and added notes about how best to test features of the application. For example, some flows had pages that could not be edited once information was entered for the first time. We wanted to make sure this technical limitation of the system was communicated so that it had minimal negative impact on the user.

Noting the nuances of each flow so that we could include award-specific questions in our usability testing.

In our facilitation guide, we included a question that indirectly asked users whether they noticed that they could not change the information once it was entered. If the user did not verbally communicate that they noticed this, we asked them to tell us more about the information on the page before continuing on. This question was specifically designed not to be a leading question. Each testable award in the portal had its own facilitation guide.

Our guides also included flows beyond just applying for an award: we looked at how users would navigate to help, whether the homepage made sense, and how applications could be withdrawn or edited. These flows were universal to the facilitation guide for each award.

Screenshots of our facilitation guide for the usability tests.

Lastly, for this project we decided to create personas for usability test participants to emulate while going through the flows. Personas were provided to participants before the usability test and contained filler information to be used while testing the forms. These were drafted along with the facilitation guide. (Fun fact: our user research plan, including all the facilitation guides, was 50 pages long!)

Screenshots of the persona used during usability tests.

Recruiting Participants

Recruitment for this project consisted of drafting recruitment materials, posting them for outreach and scheduling participants. We also made sure to have content such as consent emails for usability testing and scheduling emails ready to go.

We drafted general social media posts, tweet-length messaging and email copy. These channels were specifically chosen to reach the appropriate target audiences and to support outreach done by the client team. All outreach materials included a link to a screener designed to give us a solid understanding of target users prior to usability testing with them. We wanted to know whether potential testers had experience with the awards portal, how they had learned about the portal, how often they had submitted applications in the past and which user group they fell into. The screener also included an optional set of demographic questions used to assess the diversity of the group we tested with. Lastly, the screener gave respondents the opportunity to share their availability for usability testing.

For each of our seven flows we wanted to have two to three testers. There were a couple of rationales for this:

We were able to recruit 10 participants within the given time frame. These respondents were sent consent forms (consent to record and consent to participate) and were scheduled for a usability test.

Running Usability Tests

Prior to facilitating usability tests with actual users, we scheduled a dry run with a co-worker. This allowed us to make final tweaks to the facilitation guide and get a little practice in.

Our main focus while running the scheduled tests was to avoid asking leading questions and to allow users to explore the pages as much as possible. We did a lot of probing into why users decided to interact in certain ways. We also reminded users to think out loud and to be as open as possible with their answers. Notetaking for the sessions was done in Miro so that we could affinity diagram our notes afterward.

Screenshots of some of our notes for usability testing sessions.

Synthesizing Findings

We synthesized our findings by affinity diagramming our notes and identifying common themes. Due to the tight timeline for this project, our clients requested that we organize findings by priority. Our last step was doing just that and creating a few prototypes for the findings that were harder to communicate textually. Our prototypes were made to follow the Ontario Design System (which I helped develop - case study pending).

Iterating on feedback mechanisms used for file upload.

Some of our findings included improving feedback mechanisms and clarity around what had been uploaded and what could be uploaded. We used elements of the Ontario Design System, like callout boxes, to make it clear when a file had been uploaded and how large that file was. We also improved the page by making the calls to action (buttons) clearer and easier to find.
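To make the recommended feedback pattern concrete, here is a minimal TypeScript sketch of the behaviour, purely illustrative: it is not the portal's code or the Ontario Design System's actual components, and the element IDs, size limit and messages are all hypothetical. The point is simply that the moment a file is chosen, the user sees its name and size (or a clear error) in a visible callout.

```typescript
// Minimal sketch only - not the portal's code or the real Ontario Design System markup.
// Idea: give users explicit feedback (file name + size) as soon as a file is uploaded.

const MAX_FILE_SIZE_MB = 10; // hypothetical limit, for illustration only

function formatSize(bytes: number): string {
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}

function showUploadFeedback(input: HTMLInputElement, callout: HTMLElement): void {
  const file = input.files?.[0];

  if (!file) {
    callout.textContent = "No file has been uploaded yet.";
    return;
  }

  if (file.size > MAX_FILE_SIZE_MB * 1024 * 1024) {
    // Surface the problem immediately instead of failing silently on submit.
    callout.textContent = `${file.name} is larger than ${MAX_FILE_SIZE_MB} MB. Please choose a smaller file.`;
    return;
  }

  // Clear confirmation of what was uploaded and how large it is.
  callout.textContent = `Uploaded: ${file.name} (${formatSize(file.size)})`;
}

// Hypothetical wiring: a file input and a callout element assumed to exist on the page.
const fileInput = document.querySelector<HTMLInputElement>("#supporting-document");
const uploadCallout = document.querySelector<HTMLElement>("#upload-callout");

if (fileInput && uploadCallout) {
  fileInput.addEventListener("change", () => showUploadFeedback(fileInput, uploadCallout));
}
```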

Iterating on the existing location selection mechanism.

Another improvement had to do with adjusting the system's method of choosing an address. The default address input disabled some input boxes and had an unusual flow that required users to start by entering their postal code and searching. Our redesign made it clear what needed to be done first (the search) and got rid of the greyed-out, disabled input boxes that many users found confusing during testing. Removing the greyed-out fields also alleviated accessibility issues.
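As a sketch of the "search first" pattern described above - again illustrative only, with a made-up lookup function and element IDs rather than the portal's real implementation - the address fields stay hidden until the postal code search has run, instead of sitting on the page greyed out and disabled.

```typescript
// Illustrative sketch only - not the portal's actual address lookup.
// Pattern: ask for the postal code search first, then reveal (rather than un-grey) the address fields.

interface AddressResult {
  streetAddress: string;
  city: string;
  postalCode: string;
}

// Hypothetical lookup; a real implementation would call an address lookup service.
async function lookupAddresses(postalCode: string): Promise<AddressResult[]> {
  return [{ streetAddress: "123 Example St", city: "Toronto", postalCode }];
}

async function onPostalCodeSearch(postalCode: string): Promise<void> {
  const addressFields = document.querySelector<HTMLElement>("#address-fields");
  if (!addressFields) return;

  const results = await lookupAddresses(postalCode.trim());

  // Reveal the address fields only after the search step is complete,
  // instead of showing disabled inputs up front that users found confusing.
  addressFields.hidden = false;

  const street = document.querySelector<HTMLInputElement>("#street-address");
  const city = document.querySelector<HTMLInputElement>("#city");

  if (results.length > 0 && street && city) {
    // Pre-fill from the first match; users can still edit the fields manually.
    street.value = results[0].streetAddress;
    city.value = results[0].city;
  }
}
```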

Wrapping Up

We wrapped up this project by presenting our findings to the project team. Here is some of the feedback we received from our partners through a post-engagement survey.

“Engagement was very professional and well executed”
“Thank you for all the hard work that was put into the testing phase of the new portal”

Lessons Learned

But Wait... There’s More! (Mini-Case Study)

I did a similar (but not identical) project with the Ministry of the Attorney General’s (MAG) Ontario Land Tribunal (OLT). With MAG, I focused on the interviewing aspect of user research. Recruitment, outreach, drafting a facilitation guide, understanding the problem and facilitating were quite similar across the two projects. The differences lay in the purpose of the research and the framing of the questions asked.

While usability testing for the MHSTCI awards portal, we were building off a given prototype and already-validated user needs; as such, questions were framed around confirming that user needs were met through our interface and identifying any gaps. For MAG's OLT, we started with a more complex pre-existing website and had to identify user needs from scratch. We interviewed to gauge user behaviours, attitudes, perceptions and pain points around the existing website, to try to understand key aspects of its structure. We also pulled up the website during interviews so users could reference it while talking - not so that we could evaluate specific aspects of the website. In short, the interview work was generative, whereas the usability testing work was evaluative.

Screenshot of the research plan for MAG's OLT interviews.

For context, MAG’s website was in need of help because they had to haphazardly combine 5 (yes, 5!) websites into one when the Ontario Land Tribunal was formed. Aside from interviewing to learn about past user experiences, we also provided some guidance on how the existing website could be improved and did some prototyping.
