Leading User Research in State Health Care

Mar 2018 to Dec 2018 | Deloitte Digital, serving a State Health Care Client | UX Designer

I planned and facilitated user research under tight constraints set by a skeptical client. By recruiting members of our development team, account team, and the client's staff as notetakers, I worked around an understaffed research process while building empathy for the end user across every team. This mended our digital team's relationship with the client and shifted development to mobile-first.

Project Information

I was tasked as the lead UX Designer to help build a digital product that would allow citizens of a state to log work activity hours in order to maintain their Medicaid benefits.

One of the biggest concerns entering this project was the lack of user research. Our primary client, “L,” did not fully trust Deloitte Digital’s work due to past interactions with a former team. Rather than letting us lead the design process through qualitative studies with the end user, L believed she was the final authority when it came to knowing her state’s citizens. Consequently, our role was originally what Dr. Hartson, in The UX Book, would call a “priest in a parachute.” We were expected to simply “bless” the design ideas proposed by her team after a couple of cosmetic updates to the UI.

To make things more complicated, we had to work through an account team to communicate with the client. With such limited access to our key stakeholders, getting buy-in for user research would be difficult.

Building Trust

To start off, there was a lack of design ownership. L’s dev team had already created prototypes. Consequently, even when we provided heuristic evaluations backed by empirical studies on some of the design decisions, it was hard to attain buy-in. Before anything else, we needed to take control of the design process by building trust.

Recreation of a wireframe (originally created in Powerpoint) by the dev and account team.

I originally cited UI best practices through NN Group articles, HCI books such as About Face, and more to move away from the nested-accordion pattern (“accordion-ception”), but ultimately I realized I was speaking a language unfamiliar to the client. Simply pointing to design-forward companies such as Amazon or Apple made evaluations more visual and easier to understand and trust.

I switched to communication channels the client was more receptive to. Rather than giving feedback based on my personal work experience, I referenced best practices through well-known examples instead.

After building up credibility with the account and development teams, we were given the opportunity to meet with the client in person to present updated wireframes based on our feedback. By explaining the new designs through examples relevant to client L, we were able to demonstrate our knowledge and build rapport. However, the client was still not sold on the idea of conducting research, as she felt she already knew her end users well. Rather than pushing back, our team pursued value propositions more relevant to her by referencing similar state projects that had been taken to court for ignoring citizen feedback. We argued that by conducting even a small amount of research, we could mitigate the risk of being criticized post-release by advocacy groups for not listening to citizens.

In response, we were given the opportunity to interview twelve users within a limited time frame of 15-20 minutes per interviewee. With any traditional interviewing method, this would be difficult to pull off.

Preparing for the Interviews

Despite the short time frame for interviews, our team was fortunate to be given access to interviewees from an existing state aid program. These interviewees were relatively homogeneous (young, single mothers), and they represented a key group of end users for the product we were designing.

Because the sample was homogeneous, I knew we could make the 15-20 minute sessions work by dividing our research questions into subsets and assigning each subset to a different cohort. To do this, however, I would need more people on my team: interviewing one-by-one would yield only around one cohort (4-5 short interview sessions), which would be sufficient for only one subset of research questions.

To solve this, instead of having two experienced practitioners interview and take notes together, I split our team into three groups and recruited notetakers from teams outside our own. There would be three experienced interviewers (including myself), each paired with a notetaker we would train ahead of time. By recruiting notetakers from the dev team, account team, and client, our team let them observe the importance and value of the research process firsthand.

I prepared a list of research questions and shared it with the broader team, including the client, for suggestions and review to secure buy-in on our mission ahead of time. I then divided the questions into 3 separate categories. Each category would be covered by 1 cohort of 3 interviewees, leaving 1 cohort as a buffer zone for any new questions that arose during the first 3 cohorts.
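The cohort plan above can be sketched in a few lines of code. This is purely illustrative (the interviewee IDs and category names are hypothetical), but it shows the arithmetic: 12 interviewees split into 4 cohorts of 3, with 3 cohorts mapped to question categories and the last held back as a buffer.

```python
# Illustrative cohort plan: 12 interviewees in cohorts of 3.
# Cohorts 1-3 each cover one category of research questions;
# cohort 4 is reserved as a buffer for questions that emerge mid-study.
interviewees = [f"P-{i:02d}" for i in range(1, 13)]  # hypothetical IDs

cohort_size = 3
cohorts = [interviewees[i:i + cohort_size]
           for i in range(0, len(interviewees), cohort_size)]

categories = ["Category A", "Category B", "Category C", "Buffer"]
plan = dict(zip(categories, cohorts))

print(plan["Buffer"])  # → ['P-10', 'P-11', 'P-12']
```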

I also created a single documentation source and guide for our team to use in an Excel spreadsheet. The pages were divided by purpose and gradually evolved throughout the lifecycle of the research. It started with four pages:

  1. Research Questions. A summary of what our goals were for the interview.
  2. Interview Guide. Created for the interviewers to reference when conducting their interviews. Included an introduction script to the interviewee, general context questions, and the research questions divided by cohort and in more detail with logic paths.
  3. Interview IDs. A repository for all of the contextual background information of our interviewees. The notetakers would fill this in based on the responses to the introduction questions.
  4. Raw Interview Data. Created for the notetakers to log data from the interview. Included five columns:
    • Unique Note ID. (To be filled in after the interviews) This would be used to reference back the original raw data point.
    • Interviewee ID. Formatted as I (Interview) – interviewee initials – interview number, e.g. I-JK-1, as a means of tracking the individuals present in the room.
    • Question. The question posed by the interviewer.
    • Notes. The response given by the interviewee in their unfiltered language.
    • Comments. Any emotional, facial, or other cues worth noting.
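As a minimal sketch of how the notetaking schema and ID convention fit together (the field names and sample values here are illustrative, not the exact spreadsheet headers), the columns above map naturally onto a simple record type:

```python
from dataclasses import dataclass

def interviewee_id(initials: str, interview_num: int) -> str:
    """Build an interviewee ID in the I-<initials>-<n> format, e.g. I-JK-1."""
    return f"I-{initials}-{interview_num}"

@dataclass
class RawNote:
    note_id: str          # unique note ID, assigned after the interviews
    interviewee_id: str   # e.g. "I-JK-1"
    question: str         # question posed by the interviewer
    notes: str            # interviewee's response in their own words
    comments: str = ""    # emotional/facial cues worth noting

# Hypothetical example row from the Raw Interview Data page.
note = RawNote(
    note_id="N-001",
    interviewee_id=interviewee_id("JK", 1),
    question="How do you currently report your work hours?",
    notes="I usually call the office because the website confuses me.",
    comments="Frustrated tone when mentioning the website.",
)
```

Keeping the note ID separate from the interviewee ID is what later allows every finding to be traced back to a specific raw data point.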

Spreadsheet Pages: Research Questions, Interview Questions, Interviewee Information, Raw Notes

Waivers were prepared and mailed to the interview site ahead of time.

Working toward the interview date, I spent half a day training the notetakers and reviewing with the interviewers, including dry runs with local team members. To learn a little more about how I conduct interviews, you can reference an article of mine on the process.

The one major caveat to the norm was that the interviewees came from very sensitive backgrounds. Consequently, I knew it would be difficult to build enough credibility to draw out important information within the time frame. The most important point I stressed to the other interviewers and myself was to be empathetic and flexible.

Conducting the Interviews

Upon arriving at the interviewing site, we were given a fairly large classroom that had an additional testing room/closet within it. I once again briefed the team on our overarching goals, with quick reminders on interviewing and notetaking conduct.

Every notetaker had a fully charged laptop with the Raw Interview Data spreadsheet open. Each interviewer had a stack of waiver forms, in the event an interviewee had not signed off ahead of time, and a paper guide of research questions split up by cohort. To prevent groupthink or interaction between the interviews, I split the classroom in half, with one interview session held at either end. The third would be conducted within the testing room.

Schedule-wise, we would continue as planned, with each cohort of 3 interviewees using the allotted 20-minute time frame. At the end of each cohort, I planned to hold a brief 10-minute debriefing session as an opportunity for the team to sync and pivot if necessary.

The interviews ended up being an incredible success. Thanks to the buffer times, we were able to quickly pivot and focus the fourth cohort on the most valuable questions, including newly discovered ones. More important than a smooth collection process, all parties came together in a final debrief with newfound empathy for our end users and some critical discoveries. I was keen to remind the team that ideation and finding solutions would come later, which allowed the debrief session to flow more freely as a means of identifying existing behavior patterns.

Post-Interview Analysis

Here’s the fun part. I took the >500 lines of raw interview data compiled across the three teams and got to work on the contextual analysis phase of the process. This meant building onto the Excel spreadsheet from earlier with the following pages:

  1. Processed Work Activity Notes. Raw interview notes processed and filtered into easily understandable work activity notes. I included a topic column as a tagging mechanism, along with the original note IDs, to trace each note back to its source if needed and to keep all data accountable.
  2. Contextual Stories. Since the domain was not especially complex, all relevant parties had participated in the interview process, and we were on a short timeline, I organized the work activity notes into contextual story groups digitally instead of through a physical WAAD (work activity affinity diagram).
  3. Findings. A final summary of the key takeaways from the research to present to the client. Statistics were given in plain terms (e.g. “10 out of 12”) instead of potentially deceptive percentages (e.g. “83%”). I also provided explanations for the anomalies, with original note IDs, to make it clear to the client that nothing was subjectively motivated.
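The analysis pipeline described above (tagged notes grouped into stories, reported as traceable plain counts) can be sketched as follows. The note IDs, topics, and counts here are all hypothetical placeholders, not the project's actual data:

```python
from collections import defaultdict

# Hypothetical processed work activity notes: (note_id, topic, note text).
processed_notes = [
    ("N-001", "phone access", "Uses a smartphone as her only internet device."),
    ("N-004", "phone access", "No home computer; shares a phone with family."),
    ("N-007", "work schedule", "Hours vary significantly week to week."),
]

# Group notes into contextual-story buckets by topic, keeping note IDs
# so every claim in the findings can be traced back to raw data.
stories = defaultdict(list)
for note_id, topic, text in processed_notes:
    stories[topic].append(note_id)

# Report findings as plain counts ("2 out of 3") rather than percentages.
total = 3  # hypothetical number of interviewees represented
for topic, ids in stories.items():
    print(f"{topic}: {len(ids)} out of {total} (notes: {', '.join(ids)})")
```

The key design choice is that each finding carries its source note IDs, which is what makes the summary auditable rather than subjective.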

Spreadsheet Pages: WAN (Work Activity Notes), Contextual Stories



Despite the constraints that made user research seem nearly impossible at the beginning, I was able to capitalize on this small opportunity to make a data-driven case to the client for the value of our team’s work. By involving members of the account team, dev team, and client, I strengthened my design team’s credibility and gained buy-in even before our official presentation of findings.

Important wins included switching development from a desktop-first to a mobile-first process, more involvement and leadership in the design process, a sense of shared empathy for end users across all teams, a renewed relationship with the client, and access to further user research and testing.