Redesigning a Service Portal and Its Team's Culture

Apr. 2018 | Deloitte Digital, serving a government agency | UX Designer

In a little over 3 weeks, I was tasked with conducting brief user research on a government agency's service portal run by a team of analysts. I took my duty one step further by working to encourage the adoption of design methodology into the team's work. By the end of my stay, the full team had experienced a paradigm shift towards human-centered design, research had been extended to address a major neglected user group, and work had begun on a design system.

1. The Situation

As a UX Designer doing consulting in the government space, I sometimes face people and project problems more than interaction design problems. The worst scenario is being “marooned” on what Dr. Hartson calls a “priest in a parachute” assignment.

“In the early days of usability it was often assumed that a usability practitioner was needed only in small doses and only at certain crossroads within the project… they played a secondary role, something like a ‘priest in a parachute’: the human factors engineer dropped down into the middle of a project and stayed just long enough to give it a blessing. Anything more than a few minor changes and a blessing was, of course, unacceptable at this point because the design had progressed too far for significant changes.” – The UX Book, pg. 75


Those “early days” of usability are often “present day” scenarios in government work. In this instance, I was onboarded for only three weeks as a design team of one to conduct user research on a ServiceNow portal for an agency. This was meant to support a team run by five analyst-level contractors. My work was expected to serve as a “nice to have” source of minor UI design changes rather than a process-changing intervention for a product already on the path towards a second “facelift” release within two months.

At this juncture, I could have opted to simply complete the expected task and move on: interview several users and write a report for the next person to use or ignore. The team had already begun looking for a visual designer to start mockups, so at first glance my findings would have little major impact. Furthermore, the team had no clear sense of leadership. There were no user stories, no research-based decisions, and no day-to-day managers staffed on the project. I discovered later, in a conversation with one of the analysts, that they had thought of design as a subjective, unnecessary step. My real job was to change that perspective.


1.1 The Goal

I have learned to be pragmatic on the job, but I have retained my principles towards good work. Despite being a priest in a parachute, I knew I could try to take an extra leap towards long-term change; call it being an evangelist in a parachute. I gave myself two goals for the end of my three weeks:

  1. Provide a research report to create a list of minor design changes. (The client's expectation)
  2. Use the research as a springboard to create a paradigm shift in the development team towards human-centered design for longer lasting and deeper impacts. (My personal expectation)

1.2 The People Breakdown





2. Conducting Research, Week 1-2

I was given only three weeks to achieve both of my goals as a design team of one. Even accomplishing the first goal of conducting research and preparing a report would be near impossible if I took a stringent best-practice approach à la Elizabeth Goodman. I knew I had to be flexible and adapt on the fly. Furthermore, I knew that if I wanted any shot at accomplishing my second goal, I would need to demonstrate competence to earn credibility for my process (this can be a struggle as a 24-year-old who gets mistaken for a high schooler at times). To do so, I decided to start by quickly familiarizing myself with the situation.


2.1 Heuristic Evaluation, Week 1

I have found heuristic evaluations, in tandem with conversations with developers, to be a great way of getting to know a product in more detail beyond the kickoff meeting. A heuristic evaluation would also fit nicely into my strategy of both demonstrating my competence and introducing developers to the value of design.

I took a small risk by using UI Tenets and Traps as the basis of my heuristic evaluation for the first time, instead of the usual hackneyed Nielsen Norman heuristics, for a couple of reasons. I reasoned that the deck-of-cards format, introductory language, and easy-to-understand visual examples would be significantly more appropriate for the analyst-level team I was evangelizing design to. Furthermore, given that it was developed and applied at Microsoft, it could serve as a fantastic bridge for communication and trust thanks to its associative familiarity.

By the end of the heuristic evaluation in the first week, I was able to prepare two reports for the team to digest. One was an extensive page-by-page evaluation that provided immediate recommendations for short-term fixes (addressing goal #1); the second was a client-tailored presentation that highlighted high-level issues resulting from the absence of human-centered design (addressing goal #2).


I created a template in Sketch using symbols to translate an Excel version of the heuristic evaluation into a report for the developers (left). I then summarized the important, high-level findings in a PowerPoint deck to show the client (right).



The reports would also be invaluable for my own purposes in preparing research questions to inform my approach towards the user interviews.

A note on this: I never like doing heuristic evaluations alone. I am fully aware that they are meant to be a group activity between experienced HCI experts to mitigate one another’s innate biases. Despite this, I still think having one expert evaluation is better than none, which is why I undertook the task. In hindsight, I would have liked to spend extra time getting it reviewed by peers in my studio.


2.2 User Interviews, Week 1-2

Typically, my minimum sample size for interviews is 12. Realistically speaking, if I wanted to conduct 12 interviews within a 3-week project that included pulling together a report, I would need to begin scheduling them immediately. Through conversations with the development team and client, I honed in on two separate user roles: requestors, who request a service, and approvers, who approve those requests on the ServiceNow instance. I asked our client to provide a small list of names from both user roles to begin with, as opposed to a fixed sample. I did this with the intent of using snowball sampling to rapidly uncover hidden populations and discover relationships and networks between employees in the portal.
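
For readers unfamiliar with the mechanic, here is a minimal sketch of the snowball loop (the `get_referrals` helper and all names are hypothetical, not project code):

```python
from collections import deque

def snowball_sample(seed_names, get_referrals, target=12):
    """Grow a participant pool from a small seed list: each interviewee
    can refer further names, surfacing hidden populations and networks
    that a fixed sample would miss."""
    queue = deque(seed_names)
    seen = set(seed_names)
    interviewed = []
    while queue and len(interviewed) < target:
        person = queue.popleft()
        interviewed.append(person)          # conduct and log the interview
        for name in get_referrals(person):  # names surfaced during it
            if name not in seen:
                seen.add(name)
                queue.append(name)
    return interviewed
```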

During this time, I worked closely with one of the analysts who functioned as the project’s product manager (PM). I leaned on him heavily to schedule those initial interviews as I completed the heuristic evaluation and began writing the user research guide and consent form. He was delightfully engaged and was able to secure our first interview at the end of the first week. I asked him to join me for the interview as a notetaker. I hoped that by including him I would be able to not only demonstrate the value of qualitative research, but also build empathy by having him actively listen to the end user. I used the time we spent together riding in Ubers to the client site to gather valuable contextual information and explain the benefits of human-centered design.



Raw notes were transcribed in an Excel document by the analyst who participated in the interview, to further familiarize them with the end user. To ensure interviewee privacy, I used interview IDs instead of names. IDs combine the interviewers' first initials with the interview number, e.g. I-JA-1: I (Interview), J (Josh), A (Alan), 1 (interview #).
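
As a quick illustration of the scheme, a hypothetical helper (not actual project tooling) that builds those IDs:

```python
def interview_id(interviewer_initials: str, interview_number: int) -> str:
    """Anonymized interview ID per the scheme above,
    e.g. interview_id("JA", 1) -> "I-JA-1"."""
    return f"I-{interviewer_initials}-{interview_number}"
```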



This turned out to be an incredible success. Our PM analyst became the first of the team to fully believe in human-centered design as an important step in development. With his help, I was able to scale this process to the other analysts on the project. I ended up training 3 of the 4 developer analysts to become user interview notetakers in the second week, and in the process gave them full exposure to their end users for the very first time. By the end of our research, all of our developer analysts were beginning to speak in the voice of the user.

Despite achieving a major win for goal #2, I was unable to secure enough interviews to meet my minimum requirement for researcher peace of mind (only 8). In fact, halfway into the second week I hit an “aha” moment: we were only interviewing a small piece of the puzzle. Our client had selected the initial interviewees based on their frequency of portal use, and as a result, we ended up talking mostly to liaisons who submit hardware/software requests on behalf of the average Joe employee. Because the team had always relied on quantitative data, identifying their end users by intensity of usage, we had missed an entire population of users who had abandoned the portal due to its difficulty of use. I noticed this in our qualitative research because the liaisons often observed that the average Joe employee would opt to call the help desk instead of submitting their own tickets. Those observations, in tandem with new names from snowball sampling, allowed us to confirm this fact when we later interviewed our first beginner non-liaison user.

This was really bad, and I was already running out of time by the end of the second week. I was politely asked by the manager to wrap up the findings by the end of the third week, which left me in a tricky situation: how would we be able to cover potentially hundreds of unrepresented users without a time extension?

Area for improvement: Had I known earlier that we had access to quantitative data on portal usage, I would have sampled interviewees by extreme outliers. This would have netted the beginner users who quit the portal early in the interviewing process rather than late. In hindsight, I should have spent a bit more time collecting context before diving into research.




3. Analysis and Report, Week 3

I knew I had to be pragmatic and complete the report in a timely manner, but I didn’t want the higher powers to jump straight to hiring a visual designer to create mockups without doing the needed research. Luckily, at this point I knew I had full support from the analyst team, who now had faith in my process after seeing it first-hand. The challenge would be convincing our manager and our client to get on board with extending the time for research as well. To do so, I wanted the report I was preparing to be both professional and grounded in the user’s voice.


3.1 Contextual Analysis

I did not have the resources or time to complete a full physical WAAD (work activity affinity diagram), but I still wanted to put together a strong, data-inferred analysis. I opted to use a digital WAAD (adapted from Lextant), which I have used several times in the past in time-crunch situations. In an Excel document, I converted the raw notes transcribed by our development team into work activity notes (WANs). I then flagged the work activity notes by key terms, topics, and behaviors in a separate column. Upon completion, I used this column to infer contextual stories and provisional personas. These would directly inform the findings highlighted in my report.
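
To make the workbook mechanics concrete, here is a minimal sketch of that flagging step (the notes, column names, and keyword lists are invented for illustration; the real work lived in Excel):

```python
import pandas as pd

# Two example work activity notes (WANs), invented for illustration.
wans = pd.DataFrame({
    "wan_id": ["I-JA-1 W1", "I-JA-2 W3"],
    "note": [
        "Calls the help desk instead of submitting a ticket",
        "Unsure whether 'Get Help' or 'Find Answers' applies",
    ],
})

# Key terms/topics/behaviors to flag, kept deliberately small here.
FLAGS = {
    "workaround": ["help desk", "calls", "email"],
    "navigation": ["get help", "find answers", "menu"],
}

def flag_note(note: str) -> str:
    """Tag a WAN with every matching topic/behavior flag."""
    text = note.lower()
    return ", ".join(flag for flag, terms in FLAGS.items()
                     if any(term in text for term in terms))

wans["flags"] = wans["note"].apply(flag_note)
print(wans)  # the flags column then drives contextual stories and personas
```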


I kept one master file as a source of truth for user interviews and analysis. All contextual stories were composed of WANs (work activity notes, ID'd as I-xx-x Wx), which are cleaned-up versions of the raw notes. Everything is linked to a qualitative source to ensure the voice of the user is retained and trackable.



I introduced three provisional personas to the client in the report: the Middle Man (liaisons), the Specialist (highly technical approvers) and the Uninformed (average Joe employees). They were provisional for two reasons, which I wanted to communicate to the client. First, there were not enough interviews (8) to fully uncover the breadth and depth of user frustrations and goals. Second, personas are only personas when there are enough observed behaviors to create a proper user model. As we were not able to interview enough users, the team risked developing solutions that would address an incorrect audience.

Despite these provisional personas being “incomplete,” they were still inferred from real data. To motivate a shift towards more research, I used quotes from the interviews to appeal to our client's emotional elephant, e.g. “I wasn’t even aware that you had to go to the portal.” I also used the Uninformed provisional persona as a means of theoretically representing the average Joe employees who were unaccounted for initially in development. This was done by stitching together the liaisons' observations of those employees and the one interview we had conducted with a user of that type. The opportunities I highlighted for them focused on additional research as a means to longer-term goals rather than quick fixes, e.g. conducting card sorts in order to rearrange service categories.

This gave me the leverage to make the following argument: we discovered a lot of symptoms of deeper systemic problems with just a little bit of research. With more research, we can confidently address those problems and design longer-lasting solutions that save money and cover the hundreds of users who were unaccounted for originally in development.


I opted to use sketched profiles of the provisional personas instead of real-life photos to emphasize the need for more research. Large quotes with trackable WAN IDs summarized major concerns.



3.2 Recommendations vs. Immediate Tasks

I wrapped up the report with recommendations and immediate tasks to address my goals. Recommendations would cover my greater goal of creating a paradigm shift towards more design and research in development, while immediate tasks would cover what I was originally tasked to do.

Throughout the recommendations, I kept to my strategy of appealing to the client’s elephant by keeping the user’s voice transparent and prominent in the report. For example, I suggested conducting an IA and content audit by showing him artifacts I had procured from IA card sorts during the research. He was surprised to see that even expert users struggled to make sense of page titles (including the top navigation items).

I used my findings from the heuristic evaluation and research to inform the immediate tasks portion of the report. Before presenting these tasks, I caveated that they were not to be considered substitutes for the long-term recommendations. To retain the voice of the user, I continued to use quotes from the research along with their identifiable WAN tags. This came in handy in disproving the client’s assumptions about user mental models. In fact, while presenting this report, the client commented that one of the immediate tasks would not be a helpful fix to him. Needless to say, it was incredibly satisfying to pull up several raw transcript quotes on the spot to represent the user’s perspective in response.


I used artifacts from user research to better champion the user's voice. For example, I brought cards with end users' written comments on the back explaining their confusion towards the navigation of the portal.





4. Overtime Research and Prototyping, Week 4

The report was successful in winning more time for research and encouraging the team to seek out a new full-time UX designer to replace me. Although I was already onboarding onto my next project, I stuck around for an extra week to help interview more beginner users and to rapidly prototype and test wires for the purpose of handing off a Zeplin document to the developers.

As soon as we were given the green light to conduct more research, our PM analyst immediately began scheduling interviews with new hires and infrequent users of the portal. We hoped that by targeting these groups, we could fill the beginner-user gap caveated in our prior research. During this time, I also rapidly designed a new wireframe of the portal based on our earlier findings. I reasoned we could use the latter half of our interviews to conduct in-situ prototyping à la John Whalen’s contextual interview method, to uncover richer findings and test my design hypotheses.


4.1 Preparing for In-situ Prototyping

There were several design hypotheses I wanted to test during this additional week of interviews. To do so within such a limited period of time, I took inspiration from John Whalen’s recent book “Design for How People Think.” Dr. Whalen recommends presenting several versions of a prototype during early tests. This way, users can better articulate which parts of a prototype they really like, and further unmet needs or UI nuances that might not have surfaced earlier can be revealed. I created three prototypes of the portal to start, all in greyscale: the original portal design, a portal the client liked (which was “pretty” but not necessarily functional) and a portal designed based on the findings of our initial research.

In my first pass at the research-informed design, I cleaned up language to be both less technical and more actionable. I also gave each card a unique icon, as opposed to the original's gratuitous redundancy of Font Awesome laptop icons. To follow the tenet of recognition over recall, I reduced the height of the search bar to give more visibility to recommended services above the fold. Since even advanced users were confused between “Get Help” and “Find Answers,” I folded the knowledge base under “Get Help” and added iconography. To cater to many users' goal of saving time, I also introduced a favoriting feature.

4.2 Contextual Interviews and In-situ Prototyping Findings

The first user we interviewed was the equivalent of a researcher’s El Dorado. From our conversations alone, he gave us incredibly rich information on why he had quickly given up on the existing portal. He said he struggled to understand the language of the portal and how to navigate it when he attempted to order a mobile phone, and that he had stopped using it in favor of calling or emailing the help desk directly.

I used our conversations to inform my approach to our comparative in-situ prototyping by giving him a task he had attempted earlier on the original portal: order a mobile phone. As before, he was unable to place the order on the original portal or on the client’s suggested portal. He was also, to my delight, only “50%” confident on my first pass at an improved design. During the in-situ prototyping exercise, I realized that although I had made the technical jargon more understandable, it still was not in the user’s language, even with the icons. It seems obvious in hindsight, but “ordering hardware” is not nearly as clear as a simple “order a mobile phone.”


I asked users to think aloud and use sticky notes to add wanted features. This acted as a catalyst for follow-up "why" questions during in-situ prototyping activities and made for valuable artifacts to take back to the client.



By the end of the week, I was able to make several important new recommendations based on the 4 additional contextual interviews with beginner users. I packaged these up into a Zeplin document, which I passed on to the development team for future reference.

  1. I removed the icons from all of the cards and focused on improving content and categorization. As there were hundreds of services, I reasoned it would be better for the team to invest in mastering the user’s language before introducing graphics, given the short timeline. The team would also be able to pivot faster, as wording is easier to test with users (graphics can be a nebulous and subjective space; a seasoned visual designer is needed for this). In any case, graphics can always be added later once the content is less volatile.
  2. I recommended introducing “recently used services” as opposed to favoriting to better anticipate the needs of more advanced users without the extra clicks.
  3. I introduced “popular services for employees like you” to the portal as opposed to “popular services” (which spanned across all employee types) to reduce mismatched recommendations and to encourage more confident action.
  4. I re-introduced "Find Answers", but with improved iconography to differentiate it from "Get Help" and to address needs of both older users who wanted access to a phone call and technologically advanced users who wanted to view a knowledge base. I caveated this to the PM analyst as an opportunity for A/B testing.
  5. I set up a grid and strictly spaced the UI on a typographic scale of Perfect Fourths to establish visual consistency.


I originally used Alla Kholmatova's scientific sizing nomenclature (pico, nano, etc.) from her book "Design Systems" before switching to t-shirt sizes based on a suggestion from my UX mentor. He reasoned t-shirt sizes would be easier for the analysts to digest and remember.
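
For concreteness, here is a minimal sketch of how a Perfect Fourth scale maps onto t-shirt names (the 16px base and token list are assumptions; the portal's actual values aren't recorded in this write-up):

```python
# Perfect Fourth ratio (4:3); "md" is anchored at the assumed 16px base.
RATIO = 4 / 3
BASE = 16  # px, assumed for illustration

SIZES = ["xs", "sm", "md", "lg", "xl"]

scale = {name: round(BASE * RATIO ** (i - SIZES.index("md")), 2)
         for i, name in enumerate(SIZES)}

print(scale)  # {'xs': 9.0, 'sm': 12.0, 'md': 16.0, 'lg': 21.33, 'xl': 28.44}
```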

5. Parting Gifts and Conclusion

Although my time on the project had come to an end, I wanted to keep the team on track for human-centered success. Before officially leaving, I arranged some parting gifts.

I passed on my research documents and provided a user interview crash course to the PM analyst. This included a copy of Steve Portigal’s “Interviewing Users” and a practice interview with one of the internal team members. He did a fantastic job on his first try and was able to identify many of his own mistakes without my immediate feedback. He joked at the end that he finally had developed empathy for me as a researcher from the stress of the practice interview.

I also handed off a basic style guide to the development team. It came out of an atomic UI inventory of the portal, which I guided the team through in order to catch and correct UI deltas. I did this with the intention of saving the team time by preventing rework of existing UI elements, and of serving as a launchpad for the next designer to begin work on a design system.

Finally, I wrote parting notes to each of the team’s members along with recommended readings. I asked the manager to purchase a copy of Adam Silver’s “Form Design Patterns” for the team, and I have begun mentoring one of the analysts in the ways of UX (she’s already reading my copy of “How to Make Sense of Any Mess” as I write this).




I made full use of the library I curate at work as an educational space. Here, I was able to introduce members of the team to design and provide them recommended reads to check out on the spot.



5.1 Some nice emails I received from the team

“Thank you for your hard work and insight! It was a great experience working and learning with you, your research will be invaluable in our redesign effort.” - Analyst

“We really appreciate all the help and dedication you brought to our team! Especially enabling us to be more self-sufficient in designing for the user in the future.” - PM Analyst

"Thanks for [this]. I really want to get my hands on design." - Analyst

"Wow, this looks great! Exciting stuff. Please get started as soon as possible." - The client upon reviewing final suggestions


