As human factors engineers (HFEs), an important aspect of our job is being the voice of the user. For those who are HFEs, or for anyone who has worked with us before, what I am about to say next may seem odd, maybe even a little rash: I don't think we are doing enough for the user.
Let me explain what I mean before you call me a fanatic and discount my opinion.
Human factors, by FDA standards, is about safety and efficacy. The opening sentence of the agency's 2016 guidance document, Applying Human Factors and Usability Engineering to Medical Devices, lays the foundation for what the FDA believes the value of human factors to be. The agency states that it developed the document to (bolding is my own):
“assist industry in following appropriate human factors and usability engineering processes to maximize the likelihood that new medical devices will be **safe and effective** for the intended users, uses and use environments.”
The truth is that we do, and care about, so much more than that. We care about the user experience, enjoyment during use, and users' overall feelings about the device. We take into account market differentiators, commercial viability, and paths to adoption. Our job is not as well defined as many would like it to be, and we play different roles depending on the project team and the specific device. But the range of potential roles is much larger than just ensuring safety and efficacy.
Why is there a misunderstanding of roles?
Part of this difference lies in the hugely problematic issue of inconsistent titles within the fields of human factors and user experience. The image below shows a wide spectrum of aspects, jobs, and considerations that fall within what the image calls user experience (UX). The definition of human factors as set by the FDA probably falls within the usability engineering, interface, and information design sections. In my opinion, however, HFEs have a role in, or at least take into consideration, a large majority of this spectrum.
The FDA, as a government agency, takes a narrow view of what is deemed important. It cares primarily about safety and efficacy because those two aspects prevent harm and are therefore the most pressing. If the device does what it is supposed to do in a safe way then, according to the FDA, it doesn't matter whether it is enjoyable to use or elegantly designed; in the agency's eyes, these are supplemental and unnecessary features. Similarly, the FDA has no concern for market success or the business case. Whether the product looks like a bestseller with millions of units produced each year or has no real market viability, as long as the device is safe and effective it will be approved.
Medical devices that do not meet these criteria cannot get FDA approval and are not allowed on the market. The box that the FDA puts human factors into seems to dictate where industry believes our time and effort should be spent, and it explains why so much of our effort goes into supporting the FDA's definition of human factors.
Why is this problematic?
Unfortunately, I think as a field we are guilty of the classic, and woefully overused, mistake of losing sight of the forest for the trees. Through the tunnel vision of safety and efficacy, we are losing the larger and equally important aspects of user adoption, enjoyment, and overall satisfaction.
This seems especially important toward the middle of development. After the early front-end stages of observation, market analysis, and user needs research, where multiple potential ideas are being considered and the focus is on subjective feedback, there comes a turning point where the design converges on a single idea and the focus turns to safety and efficacy. It is at this point that we often conduct formative user studies using a simulated-use study design. During simulated use, we attempt to create realistic, task-based scenarios and evaluate how well users are able to complete those tasks.
One of the primary goals of these studies, as you can probably imagine at this point, is to assess the safety and efficacy of the device. While we do ask questions to get subjective feedback about the overall UX, such as “how easy or difficult was the device to use?” and “how enjoyable or not enjoyable was the device to use?”, we tend to use that data as a sanity check to make sure end users seem satisfied. We do not generally analyze, track, and compare that data across iterations of the design to see how design modifications change the overall user experience. This stands in contrast to safety and efficacy data, which are front and center throughout the design process (e.g., design modifications X and Y increased correct performance on safety-critical step Z by 30%).
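Tracking that subjective data across formative rounds need not be elaborate. As a minimal sketch of the idea (the study names and Likert ratings below are entirely hypothetical, and a real analysis would account for sample size and study design), mean ease-of-use ratings could simply be logged per formative round and compared over time:

```python
from statistics import mean

# Hypothetical 7-point Likert "ease of use" ratings collected at the
# end of three formative rounds; names and values are illustrative only.
formative_rounds = {
    "formative_1": [3, 4, 4, 5, 3, 4],
    "formative_2": [4, 5, 5, 4, 5, 5],
    "formative_3": [6, 5, 6, 6, 5, 6],
}

def ux_trend(rounds):
    """Return the mean rating per round, preserving study order."""
    return {name: round(mean(scores), 2) for name, scores in rounds.items()}

trend = ux_trend(formative_rounds)
for name, score in trend.items():
    print(f"{name}: mean ease-of-use = {score}")
```

Even a simple trend like this, carried alongside the safety and efficacy data, would let a team say whether design modifications between rounds improved the subjective experience rather than merely noting that users "seemed satisfied."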
We are not utilizing valuable and accessible information to improve the product, and so I want to make the case for quantifying the user experience. I believe we need to be more in tune with users' overall feedback, and with whether the design changes being made are creating a better overall user experience, not just improving efficacy and safety. The reasoning is simple: a user can know how to use a device, and the device can be safe even in cases of misuse, but that does not mean it is enjoyable to use, increases compliance, or has commercial viability.
Safety and efficacy are boxes that we check along the way, but they are a far cry from being the be-all and end-all. While safety and efficacy are necessary, they are not sufficient. We care about users' perceptions, ease of use, market viability, and post-market success. We care about how the device fits within the user's lifestyle and how the design can positively influence a person's perception of, and adherence to, the product.
Sometimes, in the effort required to demonstrate that our product meets FDA standards, we minimize the effort spent capturing, tracking, and improving upon the user experience. I want us to avoid letting the narrow definition set forth by the FDA bias us away from concentrating on the whole picture.
Where does this leave us?
I am not arguing that a risk-based approach to human factors for medical devices is wrong, because I believe it is necessary. However, I am arguing that along with a risk-based approach, we should take extra time to consider the user and determine whether the changes we make are improving the overall experience. We are already thinking about these things, but we need to do a better job of objectively tracking them during development.
Unfortunately, the methodical documentation and improvement of the user experience often takes a backseat to safety. We need to start wearing two hats more often: one focused on safety and one focused on the user experience. As much as some HFEs will argue that the two are the same, or assert, like the FDA, that safety and efficacy are all that matter, I strongly disagree. In a well-designed product, optimizing both can converge on a single design solution; however, I do not think they are always synonymous.
I realize it is not as simple as it sounds. Often we test aspects of the device in isolation, and each formative test can fall at a very different point in development. There are significant risks and challenges in assessing the UX of isolated components and comparing them across studies or generalizing them to the whole device. Additionally, spending more time and effort to gather, track, and analyze this data increases a project's resourcing needs.
However, I think there are ways to work within these constraints, and I strongly believe the benefits will improve the overall product, leading to better adoption, user enjoyment, and market performance. Quantifying the user experience has the potential to refocus medical human factors on what it should be, and it will allow us to recommit ourselves to truly being the voice of the user, not just the voice of the FDA.