What does the computer software assurance approach mean for regulated life sciences companies?
Learn from the U.S. Food and Drug Administration's (FDA's) Francisco Vicenty about the upcoming guidance and get clarity on what the computer software assurance (CSA) approach means for regulated life sciences companies. The FDA explains the intentions and principles behind the guidance and why this simplified, risk-based approach to computer system validation (CSV) can be applied to IT systems today.
The discussion included:
- Why the FDA is moving from “validation” to “assurance”
- What Computer Software Assurance means to you
- Examples of risk evaluation and acceptable records
- Live Q&A
About the Presenters
Francisco Vicenty, Case for Quality program manager, the U.S. Food and Drug Administration
Cisco Vicenty is currently the program manager for the Case for Quality (CfQ) within the Office of Product Evaluation and Quality, Center for Devices and Radiological Health (CDRH), FDA. The Case for Quality has been a strategic priority to improve device quality, access, and outcomes for patients. Cisco has worked as a manager within compliance at CDRH and previously worked in quality and reliability within the semiconductor industry.
Sandy Hedberg, Cloud Assurance QA/RA manager, USDM Life Sciences
Sandy Hedberg has over 20 years of experience in Quality and Regulatory Affairs in the medical device, pharmaceutical, and biologics industries. She has participated in assisting companies with responses to consent decrees and audit findings to the FDA. She is well versed in risk analysis, creation of quality procedures, computer system validation, auditing, and authoring regulatory submissions.
The questions and answers from this webinar are documented for you here: Q&A: CSV, CSA, and Why the Paradigm Shift.
Additional CSA References:
In this webinar, Updates from the FDA on CSV Changes, we focused on why the FDA is moving from a validation to an assurance approach. This transcript has been edited and condensed for clarity.
Introduction and presenters
CSA: What, why, and how
Project background and pilot program
Case studies and analysis
Approaches to critical thinking
Indirect system expectations
Summary and real results
Diane Gleinser: Today, we'll be talking about why the FDA is moving from validation to assurance and what that means. What does computer software assurance (CSA) mean to you and your company? We'll be giving examples of risk evaluation and acceptable records under this new focus.
Francisco Vicenty is the program manager for the Case for Quality at the FDA’s Center for Devices and Radiological Health (CDRH).
Sandy Hedberg is our Cloud Assurance quality assurance/regulatory affairs manager at USDM Life Sciences.
CSA - The WHAT
Francisco Vicenty: The whole purpose of moving to CSA versus the computer system validation (CSV) approach was to reset the paradigm of expectations. We have been engaged in this with our Case for Quality effort in order to enable the adoption of these technologies. We started from the regulated medical device industry, in the context of CDRH. Lots of these principles are directly applicable now. There is nothing in regulations that says you cannot do this right now.
CSA applies across the regulated industry. We're drafting guidance with some of our counterparts to make sure there is alignment in the approaches and the methodologies. This is also going to be tailored to the non-product quality system software space that is not in the device. It is not software as a device, it is everything else that's used in the production of it.
CSA - The WHY
Why are we doing this? In our early Case for Quality efforts, we were focused on getting better information and better data. As we engaged with our stakeholders, we learned that there wasn't a lot of investment or focus in the technology and the systems to drive that kind of information analytics. When we tried to figure out why, CSV came up as the big-ticket item: companies were investing significant time and money to demonstrate that a system worked and to determine how much testing they should be doing. The FDA supports the implementation of automation.
This shift in focus is much better for the patient. We want to enable a better outcome, better access, better information, and better control. We are big believers in the use of automation in IT. We've already seen significant benefits as compared to doing things manually. In terms of what we could do from an agency standpoint, how we approach oversight, and our approach to the reviews we do, that information is key.
CSA - The HOW
How are we trying to do this? One of the very big-ticket items that we're trying to work through is answering questions about policies that are causing a lot of confusion. Something that is key here is thinking through and applying a risk-based approach, which is entirely acceptable. We started by defining direct versus indirect impact and clarifying acceptable approaches to assurance testing activities and where to focus time and energy. We are being more deliberate about where to expend our resources, which are limited as we move forward.
The Paradigm Shift
Diane Gleinser: While we're all thinking about this as a paradigm shift, understand that this approach is acceptable to the FDA today. This guidance document is going to help us shift our mindset from a CSV approach to a CSA approach. In doing that, we focus more on critical thinking from a risk-based approach, then on testing activities, then on documentation, in that order. For risk determination, focus on: Is the software going to impact the patient's safety? Is it going to impact product quality? How does it impact quality system integrity? While not specifically called out in the guidance, you'll want to document why your risk justification is what it is. That doesn't mean you have to be long-winded in your answers; simply justify those determinations. The hope is that we'll spend less time documenting and more time testing the software applications, focusing on the more critical aspects of the systems rather than producing documentation that isn't getting us where we want to be. We want to be confident that the level of testing we've done is accurate and gives us more bang for our buck than just spending time documenting. Cisco has some examples and case studies for you.
Francisco Vicenty: As we talked about what was going to make sense, we discovered that there wasn't a lot of clarity around these things. As we talked about making these principles clearer, we wanted to test that out. We had companies approaching us with some of these questions, so we asked if it was clear enough. That model has helped us refine the construct and the concepts to articulate what we already intended. Some of these companies, Boston Scientific in particular, showcased real time and energy savings: implementing these technologies more efficiently added value to the organization because the data is much more usable.
Something we've been adamant about is piloting this stuff to find out if we are missing anything or if we are introducing additional risk. We have not found a situation where that's the case. Oftentimes we see cases where errors were found that wouldn't have been caught using traditional approaches.
Pilot: Johnson & Johnson
Here are some numbers from a Johnson & Johnson pilot. We are looking at significant time reductions to execute these changes. The key to this isn't in terms of less work; on the contrary, we want you to test the heck out of your system, but there are better ways to do that testing more effectively. The information and what you're doing should be of value to your organization. This isn't about generating work or evidence for an auditor.
Non-Product CSV Success Story: Value of FDA Collaboration
This was one of our initial piloters. This was great because we were really trying to get the message out on these concepts. We got a lot of pushback! I went to a lot of conferences and presentations saying that this is okay and people telling me no, I'm pretty sure that's not what FDA wants (no matter that I was there presenting on FDA's behalf!). We got lucky with a company where an individual was in a new role and found that they had a whole slew of data they needed to improve upon. They latched onto the concepts because they had no other option but to try. The results they demonstrated were significant and I think that was a turning point for generating a lot more interest, engagement, and attention within that space.
Cultural Barriers Paralyzing the Industry
A bigger problem is the perception that this approach is somehow outside the bounds of regulation; it is not. I think that's probably the biggest struggle for most of the companies that have been applying and implementing this. These are examples of what we've been hearing in feedback. It's often not the FDA saying you cannot do something; it's oftentimes internal resistance that is holding up the shift to this type of approach and mindset, perhaps based on fear of what might happen during an investigation, or even during an international regulatory audit.
We are getting our investigators up to speed on these principles and concepts. In the meantime, the CDRH established a Digital Health Center of Excellence and communication paths so people can reach out and ask questions. If there is a discussion or debate, there is a mechanism there to help resolve the issue while an audit is going on.
How would you rate the difficulty of getting your company culture to adopt this paradigm shift?
- Very difficult: 10.6%
- Difficult: 39.7%
- Neutral: 32.2%
- Easy: 12.8%
- Very easy: 4.7%
Host: Sandy and Cisco, where did you think these results would go?
Sandy Hedberg: I think a lot of people are going to say difficult, but in my opinion, it's going to depend upon where you are in the hierarchy of your company. Some people are seeing ways to cut costs and get things to market quicker and get automation quicker in their systems and in their areas. And the collaborators are going to want to say that this is going to be easy, we can just do this. Again, this is really no different than the risk-based approach that FDA has been saying all along. It's just that we have never really applied that very well in the past.
Francisco Vicenty: I agree with you on that. It depends on the level of the organization it's being driven from. I can see why it feels very difficult from high up in the organization when there is a competing quality/regulatory counterpart, when there's that degree of separation between the business processes. That wouldn't surprise me because we've seen it. We've seen it happen within organizations that have been very proactive in their engagement and very forward leaning. They will always hit some kind of wall, and that's where you can see whether the concern is the compliance piece or the quality piece. The numbers for quality and value are there to support the shift; it's the compliance fear.
Host: It looks like about 40% of you view this as difficult; 30% of you neutral. Any take on that, Cisco and Sandy?
Francisco Vicenty: That's actually a surprise for me. If there are people out there who view this from a neutral standpoint, the shift would just be part of their normal process activities. I am very curious about that. Please feel free to share your experience with me; I'd like to be able to share more of your viewpoints.
Change the Culture
Diane Gleinser: So what do we need to do? Step 1 is Change the Culture. We need to change the culture in our companies in order to move toward this better way of doing CSA activities. Generally, there are four steps to the success of those changes. In our experience, changing the culture is one of the key components. Some examples of how you could do that are on the screen as we speak.
Step 2 is Plan and Pilot the Program. Failure to plan can drastically affect the outcome of your change process. Initiate a pilot project to establish a process instead of changing everything at once.
Step 3 is Develop a Methodology and Transition Plan. Once you've successfully proven your concept, transition that process into the procedures and processes you develop in-house so that you can incorporate it into the rest of your everyday processes.
Step 4 is Achieve the Results. You want to evaluate your results. Hopefully you can achieve better testing with less paper and safer applications; better testing, safer products, better for FDA, better for you. Everybody wins.
More FDA Case Studies
Francisco Vicenty: I want to emphasize something that you just mentioned to really bring the point home. At the end of the day, this is all about the results and the result isn't more paper. The result is better quality systems, a better quality product, and a better outcome for the patient down the road.
These four case studies—ICU Medical, Verical, ConvaTec, and a biopharma company—show that this is applicable across the spectrum. A number of these companies are not in the medical device space. We've tried some of these principles to engage them, and the results have been significant: fewer errors than they would normally get and fewer deviations written within their processes.
Being able to demonstrate that you're getting a better result and fewer of these difficult compliance errors slipping by is something everybody in the pilot has been experiencing so far. We got great numbers from one of the companies engaging in this that shared its information. They bought into the concept and ran with it, and this is a company within the pharma space. You can see the results they've demonstrated here. A key takeaway is that they were able to reduce defects and issues throughout their validation process and find more issues upstream, which is where everybody wants to be: the bulk of the problems were about the initial implementation of the system and software, not what was tested downstream afterward.
What level of testing are you doing today?
- I don't know: 6.3%
- We test everything: 20.5%
- More selective, but still test more: 63.1%
- Optimized methodology, more of a CSA approach: 10.1%
Host: Cisco and Sandy, is this in line with your thinking?
Sandy Hedberg: Yeah, I think that's a lot of what I've seen.
Francisco Vicenty: Sandy, you've had a bit more experience and exposure to outside the medical device space, but that's a little bit more optimistic than what I've seen from some of our traditional participating companies. They stick to the “we test everything” approach. I think they're starting to transition. Like I mentioned before, because the principles apply across the board, I've actually seen more of that type of thinking applied in the pharma community.
Approaches to Critical Thinking
Sandy Hedberg: Your approach to critical thinking is going to be important. In our experience, there are many ways to approach that. You want to stick to clear, accurate, and relevant facts in order to evaluate your system and your processes. Here is an example of a risk analysis that uses critical thinking. Out-of-the-box systems would require less testing and documentation than custom systems, even when those out-of-the-box systems may have a high risk.
The key is to leverage your supplier’s qualifications; the better your supplier, the more you can leverage their activities, and you can concentrate your testing on the appropriate levels of your risk-based thinking and how you're going to use that application in your environment. That risk level is going to drive your documentation requirements.
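The risk logic described above can be sketched as a simple decision function. This is an illustrative sketch only; the function name, inputs, and returned activity labels are hypothetical shorthand, not terms defined in the guidance.

```python
def assurance_activity(patient_product_impact: bool, custom_built: bool) -> str:
    """Suggest an assurance approach from two risk drivers (hypothetical model).

    patient_product_impact -- True if failure could directly affect patient
                              safety or product quality (a "direct" system).
    custom_built           -- True for custom systems; False for out-of-the-box
                              software from a well-qualified supplier.
    """
    if not patient_product_impact and not custom_built:
        # Indirect, out-of-the-box: supplier qualification may suffice.
        return "supplier qualification"
    if not patient_product_impact:
        # Indirect but configured or custom: unscripted (ad hoc) testing.
        return "supplier qualification + ad hoc testing"
    if not custom_built:
        # Direct impact, out-of-the-box: focus scripted tests on critical functions.
        return "risk-based scripted testing of critical functions"
    # Direct impact and custom-built: the most rigorous assurance effort.
    return "robust scripted testing with full documentation"
```

The point of the sketch is the cascade: supplier leverage first, then testing effort scaled to risk, with documentation following from the chosen activity.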
Acceptable Records of Results
Francisco Vicenty: I cannot stress this enough: don't reinvent the wheel. If you are using something out-of-the-box from an established supplier, leverage what they've done. That's a great starting point, but when you go past that, you're looking at your implementation and demonstrating that it's effective for your intended use. There are lots of different types of testing available to you other than the traditional expectation of scripted testing; all of these methodologies apply. We are trying to articulate what the documentation evidence, or the record, should be.
You want this to be something that is of value to the organization. It should contain the usable pieces of information that let you get back to understanding what was done and what went wrong within the product should an issue come up. The record is not about pulling something together that's of value to the auditor; it's about what is of value to the organization. Are digital records acceptable? Yes. What I want to emphasize clearly is: let's move away from screenshots. They are a huge consumer of time and resources, they don't necessarily add significant value, and they're not reusable down the road. We've tried to state this in earlier guidances and activities, and it still isn't catching on within the industry.
Lower Risk Example
One thing we learned through these discussions and engagements is that examples are good. We don't want to be too prescriptive because people need to incorporate what's relevant to their organization into their strategy and processes. We are allowing the flexibility to say, “Here's what's acceptable.” Because this applies to different types of technology and applications, we need a good number of examples for different scenarios, but that's something we can work through once we get the policy principles out. This is just something to highlight what we're thinking about in terms of acceptable records and the intended use. What was the risk assessment? What was tested? How did you test it? What did you try to accomplish? When? Who? What activities were observed? What was the end result?
This is an acceptable record for demonstrating that you are satisfied with the outcome of that assurance activity, because that is key here. It's not, “Is FDA satisfied?” These things have a function, these things have a purpose, and they're intended to achieve a goal within the organization. Are you satisfied that it's achieving that goal? I really liked using this example because it was developed by one of our internal members who was helping with the development and review of the guidance pieces.
Select all the testing approaches that you're currently utilizing.
- Unscripted – ad hoc: 26.3%
- Unscripted – error guessing: 15.2%
- Unscripted – exploratory: 22.4%
- Scripted – limited: 65.7%
- Scripted – robust: 69.1%
Host: Is this what you were thinking, Cisco and Sandy?
Francisco Vicenty: Yes. I'm actually surprised that there's a split between limited and robust scripted testing, but that's about right. It was amazing to see how quickly everybody moved into those categories on this poll.
Indirect System Expectations
Sandy Hedberg: Quality management system (QMS) software that does not directly impact the patient or the product—like document control systems, CAPA systems, and complaint handling systems—might only require a supplier qualification if it's used out-of-the-box. Even if you need to configure the system, ad hoc testing would be all that's expected. This means no test plans, no formal requirements, no traceability matrix, no configuration specification. The expected documentation would include a summary description of the features and functions tested, any identified issues and their disposition, a conclusion statement, and a record of who performed the testing along with the dates of testing.
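The expected documentation listed above maps naturally onto a small structured record. A minimal sketch in Python, with all field names illustrative rather than drawn from the guidance:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AdHocTestRecord:
    """Hypothetical summary record for ad hoc testing of an indirect system."""
    system: str                 # e.g. a document control or CAPA system
    features_tested: List[str]  # summary of features/functions exercised
    issues: List[str]           # identified issues and their disposition
    conclusion: str             # conclusion statement
    tester: str                 # who performed the testing
    test_dates: List[str]       # dates of testing
```

Capturing these few fields covers the summary description, issue disposition, conclusion, tester, and dates that the expected documentation calls for, without a test plan or traceability matrix.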
Indirect System – CSV Tools
We're not just talking about QMS software; there are indirect tools that might only require a supplier qualification or ad hoc testing, such as bug tracking systems, load testing tools, automated GUI testing tools, and lifecycle management tools. These types of systems do not directly impact product or patient safety. Cisco, you might have some more context to add clarity to this.
Francisco Vicenty: Absolutely. There are a lot of tools that are current state of practice; they are what you use to make software development successful. These tools make things easier to manage. They are the tools in the background, and somehow they get lumped into the same scope that the regulation covers from a quality system or process standpoint. That's not the intent. These tools do need to be qualified; you need to make sure they are going to meet your need, but that's not the same impact to patient or product safety that we're looking to address with additional testing or assurance activities.
In fact, some of these tools are probably essential and make that effort much easier. We want you using them. Don't spend a lot of time thinking through the validation strategy for these types of tools. One of the key things to always go back to is the intended use. If it fails, is that intended use going to impact, in any direct way, shape, or form, the ability to deliver a safe product or safe experience for the patient?
The riskier a computer application, the more documentation and testing required
Sandy Hedberg: While indirect systems provide the most savings in time and dollars under the expected guidance, the riskier the computer application, the more documentation is required on a sliding scale, and unscripted testing such as error guessing and exploratory testing could be required. But remember that not all functions are created equal, so some functions may require less effort than others. In those cases, a hybrid approach is perfectly acceptable and should be looked at that way so that your testing activities concentrate on the riskier portions of your application.
Direct systems require that you do some testing based upon your risk. Some direct systems include electronic device history records and adverse event (MDR) reporting systems. Expected deliverables are similar to today's expectations. And again, a hybrid approach for documentation is perfectly acceptable.
Francisco Vicenty: Given all the things that we want to consider, going back to documentation and the value of it. You want to make sure that your time, energy, and focus are on the best outcome for testing the features the right way. In that sense, you might do some unscripted testing. Some pieces may require you to do more to demonstrate the repeatability of the function or feature, and that's where you want to make sure that there's a balance, but always err on the side of suitable and appropriate documentation.
What Makes a Good Supplier Qualification?
Sandy Hedberg: In order to approach this assurance activity in a manner that's consistent with the guidance, you need to have a good supplier qualification. You have to know your vendors. If you have a vendor that's been in the marketplace for only a couple of years, they might have an immature SDLC process or CMMI level. They might require a higher validation effort than a vendor with a long track record with a mature, transparent SDLC process. Selecting good vendors becomes much more important from a cost perspective when doing the CSA approach.
Francisco Vicenty: The beautiful thing about this summary slide is how it shows the cascade of what we want to focus on: the effort, the thinking, and the approach over the documentation and the record.
Identify the Intended Use
Everything starts from your intended use. What is that function or feature? What is the system supposed to do? Where does it directly impact a patient or a product safety issue? I cannot stress that enough. Oftentimes we see a system used in a process get the same risk classification as the process itself. If there were no system there, the process would still deliver the same product risk. The system is not adding or changing that product risk; it is changing how that process step's activity is performed. That's the risk people need to be focused on.
Determine Risk-Based Approach
Determine the risk-based approach that would be most appropriate. How do you want to evaluate that?
Methods and Activities
I can't imagine people going out there and not doing some kind of due diligence when they're going to purchase some of these systems. They are expensive systems. That is good work, take credit for it.
What's the appropriate record? The least burdensome approach applies. What is relevant and of value for the organization?
Host: Let's end with some of the real results that Cisco has seen from this pilot program.
- Improved quality and efficiency
- More than 50% reduction in validation cost and time
- Up to 90% decrease in test script issues
- Significant testing overhead reduction
- Utilize prior vendor assurance activities
- Maximized use of CSV and resource expertise
- Capability to deliver value faster
Host: Cisco and Sandy, moving forward and taking on this approach, what are your thoughts?
Francisco Vicenty: These are some great results and people have been very open and transparent with what they've been able to achieve with the approaches and methodologies once implemented, which has been very helpful for the FDA’s internal efforts to move this along. What doesn’t get captured here is that the IT organization is viewed as a hero in helping to implement and drive these things. There was positive engagement that shifted a bit of the culture within the organization. That's something to keep in mind.
The big takeaway is that you can do this now. This is applicable tomorrow. There's nothing that prevents you from doing it. Focus on capturing the right results so that you are getting the maximum value. If you can demonstrate that piece, there really isn't a concern or fear with the auditors. We had one of our participants go through their ISO 13485 audit and one of the best feedback points they got was their approach and methodology toward the computer system validation activities because they were able to articulate the thought process that went into their risk determination.
Sandy Hedberg: It's important for people to realize that the focus is on testing, not documentation. More testing, less documentation, safer applications.
Q. When will final guidance be made official?
A. (Francisco Vicenty) This is on our A-list of guidances for publishing in draft form by September 2020. That's the timeline. There are a lot of external limitations on how quickly we can accelerate it, but the goal is for at least the draft to be out, plus whatever time it takes to address comments. That's the best I can commit to. It is moving; it's got enough attention and traction now, and the commitment behind the agency to keep it moving forward, but there is a lot that is outside of the agency's control.
Q. Do you have tips on where companies should be starting with this?
A. (Sandy Hedberg) People think this is a big paradigm shift when, in reality, it isn't. It's important to establish a pilot study where you're doing a smaller project to identify the types of processes and procedures that you want to use for this, how to establish your risk analysis and risk assessment, and what your documentation is going to look like. Once you've done the pilot study, do a debrief to understand what went right and what went wrong. Then, update your policies and procedures as appropriate. Conversely, if you find that you're still not getting to where you want to be, do a second pilot if you have to.
Q. What is the difference between the current approach, which is more of a risk-based validation, and this new CSA approach?
A. (Francisco Vicenty) There really isn't a significant difference between them. We encountered so much resistance to the terminology, the construct, and the concept; there was too much connotation already associated with verification and validation. The phases and activities are intended to provide confidence that the system is delivering. That's really what it's all about. This is not new; the industry has acknowledged it and provided some guidances. What we're trying to do is be more overt with the clarity and explicitly say, “Move in this direction.”
Q. Are there plans to align with other international regulatory bodies regarding this approach?
A. (Francisco Vicenty) One of the reasons this fits so well, especially with our internal shift to an ISO 13485 approach and methodology, is that external regulators are more explicit about allowing and integrating risk-based thinking throughout their validation activities. Integration is much easier with the outside regulator space, and we are actively engaged with our regulator community.