Twenty-five years have passed since the Good Automated Manufacturing Practice (GAMP®) forum was founded to address evolving regulatory expectations for good manufacturing practice (GMP) compliance of manufacturing and related systems. GAMP published its first guidance in 1994.
Over the years, GAMP® has become the de facto approach to meeting these expectations, evolving to leverage a risk-based approach to the nuances of computer systems qualification and validation. It has been nine years since the last version of the guide, GAMP® 5, was published.
With the avalanche of companies moving their infrastructure and systems to the cloud, what does that mean for this approach? How do you keep up with changes to your compliant systems that occur continually, often without the option of declining them?
Don’t get me wrong: risk management and validation still go hand in hand. It is still critical to focus the right level of attention on the critical elements of your system, and if the number of changes per release is at a manageable level and can be assessed on a case-by-case basis (as is the case with some cloud systems), then the approach remains applicable.
But what about systems whose changes are not limited to an established cadence, the “published release” approach? More and more cloud vendors are realizing the power and advantages of making continual changes to their platforms. In these cases, where hundreds of small changes can occur on a weekly basis, the traditional risk-based approach is no longer viable. It simply is not possible to assess that many changes and produce an accurate mitigation. In fact, attempting to do so becomes riskier: manual assessments depend heavily on an individual’s interpretation, and there is not enough time to produce a fully reviewed, quality mitigation strategy for the inherent risks.
A new “hybrid” approach is needed, one that was not possible a few years ago. Leveraging advances in technologies such as automated testing tools, it is possible to designate specific elements of a system’s architecture to focus upon (using a risk-based determination process) and perform continual or ad hoc testing of those elements to ensure that data integrity is not compromised. The outputs from these tests can be tailored to produce bespoke, real-time dashboards of platform operation and performance, showing the compliant state at any given time.
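To make this concrete, here is a minimal sketch of what such continual checking might look like. The check names and functions are hypothetical stand-ins: in practice each check would query the cloud vendor’s API or configuration, and the snapshot would feed a dashboard rather than a print statement.

```python
from datetime import datetime, timezone

# Hypothetical checks for risk-designated elements of a cloud platform.
# Real implementations would call the vendor's API; these are stubs.
def check_audit_trail_enabled():
    return True  # e.g. confirm the platform's audit-trail setting is on

def check_user_roles_unchanged():
    return True  # e.g. compare current role definitions to a baseline

# Elements selected via a risk-based determination process.
DESIGNATED_CHECKS = {
    "audit_trail_enabled": check_audit_trail_enabled,
    "user_roles_unchanged": check_user_roles_unchanged,
}

def run_continual_checks(checks):
    """Run each designated check and return a dashboard-style snapshot."""
    results = {name: bool(fn()) for name, fn in checks.items()}
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "results": results,
        "compliant": all(results.values()),
    }

snapshot = run_continual_checks(DESIGNATED_CHECKS)
print(snapshot["compliant"])  # True while every designated check passes
```

Scheduled on a frequent cadence (or triggered ad hoc after vendor releases), the stream of snapshots is what the real-time compliance dashboard would render.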
It could be argued that this approach, if implemented correctly, is in fact much less risky than the traditional one. Because the elements to test are determined once and can then be tested repeatedly, ad infinitum, the chance that an unintentional adverse effect on cGxP functionality introduced by a cloud vendor slips past a manual review of release notes is drastically reduced.
New tools enable new approaches. The approach is still risk based, but it has evolved and become more thorough. In fact, the need to focus only on highly critical elements can even be removed: a one-time effort of test creation enables continual testing thereafter. You could, if you desired, decide to test everything, all the time.
And that is where this all began. In the early nineties, companies were more often than not testing everything, to the nth degree, and far too much money, time, and resources were being spent. So naturally, risk-based approaches made sense. It is now possible to test everything more accurately and more efficiently, if so desired, providing even more assurance of your cloud’s continued compliance.
USDM is already working with cloud vendors on these approaches. For more information contact email@example.com.