New improved GCP

I had some free time this morning and thought I'd catch up on a bit of reading. I came across this nice summary of ICH E6 R2. There's nothing radically new in this addendum to GCP, but it makes some positive statements about focusing on risk rather than blanket data management.

The approach taken by some trial teams seems to be to check *everything* at the expense of focusing on what's important. Perhaps it's easier to think like this, but it's wasteful and can lead to the real issues being overlooked. A genuinely risk-based approach means thinking deeply about the underlying data, what really matters and what can be done about it. It takes confidence to reduce, or even ignore, effort on other less important issues - there's always the fear of getting it wrong. Mitigating these concerns isn't so hard. The key, as with all risk-based approaches, is to demonstrate that you have considered the issues in detail, involved the people with relevant expertise, and documented the thought process and its output. Remember too that risk assessments shouldn't be seen as a one-off task but as something routinely updated as part of an ongoing, proactive culture - see the sketch below.
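To make that concrete, here's a minimal sketch of what a living risk register might look like if you kept it in code. Everything in it - the `Risk` fields, the likelihood × impact scoring, the 90-day review window - is an illustrative assumption of mine, not anything the addendum prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    """One entry in a trial's risk register (illustrative fields only)."""
    description: str
    likelihood: int    # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int        # 1 (negligible) to 5 (critical) -- assumed scale
    mitigation: str
    owner: str         # the person with relevant expertise
    last_reviewed: date

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring: one common convention,
        # not something mandated by ICH E6 R2.
        return self.likelihood * self.impact

def due_for_review(register: list[Risk], today: date,
                   max_age_days: int = 90) -> list[Risk]:
    """Risk assessments shouldn't be one-off: flag stale entries."""
    return [r for r in register if (today - r.last_reviewed).days > max_age_days]
```

The point of the `due_for_review` helper is the cultural one from the paragraph above: the register is something you revisit routinely, not a document you file after study start-up.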

The addendum also reinforces the importance of demonstrating system validation. This has been with us for some time but is often misunderstood (I like to use the Ronseal analogy here, for those of us who remember 'it does exactly what it says on the tin'). You can read up on the latest incarnation of GAMP and its famous ‘v-model’, but in a nutshell, you need to demonstrate you have a system that works in a clearly defined way, and does so reliably. The problem is how to define system scope. If your EDC system runs on a Windows box, uses a SQL Server backend and users access it via a web browser, *any* change to *any* of these components needs to be considered a risk to the validated state (and yes, that means browser versions!). Coming up with a pragmatic way to define and manage scope, and the myriad of possible changes, is not trivial, but it can be done. I think the message of the addendum is helpful: think about the real and relative risks, identify and manage them – demonstrate critical thinking.
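One pragmatic pattern is to record the validated configuration as an explicit baseline and diff the live environment against it whenever anything changes. A minimal sketch, assuming a simple component/version manifest - all the names and versions below are made up for illustration:

```python
# Hypothetical baseline captured at the time of validation. Drift in
# *any* component -- OS patch level, database, even browser version --
# flags a potential risk to the validated state.
VALIDATED_BASELINE = {
    "os": "Windows Server 2019 build 17763.3650",
    "database": "SQL Server 2019 CU18",
    "browser": "Chrome 118",
    "edc_app": "2.4.1",
}

def check_validated_state(current: dict[str, str]) -> list[str]:
    """Return the components that have drifted from the validated baseline."""
    drifted = []
    for component, expected in VALIDATED_BASELINE.items():
        actual = current.get(component)
        if actual != expected:
            drifted.append(f"{component}: expected {expected!r}, found {actual!r}")
    return drifted
```

Each reported drift then feeds the risk assessment: does it warrant revalidation, or can you document why the change is demonstrably low-risk?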

The phrase 'data integrity' also pops up in the addendum and is something I've been hearing from inspectors a lot lately. Data integrity is closely related to system validation and boils down to trust. Once operating in a validated state, a system should be trusted to process data in a predictable way. But as we know, systems and data are subject to regular change. You need to demonstrate that data entering at one ‘end’ exits the other in the way it is expected to – can you prove that your analysis data extract application does exactly what it says on the tin (the tin here being your system specification) and continues to do so after you've applied that Windows patch?
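One simple way to demonstrate this is a regression check: run the extract over a frozen reference dataset during validation, record a checksum of the output, and re-run the comparison after every change. A minimal sketch - the function names are mine, not from any particular tool:

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Checksum a data extract so its output can be compared run to run."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_extract(extract_output: Path, expected_sha256: str) -> bool:
    """Re-run after any change (e.g. that Windows patch) to show the
    extract still 'does what it says on the tin'."""
    return file_sha256(extract_output) == expected_sha256
```

It's crude (any legitimate change to the reference output means re-baselining), but it gives you documented, repeatable evidence of data exiting the system as specified.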

Summarising then, R2 puts the onus on us to spend our resources wisely and focus on mitigating the real risks. This is no bad thing, but it does demand a more intelligent and dynamic approach.