System validation training

System validation seems to be the bête noire of regulatory concerns for clinical trials IT professionals, so when I gave some training earlier in the year on this thorny topic I put a lot of thought into how to make it accessible and pragmatic. My client was a UK university-based clinical trials unit with a small IT team that wanted bringing up to speed on the principles of system validation and how it might work for their own specific environment.

Where do you start with system validation? Well, definitions are no bad place: what do we mean by 'system', how do we define its scope (not easy), and how do we interpret 'validation' in this context? A read-up on the principles of GAMP and its 'V-model' is helpful here - both because it's useful for understanding the various components of system validation, and because the MHRA like to know that you've heard of it. So if you have a locally-installed EDC system it would be comforting for an inspector to find an Installation Qualification (IQ) demonstrating that you've installed and tested it against vendor-supplied instructions.
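To make that concrete, below is a minimal sketch of how an IQ might be recorded as a checklist built from the vendor's instructions; the steps and field names are invented for illustration rather than taken from any particular vendor or template.

```python
# Illustrative IQ checklist sketch - the steps and fields are invented
# examples, not taken from any real vendor's installation instructions.
iq_steps = [
    {"step": "Confirm the server meets the vendor's minimum OS and database versions", "evidence": "", "done": False},
    {"step": "Install the application package following the vendor's instructions", "evidence": "", "done": False},
    {"step": "Record the installed version number", "evidence": "", "done": False},
    {"step": "Confirm the application starts and the login page is reachable", "evidence": "", "done": False},
]

# Print the checklist with its completion status, ready for sign-off.
for number, item in enumerate(iq_steps, start=1):
    status = "complete" if item["done"] else "outstanding"
    print(f"{number}. {item['step']} [{status}]")
```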

I'm less certain about expectations for the Operational Qualification (OQ), however. The OQ seeks to demonstrate that the system behaves as it was designed to from a functional perspective (the save button really does save). Proving that every individual bit of functionality works as per design can be a *huge* task though, often taking many days, even weeks, to complete. I think there is a real question here as to whether a user of a commercial system needs to perform a full OQ - isn't this partly what you are paying for? One might argue - as I did successfully during my time as IS lead at a large UK academic CTU - that it is enough to demonstrate validation of an individual study (effectively a focussed OQ of the parent system). This argument wouldn't wash if you had built your own system, however.

Patching a validated system is a tricky issue too. You cannot just install the software, note the updated version number and go home. But how much validation is needed? The vendor should supply release notes which contain the installation instructions - the IQ for the patch. If they aren't already in that form, these should be converted into a checklist and signed off once complete. This isn't the end of it though - the patch presumably contains changes to your system which might impact on behaviour, or even existing data. You need to demonstrate that the impact of these changes has been considered and any concerns explored and confirmed as OK. So, you might develop a second checklist of the functional changes, associate a risk with each (any changes which might affect data integrity scoring highly) and then work through these, providing test evidence that behaviour is as expected. And of course you do all this on your test platform, only deploying to production when you are confident all is well. All this can still take time, but it is a world away from performing a full OQ, and arguably more effective.
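To illustrate the risk-based approach, here is a minimal sketch of such a change checklist held as structured data; the change descriptions, the 1-5 risk scale and the field names are my own assumptions for the example, not a prescribed format.

```python
# Sketch of a risk-scored patch change checklist - the change descriptions,
# the 1-5 risk scale and the field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ChangeItem:
    description: str         # functional change taken from the release notes
    risk_score: int          # e.g. 1 (cosmetic) to 5 (could affect data integrity)
    test_evidence: str = ""  # reference to a test log entry or screenshot
    signed_off: bool = False

changes = [
    ChangeItem("New export format for the audit trail", risk_score=4),
    ChangeItem("Cosmetic fix to login page styling", risk_score=1),
]

# Work through the highest-risk changes first, on the test platform.
for item in sorted(changes, key=lambda c: c.risk_score, reverse=True):
    evidence = item.test_evidence or "pending"
    print(f"[risk {item.risk_score}] {item.description} - evidence: {evidence}")

# Only deploy to production once every change has been signed off.
ready_for_production = all(c.signed_off for c in changes)
print("Ready for production:", ready_for_production)
```

Whether you hold this in a spreadsheet, a document or something like the sketch above matters far less than being able to show an inspector the risk scores, the test evidence and the sign-off.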

Back to the training: I spent a couple of hours on the principles and then got stuck into practice. We discussed the relevant documentation - specifications, test logs and sign-off - and processes for formal change and version control. We worked through a specific example of an actual trial database, which is where I think things started slotting into place. I provided some document templates and, with the addition of the unit logo, off they went. From the communication I've received since, it sounds like all is proceeding well, and hopefully when the inspector calls, system validation won't be something this particular unit is fearful of.