Technology Transfer Notes #17 – Statistical comparison of samples and PPQ batches

PPQ batches are performed to demonstrate that the process under test can manufacture product consistently, and you are expected to demonstrate this on a statistical basis. I’ve often wondered how you can do that with only (probably) three batches.

It’s all very well saying that all PPQ batches have passed their in-process control and release tests and so the PPQ has passed, right? But you are required to demonstrate statistical confidence that the process was in control. And if you only performed three validation batches (based on a risk assessment of course, see my previous tech transfer note #11), how can you demonstrate statistically significant consistency?

Some of the observed batch-to-batch variation in test results comes from variation within each batch (intra-batch) and some from variation between batches (inter-batch).

In the main these tests all require some statistical knowledge. Let me just start by saying that I’m (definitely) not a statistician, so I won’t be delving into the mathematics of the statistical methods, just exploring what methods are available, but I will give a suitable link for each for the mathematically minded among you. However, if you are like me, you will probably just feed the test results into a statistical software package (such as JMP, SPSS, etc.) and let the software do the heavy lifting.

There are two main types of statistical test –

  • Those that demonstrate inter-batch variability
  • Those that demonstrate intra-batch variability

It is important to recognise, though, that statistical tests cannot give you an absolute answer; they can only give a probability, such as “I am 95% confident that 95% of the sample results will fall within this range”.
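A statement like “95% confident that 95% of results fall within a range” corresponds to a two-sided normal tolerance interval. As a sketch only (assuming normally distributed data and that scipy is available), the tolerance factor k can be approximated with Howe’s method:

```python
from math import sqrt
from scipy import stats

def tolerance_k(n, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance factor k (Howe's approximation).
    The interval x_bar +/- k*s covers `coverage` of the population
    with the stated confidence."""
    nu = n - 1
    z = stats.norm.ppf((1 + coverage) / 2)      # z-value for the coverage
    chi2 = stats.chi2.ppf(1 - confidence, nu)   # lower chi-square quantile
    return sqrt(nu * (1 + 1 / n) * z**2 / chi2)

# Example: with 30 results, "95/95" requires roughly x_bar +/- 2.55*s
k = tolerance_k(30)
```

Statistical packages report these factors directly; this is just to show where the “95% sure of 95%” wording comes from.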

There are several statistical tests that can be used such as:

  • Chi square
  • USP <905>
  • t-test
  • Analysis of variance (ANOVA)
  • Kruskal-Wallis test
  • Levene’s test.

Intra-batch variability

Chi square (goodness of fit)

Link: https://www.jmp.com/en_gb/statistics-knowledge-portal/chi-square-test.html

The Chi-square goodness-of-fit test checks whether your sample data are likely to come from a specific theoretical distribution – in other words, whether all the results from repeat samples taken from the same point at the same time are representative of the actual value. The results are compared against the ideal (required) result. The test looks at the variance between the results and gives us a way to decide whether the results fit our specification “well enough”, or whether they do not give sufficient confidence in the repeatability of the sampling / testing methodologies.
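Since the chi-square test works on counts, one way to apply it here is to bin repeat results and compare observed counts against the counts you would expect if the method were centred on target. A minimal sketch with hypothetical counts (the bands and numbers are made up for illustration):

```python
from scipy import stats

# Hypothetical: 50 repeat assay results binned into three bands
# (below target, on target, above target) vs. the counts expected
# if the results were centred on the target value.
observed = [12, 30, 8]
expected = [10, 30, 10]

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
# A large p-value (> 0.05) gives no evidence that the results
# deviate from the expected distribution.
```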

If you are looking to see if there is any variability in the results from a sample point over time, then a t-test is probably the better test to use (comparing the variability of samples taken at different time points).

USP <905>

Link: https://www.usp.org/sites/default/files/usp/document/harmonization/gen-method/q0304_stage_6_monograph_25_feb_2011.pdf

This is a compendial test which calls for tablets or capsules to be weighed, and the standard deviation and relative standard deviation (RSD) to be calculated. The standard defines specific acceptance criteria and sample numbers (up to 30 units). So, while this is a statistical test of intra-batch variation, it is only applicable in certain circumstances, although the principle can be applied to other sample types and sample numbers.
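The core of the calculation is just the mean, standard deviation, and %RSD of the weighed units. A simplified sketch on hypothetical tablet weights follows; note the full <905> acceptance-value calculation involves additional rules (reference values, k-factors) not shown here:

```python
import statistics

# Hypothetical weights (mg) of 30 tablets, label claim 250 mg
weights = [250.1, 249.8, 250.5, 249.2, 250.9, 249.5, 250.3, 249.9, 250.0, 250.7,
           249.4, 250.2, 249.7, 250.6, 249.3, 250.4, 249.6, 250.8, 249.1, 250.0,
           250.2, 249.9, 249.8, 250.1, 250.3, 249.7, 250.5, 249.6, 250.4, 249.8]

mean = statistics.mean(weights)
sd = statistics.stdev(weights)   # sample standard deviation
rsd = 100 * sd / mean            # relative standard deviation, %
```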

Inter-batch variability

The t-test is a statistical procedure that tests whether there is a significant difference between the means of two groups (e.g. two PPQ batches, or two sets of samples taken at different times).

Link: https://www.jmp.com/en_gb/statistics-knowledge-portal/t-test.html
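In a statistical package (or scipy, assumed here) the two-sample t-test is a one-liner; the batch results below are hypothetical:

```python
from scipy import stats

# Hypothetical assay results (%) from the same sample point in two PPQ batches
batch_1 = [99.2, 99.8, 100.1, 99.5, 100.3]
batch_2 = [99.6, 100.2, 99.9, 100.4, 99.7]

t_stat, p = stats.ttest_ind(batch_1, batch_2)
# p > 0.05: no statistically significant difference between the batch means.
```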

The one-way ANOVA (Analysis of Variance) is the extension of the t-test used to compare more than two PPQ batches.

Link: https://www.jmp.com/en_gb/statistics-knowledge-portal/one-way-anova.html
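Extending the same (hypothetical) data to three PPQ batches, the one-way ANOVA works the same way:

```python
from scipy import stats

# Hypothetical results from three PPQ batches at the same sample point
batch_1 = [99.2, 99.8, 100.1, 99.5, 100.3]
batch_2 = [99.6, 100.2, 99.9, 100.4, 99.7]
batch_3 = [99.4, 100.0, 99.8, 100.1, 99.6]

f_stat, p = stats.f_oneway(batch_1, batch_2, batch_3)
# p > 0.05: no statistically significant difference among the batch means.
```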

The t-test and the one-way ANOVA share one main assumption – that the data are parametric. By that I mean it is assumed that the sample results in each “group” are normally distributed (conform to a normal distribution curve).

If it is felt that this assumption does not hold, then alternatives that do not rely on normality, such as the Kruskal-Wallis test and Levene’s test, should be used.

The Kruskal-Wallis test is used to compare the results from three or more PPQ batches to determine if there are statistically significant differences between them. If you have been fortunate enough to justify the use of only two PPQ batches, then you could use the Mann-Whitney U test instead.

Kruskal-Wallis test

For each sample point, the Kruskal-Wallis test combines the results of the duplicated or repeated samples from each PPQ batch, ranks them, and statistically determines whether all the points are “likely” to have come from the same distribution by comparing the mean ranks of each batch.

Link: https://datatab.net/tutorial/kruskal-wallis-test
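Using the same style of hypothetical batch data, the Kruskal-Wallis test (and its two-group analogue, the Mann-Whitney U test) look like this in scipy:

```python
from scipy import stats

# Hypothetical results from three PPQ batches at the same sample point
batch_1 = [99.2, 99.8, 100.1, 99.5, 100.3]
batch_2 = [99.6, 100.2, 99.9, 100.4, 99.7]
batch_3 = [99.4, 100.0, 99.8, 100.1, 99.6]

h_stat, p = stats.kruskal(batch_1, batch_2, batch_3)

# With only two batches, use the Mann-Whitney U test instead:
u_stat, p2 = stats.mannwhitneyu(batch_1, batch_2)
```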

Levene’s Test

Levene’s test is another test that does not rely on the normality assumption; it is used to determine whether two or more samples have the same variance. It works on the absolute deviations of the data points from their group centre (mean or median) rather than on the original data points.

Link: https://datatab.net/tutorial/levene-test
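A sketch of Levene’s test on the same hypothetical batches; `center='median'` selects the robust (Brown-Forsythe) variant:

```python
from scipy import stats

batch_1 = [99.2, 99.8, 100.1, 99.5, 100.3]
batch_2 = [99.6, 100.2, 99.9, 100.4, 99.7]
batch_3 = [99.4, 100.0, 99.8, 100.1, 99.6]

# center='median' makes the test robust to non-normal data
w_stat, p = stats.levene(batch_1, batch_2, batch_3, center='median')
# p > 0.05: no evidence the batch variances differ.
```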

As with all statistical tests, the more samples taken and the more PPQ batches run, the more data points there are to calculate, and doing this manually can become very complex and cumbersome. It is therefore recommended that statistical software is used for these calculations. While Excel can always be used, the validation of any spreadsheet constructed could be time-consuming, whereas statistical software can generally be regarded as “off-the-shelf” software requiring minimal validation.

Control Chart

Control Charts: https://deming.org/a-beginners-guide-to-control-charts/

Western Electric Rules: https://en.wikipedia.org/wiki/Western_Electric_rules

There is, though, one “non-statistical” test commonly used, and that is the control chart (sometimes called a Shewhart chart). The chart is marked with an “average” line, an “upper control limit” and a “lower control limit”, and the value of each reading is plotted in time order. By using a simple set of rules such as the Western Electric rules (e.g. whether four out of five consecutive points fall beyond the one-sigma line on the same side of the average line), you can assess whether the process is in control or not.

Example of a control chart (The Deming Institute)

The control chart is useful for monitoring how a single parameter varies with time, but it cannot be used for multiple samples taken from the same point at the same time.
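The control-chart logic is simple enough to sketch: compute the centre line and 3-sigma limits, then scan for a Western Electric pattern such as “four of five consecutive points beyond one sigma on the same side”. The readings below are hypothetical:

```python
import statistics

def western_electric_4of5(values, mean, sigma):
    """Flag any window of 5 consecutive points where 4 or more fall
    beyond one sigma on the same side of the centre line."""
    flags = []
    for i in range(len(values) - 4):
        window = values[i:i + 5]
        above = sum(1 for v in window if v > mean + sigma)
        below = sum(1 for v in window if v < mean - sigma)
        flags.append(above >= 4 or below >= 4)
    return flags

# Hypothetical in-process readings, in time order
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.4, 10.5, 10.6, 10.4, 10.5]
mean = statistics.mean(readings)
sigma = statistics.stdev(readings)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma   # 3-sigma control limits
flags = western_electric_4of5(readings, mean, sigma)
```

In practice the centre line and sigma would come from an established baseline period, not from the same points being judged; this is purely to show the mechanics of one rule.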

About The Author:

Trefor Jones is a technology transfer specialist with Bluehatch Consultancy Ltd. After spending over 30 years in the pharmaceutical / biopharmaceutical industry in engineering design, biopharmaceutical processes, and scale-up of new manufacturing processes, he now specializes in technology transfer especially of biotechnology and sterile products.

He can be reached at trefor “at” bluehatchconsultancy.com.

Technology Transfer Notes #16 – Scaling Down

We probably all understand what scaling up a process means – increasing the batch size to reap the economies of scale – and many of us know the problems that have been encountered along the way. But scaling down? Why?

The reasons for scaling-down a process are not always understood. By scaling down, I mean replicating the larger process at a smaller or even laboratory / benchtop scale. Why would we do that and what relevance is this to technology transfer?

Scale-down principles have long been used to evaluate and understand how a process will be carried out and the parameters that surround the process. Scaling down is thus mimicking the larger scale process using a smaller process.

There are three main reasons for scaling down a process:

1. Process development, which usually starts with small-scale processes on the laboratory bench. As many technology transfer practitioners have found out – unfortunately often the hard way – many process characteristics and parameters do not scale up well or are difficult to achieve, such as mixing or heating parameters.

In order to assess the effects (or limitations) of the larger-scale process without the time and cost implications of running multiple large-scale batches, the processes can be performed at a much smaller scale, simulating large-scale parameters such as the rate of temperature rise or mixer shear rate.
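As one illustration of how a large-scale mixing parameter might be simulated at bench scale, a common (but by no means universal) criterion is to hold impeller tip speed constant. The function and dimensions below are hypothetical; other criteria (constant power per volume, Reynolds number, etc.) may be more appropriate for a given process:

```python
from math import pi

def matched_small_scale_rpm(large_rpm, large_diameter_m, small_diameter_m):
    """Impeller speed at small scale that preserves tip speed (pi * D * N),
    one common mixing scale-down criterion (assumption, not a rule)."""
    tip_speed = pi * large_diameter_m * (large_rpm / 60)  # m/s at large scale
    return 60 * tip_speed / (pi * small_diameter_m)

# Hypothetical: 0.8 m impeller at 60 rpm scaled down to a 0.1 m impeller
small_rpm = matched_small_scale_rpm(60, 0.8, 0.1)
```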

The result of such scale-down studies is that optimisation of large-batch processes can be performed more quickly and cheaply, helping ensure technology transfer is “right first time”.

Scaled-down processes can be used to either mimic the whole operation or just specific unit operations.

2. Troubleshooting of larger-scale processes (where, for instance, 10 or more small-scale fermentations can be run on the bench at a time). This is especially useful if raw materials are expensive or in short supply, or if full-scale experiments are just not practically feasible. Scaled-down studies can be used to examine the parameter values experienced as a result of out-of-specification results or process deviations.

3. Process characterisation, that is, creating and validating small-scale models of the full-scale process as a precursor to technology transfer, to gain an understanding of the process’s Design Space and to achieve reliable process characterisation. This is becoming increasingly common in the development and technology transfer of biological products.

The results of scaled-down studies can be used to support process validation studies and product licence applications; however, in such cases it is important that any scaled-down model “should account for scale effects and be representative of the proposed commercial process” [ICH Q11]. In effect, the small-scale model must be validated or “scientifically justified” against actual results obtained from the larger batch size.

Regulatory guidance on the use of small scale models is provided by ICH and FDA documents:

ICH Q8 (Pharmaceutical Development) – “an assessment of the ability of the process to reliably produce a product of the intended quality can be provided.”

ICH Q9 (Quality Risk Management) – “to assess the need for additional studies (e.g., bioequivalence, stability) relating to scale up and technology transfer.”

ICH Q11 (Development and Manufacture of Drug Substances) – “a scientifically justified model can enable a prediction of product quality, and can be used to support the extrapolation of operating conditions across multiple scales and equipment.”

FDA’s process validation guidance – “a manufacturer should have gained a high degree of assurance in the performance of the manufacturing process . . . the assurance should be obtained from objective information and data from laboratory-, pilot-, and/or commercial-scale studies.”

EMA Note for Guidance on Process Validation – allows data from small-scale studies to be submitted for a marketing authorisation if studies on production-scale batches are not available; however, it is expected that such studies be linked with available production data.

In some cases (such as viral clearance studies) scaled-down studies are actually required: ICH Q5A(R1) requires that viral clearance studies be performed using qualified scale-down platforms, including chromatography and nanofiltration, in a virology lab located outside the cGMP facility.


Technology Transfer Notes #15 – How not to carry out a risk assessment

Risk assessments are an essential part of technology transfer, but it is surprising just how often they are misused – usually to justify what is already being done rather than to see how process risks can be eliminated or, at least, mitigated. Another common failing is to recognise a risk but to “mitigate” it by saying “controlled by SOP” rather than actually doing anything about it.

However, more often than not this is due to the people charged with performing the risk assessment not understanding its relevance – or not knowing how to carry it out properly.

There are many ways of course to perform a risk assessment – and perhaps I will list these in a later note. But one of the most often used methods is the good old Failure Modes & Effects Analysis (FMEA). This method looks at ways in which the examined process could fail, the effect of this failure and how these effects could be eliminated or mitigated.

Often the analysis points to minimising the effects of a risk rather than trying to eliminate its cause – and this is by far the most common failing (back to “controlled by SOP”).

Before I progress – a quick description of what an FMEA is:

Failure Mode and Effects Analysis (FMEA) is a structured approach to discovering potential failures that may exist within a process and the consequences of those failures. A risk score – the Risk Priority Number (RPN) – is derived by multiplying together individual scores assigned to the severity, frequency of occurrence, and detectability of an identified potential failure in the process. A table such as the example below is often used to perform this analysis.

Example FMEA excerpt
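The risk-score arithmetic itself is trivial; the scores below are hypothetical and the 1–10 scales are just the most common convention:

```python
def rpn(severity, occurrence, detectability):
    """Risk Priority Number: each score is typically on a 1-10 scale."""
    return severity * occurrence * detectability

# Hypothetical scoring of a "feed volume too low" failure mode
score = rpn(severity=7, occurrence=3, detectability=4)
```

Remember (as discussed below) that the RPN is only a relative ranking within one study, not an absolute measure of risk.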

So – to look at some of the ways an FMEA is “mis-performed”:

  1. Performed by one person. No single person can assess all the potential failure modes, or their effects, from all manufacturing operations perspectives, although I have seen many people try. A cross-functional team incorporating people from manufacturing, engineering, QA, MSAT and process development is needed.
  2. Not allowing sufficient time. It will take longer than you think. If the time allocated is in “hours” rather than “days”, you are probably not being thorough enough.
  3. Assuming the FMEA and its creation are the responsibility of the QA department. The QA department does not own the production process and, to be honest, doesn’t know the process as well as the production department. The FMEA should be created and owned by the person responsible for the production process.
  4. Performing the FMEA at the wrong time. Quite often the FMEA is seen as a box-ticking exercise that can be carried out at any time. To be effective, the FMEA should be performed at the beginning of manufacturing process design, since its purpose is to assess the adequacy of that design. Any shortcomings found after that point could entail redesign of the production process – costly and time-consuming.
  5. Confusing failure mode with failure cause. For instance, in the example FMEA given, “feed volume too low” is not a failure cause – it is a failure mode. The cause is the incorrectly set pump speed.
  6. Mis-using or “fixing” the RPN. This is a big one – it is assumed that a high RPN (severity x occurrence x detectability) represents a big risk, but that is not necessarily so. Firstly, the RPN is only a relative number and is only meaningful within that study; unless very strict scoring methods are defined, scores from one study cannot be compared with another. The RPN only gives an indication of which failure modes should be tackled first. I have seen many instances where scores have been “massaged” to bring RPN values under a certain “action” threshold and then used as proof that the whole process is “low risk” and no action is required. FMEAs are only useful if carried out in good faith. Personally, I don’t believe a potential failure that could kill a patient (say, severity score 10) could ever be an acceptable risk, even if it would happen only very rarely (occurrence score 1) and would “almost” certainly be detected (detection score 1).
  7. Assuming all incoming components and excipients are within specification and error-free, and so excluding them from the FMEA. They are not. Indeed, in my experience this has been one of the most significant and costly sources of process failure.
  8. Recommended actions that don’t address the root cause of the failure mode. It’s all too easy to dismiss identified risks by simply commenting “acceptable risk” or “controlled by SOP”.
  9. Not repeating the risk assessment after remedial actions have been taken. FMEAs should not be a one-off exercise – they should be repeated once remedial actions have been taken, and can be used to demonstrate continual improvement.


Technology Transfer Notes #14 – Transferring Operational Ranges (CPP, PAR, NOR)

I’m sometimes asked whether a transferred process should use the same Critical Process Parameter (CPP), Proven Acceptable Range (PAR) and Normal Operating Range (NOR) values, and what parameter values should be used in the manufacturing documentation for such a newly transferred process. Two questions really, but my thoughts are as below.

Transfer the same Critical Process Parameter (CPP) & values?

The first reaction may well be “yes – of course”: it’s the same product and manufacturing method being transferred. That may well be true if you are using the same equipment and the process is being manufactured at the same scale, and some parameters, such as temperature or pressure, do transfer easily. However, even with the best of intentions there are often minor differences in some of the equipment used between sending and receiving sites (the sending site’s older equipment size / model is no longer available or has been “updated”), and sometimes the opportunity to “optimise” the process during the transfer is irresistible.

While a particular CPP will remain a CPP unless the process has been redesigned, some CPPs will likely have changed value (e.g. the mixing speed and mixing time of a liquid mixer) due to even slight differences in impeller type or positioning. In such cases the Proven Acceptable Range (PAR) values will have to be re-evaluated.

This applies even more to significant changes, such as a change from, say, a vertical to a horizontal granulator, where previous studies cannot be used to establish the new PAR values for each affected CPP and these need to be re-established experimentally. This can be done by performing a series of engineering runs, but it is more usual to study only the affected unit operations in separate “support studies”, such as separate granulation or mixer studies, to define new PAR values. The material produced during these studies is often used to demonstrate comparability with material previously produced.

If process changes are significant enough, then full-scale demonstration batches may have to be run and CPPs re-evaluated. Such batches and studies are often called bridging studies (linking the previous manufacturing parameters with the new ones).

I’ve often seen the Normal Operating Range (NOR) values being transferred as part of the technology transfer process, and to my mind this is an absolute no-no. Why? Well, in part it just demonstrates that the receiving site doesn’t understand what this value represents or how it is established.

The NOR is primarily the natural variation around the target value as experienced at the receiving site. This natural variation is unique to the receiving site’s equipment, environment, process control systems and people, and it has to be measured at each specific manufacturing site. The NOR is not a limit; it is a measure of that site’s capability and acts as an alert point for that site’s specific manufacturing process. During technology transfers it is often the case that the receiving site does not yet have sufficient process experience to assign it.

NOR values are used to:

  • Account for natural variation.
  • Ensure that any abnormal variations are detected before they become a process issue.
  • Demonstrate process control.

NOR values can be assessed at the receiving site by using:

  • Data from engineering runs.
  • Previous equipment history (historical data) – if the receiving site has no history to base a value on, the sending site’s NOR value may give a guide (but a general guide only, as a starting point).
  • Manufacturers’ equipment capability specifications.
  • Instrument / sensor calibration accuracy.

And should take into account:

  • Some computer-controlled equipment can maintain tight limits.
  • If manual input is required, the difficulty, ease or precision capability of the operation.
  • The accuracy / resolution of the equipment, its readout, or the analytical instrument.
  • Digital / rounding errors.
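As an illustration of how a starting NOR might be derived from engineering-run data, one common convention (an assumption here, not a regulatory requirement) is the mean plus or minus three standard deviations of the observed readings:

```python
import statistics

# Hypothetical temperature readings (degC) collected during engineering runs
readings = [37.1, 36.9, 37.0, 37.2, 36.8, 37.0, 37.1, 36.9, 37.0, 37.1]

mean = statistics.mean(readings)
s = statistics.stdev(readings)
# mean +/- 3 sigma as a starting NOR, to be refined as experience accumulates
nor_low, nor_high = mean - 3 * s, mean + 3 * s
```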

Although NOR values cannot (or should not) be transferred, they are usually required as part of the receiving site’s product licence application. This means that it is incumbent on the receiving site to monitor process variations during pre-PPQ studies and runs (and indeed during PPQ runs) to determine the NOR values for its equipment on its site.

Once you have transferred a process, what process parameter values should you include in your manufacturing documents?

Each PAR value will have its own study-derived or product-specification target value. However, these target values are usually expressed in manufacturing documentation as a range, e.g. net weight ±10 g.

It is not advisable to use the CPP / PAR limits as the documented range – this builds in too much process variability from the start – while using the NOR values means that you will be working at the edge of the process equipment’s capability, which is likely to cause unnecessary out-of-specification alerts or deviations. The values you use in manufacturing documentation should be tight enough to ensure control, but wide enough that natural variation doesn’t result in unnecessary deviations.

So any value between the NOR and the PAR would be OK. The temptation is to use a documented value as close to the NOR as possible – but that can be counterproductive. A value 5% beyond the NOR, or ±10% of the target value, might be used as a starting point. Sometimes it may be best to go a little wider, provided the PAR isn’t approached or exceeded. You can always tighten this once process experience is gained and the NOR is understood; it is much easier to justify tightening a range than seeking to widen it (as my QA colleagues, I’m sure, will agree). Widening limits after process failures will only leave you open to accusations of widening them to avoid deviations.
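These heuristics can be sketched as a small calculation; the function name and the figures are hypothetical, and the 5% factor is just the starting point suggested above:

```python
def documented_range(target, nor_half_width, par_half_width, factor=1.05):
    """Pick a manufacturing-document range slightly wider than the NOR
    (here +5%, one possible starting point), capped so it never
    reaches the PAR."""
    half = min(nor_half_width * factor, par_half_width)
    return target - half, target + half

# Hypothetical: target 500 g, NOR +/- 8 g, PAR +/- 25 g
low, high = documented_range(500, 8, 25)
```

Whatever starting point is chosen, the range should be revisited once real process-capability data accumulate.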


Technology Transfer Notes #12 – Technology Transfer of Clinical Trial Manufacturing

Firstly, what is technology transfer? Putting it simply, it includes everything that’s needed to move a technical process from one location to another. See my Blog #1 for more detailed definitions.

While most discussions are around the transfer to commercial-scale manufacturing, it should be remembered that process and assay information (tech) transfer also occurs at all points through a product’s life cycle from development to end of production life, including through the clinical trial phases.

Based on: “Unraveling the Complexities of Technology Transfer” – Thomas Chattaway, September 2020, in BioProcess International.

However, transferring products still in the clinical phases brings specific problems and challenges:

The main one is that the manufacturing process has not yet been fully defined – it can still be modified and developed (even at Phase 3) and is subject to change – and analytical tests may not have been fully developed, validated, or verified. Indeed, some of these aren’t strictly necessary for early clinical manufacturing – filter validation, for instance – although they should be performed as early as possible.

Some of the challenges can be:

  • Product specification not yet defined.
  • Analytical methods not yet finalised – or even been fully defined.
  • Product may not yet have been characterised.
  • Manufacturing process not developed – indeed in its current form it may not even be GMP compliant.
  • Current manufacturing process may not be scalable.

Key documents and items of information traditionally relied on for technology transfer to commercial manufacturing scale may only exist in draft form, if at all.

Let’s take a look at some of the most crucial tech transfer points between these stages:

Preclinical to Phase 1

Challenges:

  • Scalability of the process; whether it can be replicated at scale
  • Needed materials, which may not comply with GMP requirements.
  • Equipment between sites is likely to be different.
  • R&D is often based in another country, with a different culture and time zone.
  • “Tribal” knowledge may be prevalent and not always be captured on paper – a prime task of technology transfer at this stage is to try and capture all of this knowledge. The problem is that a massive amount of data is needed to encompass all the different procedures, equipment setups, protocols and reports, methods, output, validation, parameters, and equipment guidelines.
  • Remote workforce, which has become the new normal. Teams are now distributed across sites and even continents, making ad hoc meetings and document reviews difficult.

At the preclinical phase, there is little or no process information. Product characterization, information, procedures, and early stability data may be present.

One of the key roles of technology transfer at this stage is to ensure that the process being transferred is GMP compliant, or at least capable of being made so (yes, I’ve heard R&D staff say “but GMP doesn’t apply to us”, and in a sense they can be right), but failure to ensure that a product can be manufactured in a GMP-compliant manner at this stage is planning for failure. One example could be reliance on raw materials or excipients that were either not GMP grade or could not be sourced as GMP-compliant materials, wasting time and money re-formulating the product.

Other examples could be:

  • Equipment that could not be scaled up (e.g., rotary dryers)
  • Processes that could not be performed easily or at all at manufacturing scales (e.g., fast heating or cooling rates required – O.K. at test tube scale, but unfeasible at 2,000L scale).

So, the main role of technology transfer at this stage is probably to ensure that as much data as possible is captured about the product and process (at this stage what is important or not is usually unknown) and to ensure that the process being transferred is at least potentially GMP compliant.

Perhaps a word of caution is appropriate here: you should be careful to protect your intellectual property, not only regarding the product but also in respect of the analytical and process methods used, as it would otherwise be possible for any entity performing further development work to patent the methods themselves.

Early clinical phase transfers

This is where the development team’s hard work starts: turning an R&D process (possibly non-GMP compliant) into something that resembles a GMP / regulatory compliant process that can be scaled up through pilot plant to commercial scale, and produces a product that is safe for animal and human trials.

Technology transfer at these stages may require:

  • A Quality Target Product Profile (QTPP) to have been created.
  • Product characterisation data to be available.
  • Critical Quality Attributes to have been determined.
  • Preliminary Critical Process Parameters to have been developed (knowing what they are, even if numeric values are not yet available).
  • A draft process description, although at early clinical phase stages this is expected to change significantly before a commercial scale process is defined.
  • Details of development work and batch records of previous manufacturing, especially at pilot scale (what went right, but also what went wrong), plus any process development reports.
  • Scale up and regulatory strategies may also be developed at this stage.
  • Analytical methods to be used – these should be developed in parallel with the process to ensure that appropriate tests are defined and that the sensitivities required by the developing process are achievable. This may require some preliminary values for critical process parameters to ensure analytical requirements and capabilities are matched. For a Phase 1 biologic it is not expected that all methods and specifications be fully defined; analytical methods should be in place, but at this early stage they do not need to be validated.

Process validation at this stage of technology transfer can seem a long way off – something to be thought about later. However, it is worth emphasising that planning for eventual process validation should be incorporated into the technology transfer at this stage of product development, as decisions taken now can greatly affect the ability of the process to be validated without having to repeat studies incorrectly performed – or indeed not performed at all.

Late clinical phase (Phase 2 and 3) transfers

At these stages the product should be characterised, and the process should be being developed in a GMP-compliant way, taking all the data generated to date and using it to replicate the early-phase clinical process under GMP conditions, using GMP-compliant materials.

Quite often the process changes during these phases and it is vital that bridging studies are performed to demonstrate that the product is still comparable with the product from earlier clinical stages.

The process description should be in late draft stage, and process Critical Quality Attributes and Critical Process Parameters should have been defined (even if their values are still to be confirmed), the Design Space should be in draft state.

Risk assessments should be performed during these phases, and any remaining studies still required to “fill the gaps” in process knowledge (e.g. confirmation of formulation, scale-up or scale-down studies) should be planned and performed. Analytical methods, if not compendial, need to be defined and any required validation planned.

If technology transfer is performed during these stages, the main aim should be to transfer both what is known and what is yet to be defined or determined.

By the end of these clinical phases all studies, process parameters, process descriptions and analytical methods should be complete and validated as necessary, as this will be the process that is used for the Process Validation.

Commercial manufacturing stage

At the commercial stage the process validation studies will have been completed, product licences granted, and the clinical stages completed. However, technology transfer has not – contrary to popular belief – been completed. Demonstrating that the transferred process is controlled and stable must surely be the aim of a successful transfer, and this requires careful post-process-validation monitoring and analysis as part of Continued Process Verification (CPV). Of course, I’m not suggesting that, like CPV, technology transfer never actually ends – just that a statistically significant number of initial commercial batches be studied to demonstrate consistency and control. How many is “significant” is probably a topic for another blog.


About The Author:

Trefor Jones is a technology transfer specialist with Bluehatch Consultancy Ltd. After spending over 30 years in the pharmaceutical / biopharmaceutical industry in engineering design, biopharmaceutical processes, and scale-up of new manufacturing processes, he now specialises in technology transfer, especially of biotechnology and sterile products.

He can be reached at trefor ”at” bluehatchconsultancy.com.

Blog #11 – How many Process Performance Qualification batches are required?

The question of how many batches have to be run is a recurring one, not always easy to answer, and there is a great temptation to just say three (3)!

Although industry has typically used three batches during the process performance qualification (PPQ) phase to demonstrate that a process is capable of consistently delivering quality product, this “rule of three” batches or runs is no longer appropriate, and the regulations no longer prescribe the number of runs required for process validation activities.

The FDA simply states that “each manufacturer should judge whether it has gained sufficient understanding to provide a high degree of assurance in its manufacturing process”, while the EMA guidelines state that “a minimum of 3 production scale batches should be submitted unless otherwise justified”.

ICH Q7 (12.50) says “3 consecutive batches should be used as a guide, but . . . “

The past typical use of three batches has always seemed very much a “finger in the air” approach to me: how (statistically) confident can you be after only three batches? A statistician friend tells me that for 95% confidence and 90% reliability, around 30 runs would be required; however, that is neither practical nor cost effective.
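For the mathematically minded, figures like the “around 30 runs” quoted above are commonly derived from the zero-failure “success-run” relation, n ≥ ln(1 − confidence) / ln(reliability). A minimal sketch (the function name is mine, not a standard API):

```python
import math

def success_run_batches(confidence: float, reliability: float) -> int:
    """Smallest number of consecutive passing runs n such that
    passing all n demonstrates the stated reliability at the
    stated confidence: 1 - reliability**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(success_run_batches(0.95, 0.90))  # → 29, in line with the ~30 quoted
```

Note how quickly the required run count falls as the reliability target is relaxed, which is one reason the pre-knowledge-based approaches discussed below are attractive.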

So – how do you determine how many batches to run?

There are two main approaches:

a) Risk based.

This uses a comprehensive risk analysis to assess how much process variation risk remains after applying existing process knowledge and process design data.

This is performed by establishing or assessing the main sources of process risk, perhaps using such techniques as Failure Modes and Effects Analysis (FMEA).

Another method that can be used to identify and characterise the significant process factors and parameters is based upon design of experiments (DOE). Again, this method can require a large number of pre-PPQ DOE experiments to identify the significant and interactive factors, and can require a relatively high number of PPQ batches to be run (two significant interactive factors would require 4 PPQ runs, while three would require 8 PPQ runs, etc., to cover the minimum and maximum values of each parameter in combination).
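The run counts above (4 runs for two factors, 8 for three) come from a two-level full factorial design covering every min/max combination of the factors. A small sketch; the factor names and values here are purely illustrative:

```python
from itertools import product

def full_factorial(factors: dict[str, tuple[float, float]]) -> list[dict]:
    """All min/max combinations of the given factors: 2**k runs for k factors."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]

# Hypothetical factors with (min, max) levels
runs = full_factorial({"temperature_C": (30.0, 37.0), "pH": (6.8, 7.4)})
print(len(runs))  # → 4
```

Each additional significant factor doubles the run count, which is why a DOE-driven PPQ strategy can become expensive for complex processes.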

A risk method is not statistically based and may lead to a high number of PPQ batches being required if more than two sources of variation are identified, and for that reason alone is probably not practicable for anything other than simple processes.

b) Statistical / pre-knowledge based.

Where practical and meaningful, a statistical method of determining the number of batches is recommended, although there is no standard industry approach.

The statistically based approach relies on calculations targeting capability, tolerance intervals, or the overall reliability of meeting CQA acceptance criteria.

The concept of this approach is that sufficient historical data from development, engineering or pre-PPQ runs is obtained to develop a deep enough understanding of the process that you have high enough statistical confidence to predict the behaviour of the PPQ batches; the PPQ runs themselves are then confirmatory runs of your predicted process performance. The number of PPQ runs can be as low as two if your confidence level can be shown to be high enough.
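As a flavour of the capability calculations mentioned above, here is a minimal sketch of a long-term process performance index (Ppk) computed from historical batch results. The assay values and specification limits are hypothetical, and a real submission would use the full tolerance-interval machinery of a statistical package (JMP, SPSS, etc.) rather than this bare-bones version:

```python
from statistics import mean, stdev

def ppk(results: list[float], lsl: float, usl: float) -> float:
    """Process performance index: distance from the mean to the nearest
    specification limit, in units of three sample standard deviations."""
    m, s = mean(results), stdev(results)
    return min(usl - m, m - lsl) / (3 * s)

# Hypothetical assay results (% label claim) from pre-PPQ batches,
# against specification limits of 95-105%
history = [99.2, 100.1, 99.7, 100.4, 99.9, 100.2, 99.5, 100.0]
print(round(ppk(history, lsl=95.0, usl=105.0), 2))  # → 4.15
```

A Ppk comfortably above the commonly cited 1.33 threshold is the kind of evidence used to justify a reduced number of confirmatory PPQ runs.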

The downside of this approach is that it requires a relatively high number of batches to have been manufactured and sufficiently well monitored to be able to build up this statistical “model” of the process prior to PPQ.

There is a third, “structural” approach, with the number of PPQ batches determined by reference to the process’s complexity, dosage form strengths, and number of equipment trains. This can include bracketing and matrix strategies and may involve separating groups of unit operations into separate PPQ protocols. However, bracketing and matrix strategies are difficult and time-consuming to justify, and not all regulatory bodies will accept such a method.

A statistical approach to determining the number of PPQ batches is often used in combination with risk-based or structural strategies.

So – to come back to the original question – how many PPQ batches are required?

The number of batches should be based on the variability of the process, the complexity of the process / product, process knowledge gained during development, supportive data, and the overall experience of the manufacturer. As such there can never be a single answer to this question, but in general three (3) consecutive batches would be the general answer and is the one usually chosen by industry. Even this should be justified, but I rarely see it done. These should be full-size commercial batches; however, some process validation studies may be conducted on pilot-scale batches if the process has not yet been scaled up – but again a justification of this approach is required.

However, if you have extensive process development history and data and can write a well-thought-out justification, then fewer than 3 batches may be acceptable.

If you have little or no prior knowledge of the equipment, process or product then it would be wise to plan on more than 3. Likewise, if you have a complex process or are looking to use a matrix style approach then the likelihood is that you will need significantly more than 3.


Tech Transfer Blog #10 – Engineering runs – Just an expensive Insurance policy?

An engineering run can also be called many other names, such as engineering trial run, engineering lot, practice run or demonstration run, but they are usually non-GMP runs used to demonstrate that the manufacturing equipment and processes work as required and that the end product can meet the required quality specifications.

Engineering batches do not have to use the actual API but may simply be run with media, buffer, and other components, often called a “water batch.”

However, the term “engineering run” can also be used for any non-GMP trial, for example commissioning runs following the installation of new equipment or the installation of a new manufacturing line. It should be noted though that all validation runs must be run under full GMP conditions and are thus not engineering runs.

The scale of an engineering trial can vary according to the purpose and complexity of the trial; however, engineering runs are not mandatory, just a good idea in most situations.

From the Technology Transfer point of view, engineering runs can be used for:

  • Demonstrating that the manufacturing equipment itself works, such as being able to fill 2 ml vials at the speed required. A water run can be used in this respect, although the use of product is recommended to demonstrate that viscosity or foaming issues would not be a problem. If the actual product cannot be used due to scarcity, cost or toxicity concerns, then a substitute with close physical or chemical properties could be used.
  • Demonstrating that individual process steps or unit operations work as required and that the process step produces product to the right quality.
  • Demonstrating that individual process steps work as part of an integrated process.
  • Optimising process equipment.
  • Developing or optimising manufacturing parameters.
  • Optimising sampling regimes.
  • Finalising the process control strategy.
  • Identifying and resolving any potential issues before the formal cGMP documentation and activities. Challenges will inevitably be encountered before or during manufacturing, and addressing these in a non-GMP situation is easier; an engineering batch can help avoid many predictable and unpredictable issues.

Quite often a transferred process will be new to both operators and engineers, and an engineering run allows for familiarisation and training of operational staff. And don’t forget that quality staff will also need to become familiar with sampling methods and actual product to use for analytical process and verification activities.

Rarely are such process-related Standard Operating Procedures (SOPs) and manufacturing documentation such as batch records complete, error-free and ready for use prior to GMP use, and engineering batches are often used to “red line” these documents before they are used in anger.

Engineering batches can be of any scale, from “pilot” scale up to full scale, depending on what aspect of the process is to be studied and whether the full process or only part of it is to be run.

In fact, an engineering run can be run at any time before or after the product has been launched such as for process improvement purposes.

As stated before, engineering runs are not mandatory but form part of the company’s risk assessment process, with the balance depending on project costs and timelines versus confidence in the GMP production process and brand reputation.

It’s a Risk Thing

If something goes wrong with the GMP run, the consequences could be severe, but there is no easy way to attribute a hard number to the amount of time or money that could be lost; it partially depends on the stage of the process at which the error occurs. Moreover, if you are using a CMO, remember that many CMOs book GMP slots back-to-back, so if an organization misses its scheduled GMP run, another slot may not be immediately available.

If you have a lot of confidence in the process development work, you already have experience with this process in other plants and at other scales, and the product is easy to work with, then the risk may be in your favour. However, it is a mistake to assume that a particular drug product will run the same as similar drug products. One common issue is viscosity, where mixing or filling systems must be adjusted to accommodate the characteristics of the product.

If process unit operations can be run independently, then it might make sense to execute engineering runs only on the higher-risk operations, particularly if the other operations use a well-known or relatively common process.

Disadvantages: Of course, there are also disadvantages associated with engineering runs in that they are run at GMP scale and, as such, demand the same resources as a GMP run, meaning they can be costly and time-consuming.

If viable material is used to execute the run, the resulting product usually is lost, since it was not produced according to GMP standards.

Engineering runs can use scarce single-use media, an issue all too frequent during pandemic manufacturing operations.

Engineering runs can be viewed as a very expensive insurance policy.

Blog 9 – Technology Transfer Success Criteria

What do I mean by the “Success Criteria” – often called acceptance criteria – for technology transfer, and why do you need them?

The WHO Technical Report Series, No. 961, 2011, Annex 7 (WHO guidelines on transfer of technology in pharmaceutical manufacturing) considers Technology Transfer to be successful if there is documented evidence that the Receiving Unit (RU) can routinely reproduce the transferred product, process or method against a predefined set of specifications, as agreed with the Sending Unit (SU).

The key points here are “documented evidence” & “predefined set of specifications”.

WHY?

Well, as is frequently stated, the key to successful technology transfer is “communication”. You would never set off on a journey without knowing where you were heading, and it’s the same for technology transfer: success criteria define the journey and its end point. Moreover, predefining and agreeing these criteria provides the communication for everyone to understand the common goal and their part in achieving it.

Of course, defined success criteria also have other major benefits:

  • Defines the required deliverables – what and by whom.
  • Defines responsibilities.
  • Defines the point at which responsibility for the manufacturing moves from Technical Transfer to Manufacturing. This quite often is a commercial and legal milestone triggering legal responsibilities and payments. And here, documented evidence of acceptance criteria being met is key.

This last point is quite important: many times I have seen technology transfer projects just drag on, with no one actually knowing who was responsible for product manufacture, and completion of technology transfer becomes a moving target.

Sometimes defining the success / acceptance criteria is relatively straightforward, such as successful completion of all required PPQ batches, but it may not be so easy if the technology transfer is from, say, research and development to clinical trials, where the transfer is part of a continuing process.

If you don’t define the success criteria, how can you tell if the technology transfer has been successful?

PREDEFINED

The success criteria should be defined, documented, approved and agreed by both sending and receiving sites at the start of the technology transfer project. More importantly, they should be clear and unambiguous – there should be no doubt about whether the success criteria have been achieved or not. The document should contain:

  • Scope and objective 
  • Resources and budget 
  • Timeline and milestone dates 
  • Roles and responsibilities 
  • Key deliverables

For transfer to commercial operations the focus would be on the stages towards being ready for process validation, but other aspects of the technology transfer may also be considered, such as

  • Initial risk assessment
  • Knowledge Transfer
  • Analytical TT
  • Packing TT
  • Design Transfer
  • Cleaning Validation Plans for Process Validation.

Once the Technology Transfer is completed evidence that the transfer is successful (i.e., has achieved its acceptance criteria) can be documented in a summary technology transfer report which should summarise the scope of the transfer, the critical parameters as obtained in the SU and RU (preferably in a tabulated format) and the final conclusions of the transfer. Possible discrepancies should be listed and appropriate actions, where needed, taken to resolve them [Ref: WHO Technical Report Series, No. 961, 2011 Annex 7 WHO guidelines on transfer of technology in pharmaceutical manufacturing]

Memo 8 – tech transfer documentation

Documentation is a keystone of technology transfer, so in this memo I thought I would take a quick look at some of the baseline documents that would be needed from both the sending and receiving sides. I believe the FDA once said that “if it isn’t documented then it’s only a rumour, and we don’t deal with rumours”.

I once made a list of documents / work packages that would probably be required for a vaccine technology transfer and listed some seventy five (75) work packages that would be needed.

Each project is different and has different requirements, but I have tried to list some of the key documents below.

At the onset of the project the roles / responsibilities and scope of the project should be defined. You really shouldn’t start on a journey unless you know where you are heading. Unless of course chaos and uncertainty are your food of life.

  • Quality and Technical agreements – legal documents that define specific quality and technical parameters and responsibilities. It is crucial to set clear expectations and responsibilities between partners to avoid confusion and/or conflict later.
  • “Technology Transfer Charter” – or at least a Project Scope and any required / anticipated timeline in the form of a Technology Transfer Plan. Projects without a scope and anticipated timeline will be subject to constant amendment and change, doomed to drift on forever like the Flying Dutchman. The charter clarifies the technology transfer in sufficient detail for all parties to understand the scope of work, their role, the timing, and the resources needed.

Once the project has been initiated you will need to know what to make and how to make it – the details of the product and the process are documented in:

  • Research and Development Reports – historical data on the pharmaceutical development of a new drug substance and drug product at each stage, from early development to the final application for approval, including quality profiles of manufacturing batches (with stability data).
  • Process Flow diagram / draft process description: Describes the manufacturing process in detail and will be used as a reference source for all parties.
  • Critical quality attributes (CQAs): typical properties or characteristics that should be within an appropriate limit or range to ensure the desired product quality.
  • Critical process parameters (CPPs): generally identified by assessing the extent to which their variation could impact the quality of the drug product.

CQAs and CPPs are often detailed in a “Control Strategy” document.

  • Bill of materials – a list of all components and where in the process they are used.
  • Analytical methods to be used – the results of the analyses are used for validation and comparability assessments as well as for the release of products from the transferred process. These methods should include test methods for drug substances, intermediates, drug products, raw materials, and components.

The above are sometimes collected in the form of a “technology transfer file” which may include additional detail such as safety, environment / stability, packaging (cold chain requirements etc), cleaning processes, shipment characteristics.

  • Technical gap analysis: This is a formal documentation of the assessment of known and potential gaps between the donor and receiving sites’ capabilities. Quite often this can take the form of a Gap Risk Assessment.

After the details of the process and any gaps have been identified and understood:

  • Process risk assessment (such as a process FMEA). Where are the risks in the receiving site process, and how can these risks be evaluated, eliminated, or mitigated? In my experience this is poorly performed, often treated as just a tick-box exercise or a justification as to why a high-risk item is really just a low risk. This item should be regularly updated as process knowledge increases but rarely is.
  • Detailed / updated process description.
  • Detailed project plan.
  • Sampling plan for both routine and non-routine (e.g., process validation) samples.
  • Supporting studies – there can be many of these, depending on the type of product being manufactured:
  • Media fills (process simulations) for sterile products.
  • E&L (extractables and leachables) assessments (especially important if single-use components are used).
  • Filter studies
  • Container closure integrity studies
  • Dissolution studies
  • Validation activities: These are usually detailed in the site Validation Master Plan (VMP) and can give rise to various sub-plans:
  • Cleaning validation plan
  • Process validation plan

PPQ preparation & execution

  • SOPs – if not already covered by existing SOPs; training plans and records for these should also be current.
  • Rationale for the number of PPQ batches to be manufactured. There is no longer a requirement to perform 3 PPQ batches; however, whatever number is chosen, it must be substantiated by a science-based rationale.
  • Protocols – for each activity performed, especially for those validation activities described in the Validation Master Plan. All validation protocols must include pre-determined acceptance criteria.

Post PPQ

  • PPQ report – listing results (and any deviations) from all PPQ batches. If any PPQ batches have been invalidated, then this and the reason why should also be disclosed.
  • Technology Transfer Report – this can be a full detailed description of all technology transfer activities and results but is often simply an update of the Integrated Technology Transfer Strategy (ITTS) / technology transfer protocol / Technology Transfer Charter, demonstrating that all the agreed activities have been performed and that all acceptance criteria have been met. This can in effect be the end-of-project or handover document.
  • Data recording list – online and offline data to be monitored and recorded during the process, and how this will be recorded and assessed as part of the Continued Process Verification.
  • Deviation inventory – detailed description of the deviations, their status, and reporting of the impact on product quality.

Technology Transfer Blog #7 – Risk Assessments (use & misuse)

The recently published (August 2022) EU GMP volume 4 Annex 1 introduces several changes. In particular there is an increased requirement for the use of risk assessment and risk management methodologies.

Risk assessments are probably among the most misused and misunderstood pharmaceutical activities.

Firstly, what is “risk assessment”?

This has been defined by the FDA as “a systematic process of organizing information to support a risk decision to be made within a risk management process. It consists of the identification of hazards and the analysis and evaluation of risks associated with exposure to those hazards.”

ICH Q9 (Quality Risk Management) defines this identically and focuses the behaviours of industry and regulatory authorities on the two primary principles of Quality Risk Management, which are:

  • The evaluation of the risk to quality should be based on scientific knowledge and ultimately link to the protection of the patient; and
  • The level of effort, formality and documentation of the Quality Risk Management process should be commensurate with the level of risk.

Ideally, risk assessments are used to identify areas of process risk or weakness, allowing the company to identify, prioritise and focus their resources on the most important risks and to eliminate them or implement mitigation activities.

Current Issues

However, all too often these are not properly performed, seen as a “tick box” activity, or even used to create or validate a reason or excuse as to why some activities don’t need to be done. I was once asked, “If we upgrade our filling machine, will we have to revalidate it, or can we just risk assess it?”

Risk assessments are typically used at the time of process sign-off or approval from the customer, then “archived”, and thus fail to be properly updated and communicated to the team.

The resulting “risk number” or “risk priority number” is usually the result of arbitrary assignment of severity and occurrence, with risks sometimes manipulated to try to demonstrate that a process or system is “low risk” or, worse still, failure modes intentionally ranked lower than their true risk in order to avoid required improvement activities.
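For context, the risk priority number (RPN) used in FMEA is conventionally the product of severity, occurrence and detection scores, each on a 1–10 scale. This minimal sketch (the failure modes and scores are entirely hypothetical) shows how a moderate-severity but frequent event can outrank a severe but rare one, and hence how sensitive the ranking is to the scores chosen:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number on the conventional 1-10 FMEA scales."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return severity * occurrence * detection

# Hypothetical failure modes: (name, severity, occurrence, detection)
modes = [("filter breach", 9, 2, 3),
         ("mislabelled vial", 7, 3, 2),
         ("pH drift", 5, 6, 4)]
ranked = sorted(modes, key=lambda m: rpn(*m[1:]), reverse=True)
print([(name, rpn(s, o, d)) for name, s, o, d in ranked])
# pH drift (RPN 120) tops the list despite having the lowest severity
```

A small nudge to any one score can reorder the list, which is exactly why arbitrary or manipulated scoring undermines the whole exercise.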

I’ve also seen them used to try to retrospectively justify an action or decision already taken.

In the main, risk assessments are only ever as good as the team behind them, and this is where the majority of risk assessments fall down:

  • The amount of time needed to complete a comprehensive risk assessment is almost always underestimated.
  • The team creating the risk assessment is limited. Issues beyond the team members’ knowledge are unlikely to be assessed, detected or resolved, so it stands to reason that a team with widely diverse experience, knowledge and disciplines performs better. In the past I have seen risk assessments written by a single person – not ideal, as it negates the team concept, which is one of the core values of risk assessment tools such as FMEA.

The wrong risk assessment tools for the task are often used. While there are many risk assessment methodologies and tools available, unless they are used by those with some expertise in the field of risk assessment, the tools recommended by ICH should be used:

  • Failure Mode Effects (Criticality) Analysis (FMEA & FMECA)
  • Fault Tree Analysis (FTA)
  • Hazard Analysis and Critical Control Points (HACCP)
  • Hazard Operability Analysis (HAZOP)
  • Preliminary Hazard Analysis (PHA)

One weakness of these tools is that they concentrate on the consequence of the risk and do not relate it to the source of the risk event, or the circumstances linking the two.

However, the biggest issue with many risk assessments is that they only assess the “risk” of an end result happening, without understanding that risks are not just events. Risks are fundamentally the relationships between events. To manage risks, you must focus on understanding the system mechanisms that control the cause-and-effect relationships.

Of course, there are many other tools, such as “Bow Tie” assessments used mainly in engineering-based industries. Ishikawa diagrams can also be used, but while these are useful in identifying possible causes for an effect or problem, they are not usually suitable as a risk assessment tool.

The Upside.

Although I have looked at the issues surrounding the misuse of risk assessments, I would not like to give the impression that I am “anti-risk assessment”; on the contrary, I believe that risk assessments, if performed correctly, provide a structured and easy-to-understand method of identifying potential risks and communicating them to all, from senior management to those on the shop floor.

When used correctly, risk assessments are not only a regulatory requirement; they provide you with the right information to make the right decisions and prioritise the right actions. They provide a structured and thus consistent way of identifying probable design or process failures that impact product quality, process reliability, and safety or environmental hazards.

Risk assessments also:

  • Provide a method for prioritising the importance of potential risks, and indeed allows you to choose which risks are worth accepting and which are not.
  • Provide a method for capturing “lessons learned” about failures or potential failures from similar situations and processes as a means of minimising risk.
  • Help to define actions and responsibilities in cross-functional teams.
  • When combined with Control Plans or Control Strategies they can help optimise process control and reliability and most importantly product quality.
  • Substantially reduce costs by identifying design and process improvements early in the development process when relatively easy and inexpensive changes can be made.
  • Provide new ideas for improvements in similar designs or processes.
  • If used early enough in process design or during technology transfer, they can save great amounts of time and money by identifying issues before they become complex or expensive to rectify.