
Article: Setting Acceptance Criteria

In this final article in our series exploring statistical methodology in Technology Transfer studies, we investigate how to set acceptance criteria.

Written by Dr. Paul Nelson, Technical Director.

An acceptable bias in the data generated by a Technology Transfer study needs to be decided upon before the study begins. That is, a decision needs to be made as to how different the means (for accuracy) or standard deviations (for precision) can be before the data are no longer considered practically equivalent.
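To illustrate what "practically equivalent" can mean in statistical terms, the sketch below applies a two one-sided tests (TOST) comparison of mean assay results from the sending and receiving laboratories. The data, the ±2% acceptance limit and the 5% one-sided significance level are illustrative assumptions only, not values recommended here.

```python
# Minimal sketch of a two one-sided tests (TOST) assessment of the mean
# difference between a sending and a receiving laboratory.
# The assay results and the +/-2% acceptance limit are assumed for illustration.

import numpy as np
from scipy import stats

sending = np.array([99.1, 100.2, 99.8, 100.5, 99.6, 100.1])    # % label claim
receiving = np.array([98.7, 99.9, 99.2, 100.0, 99.4, 99.0])    # % label claim

theta = 2.0    # acceptable bias (equivalence margin), assumed here
alpha = 0.05   # one-sided significance level

n1, n2 = len(sending), len(receiving)
diff = receiving.mean() - sending.mean()

# Pooled-variance standard error of the difference in means
sp2 = ((n1 - 1) * sending.var(ddof=1) + (n2 - 1) * receiving.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

# Two one-sided t-tests against the lower and upper equivalence limits
t_lower = (diff + theta) / se            # H0: diff <= -theta
t_upper = (diff - theta) / se            # H0: diff >= +theta
p_lower = 1 - stats.t.cdf(t_lower, df)
p_upper = stats.t.cdf(t_upper, df)

# Equivalent presentation: the 90% confidence interval for the difference
ci = diff + np.array([-1, 1]) * stats.t.ppf(1 - alpha, df) * se

print(f"Mean difference (receiving - sending): {diff:.2f}")
print(f"90% CI for difference: [{ci[0]:.2f}, {ci[1]:.2f}]")
print("Equivalent within +/-2%:", max(p_lower, p_upper) < alpha)
```

The same conclusion can be read directly from the 90% confidence interval: the laboratories are declared practically equivalent only if the whole interval lies inside the acceptance limits.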

This decision should take into account the intended use of the method, the homogeneity of the test sample, the precision of the method, as well as the anticipated difference (bias) between the sending and the receiving laboratories.

Criteria should be set on a case-by-case basis and justified using supporting data from method validation, pre-transfer runs in the receiving laboratory, or pilot studies.

For solid dosage forms, acceptance criteria for Assay, Content uniformity, Impurities and degradation products, Residual solvents, Dissolution and Identification are given in the ISPE Good Practice Guide for Technology Transfer. The Guide also gives recommendations on the design of Technology Transfer studies for the delivered dose and particle size distribution of orally inhaled and nasal drug products, but no acceptance criteria. Recommendations on sample size and acceptance criteria for in-vitro bioequivalence of inhaled products can be found in the FDA Guidance for Industry on Bioavailability and Bioequivalence Studies for Nasal Aerosols and Nasal Sprays for Local Action (FDA, 1999c, 2003c). Although that guidance is primarily aimed at testing for bioequivalence between different products, its recommendations are equally applicable to equivalence testing for technology transfer.

Regardless of how the acceptance criteria are set, all data must support the previously set specifications, both release and stability if these differ. In addition, the sample chosen must be both random and representative (a stratified random sample) of the future population of batches. Otherwise, a failure to meet the acceptance criteria could wrongly be attributed to method performance when it actually results from sample variability.
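By way of illustration, the short sketch below shows one way a stratified random sample of batches might be drawn, here stratifying on a hypothetical product strength. The batch identifiers, strata and per-stratum sample size are invented for the example.

```python
# Minimal sketch of drawing a stratified random sample of batches for a
# transfer study. The batch list, strata and sample sizes are hypothetical.

import random

batches = {
    "low_strength":  ["A101", "A102", "A103", "A104", "A105"],
    "high_strength": ["B201", "B202", "B203", "B204", "B205"],
}

random.seed(1)     # fixed seed so the example is reproducible
per_stratum = 2    # number of batches drawn from each stratum

sample = {strength: random.sample(available, per_stratum)
          for strength, available in batches.items()}

print(sample)
```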

Failure to meet the acceptance criteria can occur for many reasons. This is not uncommon, and it is often resolved once an assignable cause for the failure is identified. A failure should trigger a review of the equivalence study to determine whether anything inherent to the method has caused the problem. For a properly designed study, the total variability in the data can be decomposed into its components, e.g. within-laboratory and between-laboratory variability, instrument-to-instrument variability, analyst-to-analyst variability and day-to-day variability. These components can be used to identify the source of unexpected variation so that the necessary action can be taken, e.g. re-training analysts or tightening control of the laboratory environment. When a method fails the acceptance criteria, the decision on how to proceed should be based on all relevant available data and sound scientific judgment.
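To make the decomposition concrete, here is a minimal sketch that estimates within-laboratory and between-laboratory variance components from a simple balanced one-way design. The assay values are invented, and a full nested analysis would typically include further levels such as analyst, instrument and day within each laboratory.

```python
# Minimal sketch of decomposing total variability into between-laboratory and
# within-laboratory components using a one-way random-effects ANOVA.
# The replicate assay results (% label claim) below are assumed for illustration.

import numpy as np

labs = {
    "sending":   np.array([99.8, 100.1, 99.6, 100.3, 99.9]),
    "receiving": np.array([99.0, 99.4, 98.8, 99.6, 99.2]),
}

k = len(labs)                          # number of laboratories
n = len(next(iter(labs.values())))     # replicates per laboratory (balanced design)
grand_mean = np.mean([x.mean() for x in labs.values()])

# Mean squares from the one-way ANOVA table
ss_between = n * sum((x.mean() - grand_mean) ** 2 for x in labs.values())
ss_within = sum(((x - x.mean()) ** 2).sum() for x in labs.values())
ms_between = ss_between / (k - 1)
ms_within = ss_within / (k * (n - 1))

# Variance component estimates (ANOVA / method-of-moments estimators)
var_within = ms_within
var_between = max((ms_between - ms_within) / n, 0.0)

print(f"Within-laboratory variance:  {var_within:.3f}")
print(f"Between-laboratory variance: {var_between:.3f}")
```

If the between-laboratory component dominates, attention would naturally turn to factors that differ between sites, such as instruments, analysts or the laboratory environment.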

We hope you've enjoyed this series of articles (click on the following links for Part One, Part Two and Part Three). If you have a tricky statistical problem that needs solving - whether it's linked to technology transfer studies or something entirely different - please visit our consultancy page for details on how Prism can assist you. Equally, you can use our free, online Nested statistical tool with your own data; do read our How-To: Nested Analysis guide to get started.

Related tags: bioequivalence, equivalence test, method validation, technology transfer