Developing and Validating an In Silico Model for Proarrhythmia Risk Assessment Under the CiPA Initiative
May 2018
Zhihua Li, PhD, for the CiPA In Silico Working Group, US Food and Drug Administration
This presentation reflects the views of the author and should not be construed to represent FDA's views or policies.
Comprehensive in vitro Proarrhythmia Assay (CiPA)
[Diagram of the CiPA paradigm; recoverable labels:]
• IKs and INa peak in specific situations
• Torsade Metric Score
• Check for unanticipated human effects, confirm mixed channel effects
• Can be considered for unanticipated nonclinical effects, or if human ECG data are insufficient
Model Development and Validation Strategy
Model Training: CiPA Training Drugs (12) -> Select a Base Cardiomyocyte Model -> Model Optimization -> Metric Development -> Evaluate the Training Results; Freeze Model for Validation
Model Validation: CiPA Validation Drugs (16) -> Predict Validation Drugs -> Compare Prediction Accuracy to Pre-defined Performance Measures
Improving the ORd Model for CiPA
• Modeling dynamic drug-hERG interactions rather than using simple IC50s, to distinguish drugs with similar hERG block potency but different TdP liabilities (Li Z et al. Circulation: Arrhythmia & Electrophysiology. 2017;10:e004628)
• Optimizing model parameters so that the model better recapitulates experimental data on human ventricular myocytes (Dutta S et al. Frontiers in Physiology. 2017;8:616)
Key Mechanism of TdP: Imbalance of Inward and Outward Currents
Major currents modulating repolarization:
• Inward: ICaL (L-type calcium), INaL (late sodium)
• Outward: IKr, IKs, IK1, Ito (all potassium)
The net current reflects the balance between inward and outward currents:
Inet = ICaL + INaL + IKr + IKs + IK1 + Ito
qNet: the amount of electric charge carried by Inet over one pacing cycle
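As a sketch, qNet is just a numerical time-integral of the simulated net current over one pacing cycle. The toy traces and rectangle-rule integration below are illustrative only and are not taken from any official CiPA code:

```python
import numpy as np

def qnet(i_cal, i_nal, i_kr, i_ks, i_k1, i_to, dt):
    """qNet: net charge carried by Inet over one pacing cycle.

    Inet = ICaL + INaL + IKr + IKs + IK1 + Ito, integrated over time.
    Inward currents are negative by the usual convention, outward
    currents positive, so qNet shrinks as inward block weakens.
    """
    i_net = i_cal + i_nal + i_kr + i_ks + i_k1 + i_to
    return float(np.sum(i_net) * dt)  # rectangle-rule integral

# Toy example: a constant 1 A/F outward current over a 2000 ms cycle.
t = np.arange(0.0, 2000.0, 1.0)   # ms, 2000 samples
zeros = np.zeros_like(t)
q = qnet(zeros, zeros, np.ones_like(t), zeros, zeros, zeros, dt=1.0)
```

In the real workflow the six current traces come from a CiPAORdv1.0 simulation under drug block, not from constants.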
[Figure: an increased ratio of inward to outward currents during the action potential plateau prolongs the QT interval on the ECG, can generate early afterdepolarizations (EADs), and can trigger torsade de pointes]
Performance of qNet on 12 CiPA Training Compounds
• Drug separation is good across all concentrations from 1x to 25x Cmax
• Simulation with a 2000 ms cycle length
[Figure: arrhythmia metric (qNet) vs drug concentration (fold change over clinical exposure), separating Low-, Intermediate-, and High-Risk drugs; EADs generated at high concentrations]
Uncertainty Quantification for TdP Risk Assessment under CiPA
• Developed a statistical method to translate each drug’s experimental uncertainty into 2000 metric values, describing the probability distribution of its TdP risk
• Found that uncertainty is lowest when drug concentration is 1-4x Cmax
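The sampling idea can be sketched as follows. Here `toy_metric` is a placeholder for a full CiPAORdv1.0 simulation run, and the log-normal error model and 1 uM IC50 are assumptions for illustration, not the actual CiPA fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_metric(herg_ic50):
    # Placeholder: in the real workflow each sampled parameter set is
    # run through the CiPAORdv1.0 model to produce one qNet value.
    return np.log10(herg_ic50)

# Hypothetical hERG IC50 of 1 uM with assumed log-normal experimental error.
ic50_samples = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=2000)
scores = np.array([toy_metric(s) for s in ic50_samples])

# Summarize the resulting distribution of metric values.
median = float(np.median(scores))
ci_lo, ci_hi = np.percentile(scores, [2.5, 97.5])
```

The 2000 sampled metric values per drug are what the later slides summarize as a median with a 95% CI.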
Torsade Metric Score for Manual Training Data
hERG (potassium channel) data: manual patch clamp
Non-hERG (sodium and calcium channel) data: manual patch clamp
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax) separating High-, Intermediate-, and Low-risk training drugs; 95% CI and median of each drug's 2000 scores shown as error bars]
Torsade Metric Score for Hybrid Training Data
hERG (potassium channel) data: manual patch clamp
Non-hERG (sodium and calcium channel) data: automated high-throughput patch clamp systems
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax) separating High-, Intermediate-, and Low-risk training drugs; 95% CI and median of each drug's 2000 scores shown as error bars]
Evaluating and Freezing Model Prior to Validation
• On March 15, 2017, FDA held a Pharmaceutical Science and Clinical Pharmacology Advisory Committee Meeting on the topic of "Model Informed Drug Development," where CiPA was presented as a potential new regulatory paradigm to seek external expert opinions
• A Validation Procedure document was vetted by the CiPA In Silico Working Group and the Ion Channel Working Group, and approved by the Steering Committee prior to validation:
  - The published CiPAORdv1.0 model and the qNet (Torsade Metric Score) metric, as well as the classification thresholds, were "frozen"
  - Two validation datasets were defined, one manual and one hybrid, each with 16 drugs
  - Two types of performance measurements were defined: ranking TdP risk without specific classification thresholds, and classifying drugs into one of three risk categories using specific thresholds
Overall Performance on Validation Drugs
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax) for the 16 validation drugs, separated into High, Intermediate, and Low risk]
Rank Performance: AUC of ROC
• ROC (Receiver Operating Characteristic): a curve of sensitivity vs 1-specificity over all possible cut-offs (thresholds) of the metric
• Area Under the Curve (AUC) of the ROC: the probability of ranking a higher-risk drug above a lower-risk one
• ROC1: Low risk vs High-or-Intermediate
• ROC2: High vs Low-or-Intermediate
Procedure of ROC1 Analysis
• 16 validation drugs, each with 2000 Torsade Metric Scores
• Two categories: High-or-Intermediate Risk; Low Risk
1. Randomly take one score per drug
2. Rank the 16 scores
3. Construct the ROC curve and calculate the AUC
4. Repeat 10,000 times
[Figure: the 16 scores ranked from highest to lowest; red: High-or-Intermediate, blue: Low Risk]
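A minimal sketch of this resampled-AUC procedure, with toy scores in place of the real metric distributions. The Mann-Whitney identity (AUC = probability that a higher-risk score ranks above a lower-risk one) replaces explicit ROC construction, scores are oriented so that larger means higher risk, and 1,000 repeats stand in for the 10,000 used in the actual analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def auc_mw(pos, neg):
    """AUC via the Mann-Whitney identity: probability that a randomly
    chosen higher-risk score ranks above a lower-risk one (ties = 0.5)."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy data: 8 High-or-Intermediate and 8 Low drugs, 2000 sampled scores each.
hi_scores = rng.normal(1.0, 0.5, size=(8, 2000))
lo_scores = rng.normal(0.0, 0.5, size=(8, 2000))

aucs = []
for _ in range(1000):                      # 10,000 in the actual analysis
    pos = hi_scores[np.arange(8), rng.integers(0, 2000, size=8)]
    neg = lo_scores[np.arange(8), rng.integers(0, 2000, size=8)]
    aucs.append(auc_mw(pos, neg))

auc_median = float(np.median(aucs))
ci_low, ci_high = np.percentile(aucs, [2.5, 97.5])
```

The spread of `aucs` across repeats yields the AUC confidence intervals reported on the following slides.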
Rank Performance Using ROC1: Low vs High-or-Intermediate
One of the 10,000 ROC curves: AUC = 0.98
Analyzing all 10,000 ROC curves: AUC (95% CI) = 0.98 (0.93 - 1)
Performance measure: AUC of ROC1
Interpretation: probability of ranking an Intermediate-or-High risk drug above a Low risk drug
Minimally acceptable: >~0.7 | Good: >~0.8 | Excellent: >~0.9
Result: excellent performance
Rank Performance Using ROC2: High vs Low-or-Intermediate
One of the 10,000 ROC curves: AUC = 0.96
Analyzing all 10,000 ROC curves: AUC (95% CI) = 0.94 (0.88 - 0.98)
Performance measure: AUC of ROC2
Interpretation: probability of ranking a High risk drug above an Intermediate-or-Low risk drug
Minimally acceptable: >~0.7 | Good: >~0.8 | Excellent: >~0.9
Result: excellent performance
Rank Performance Using Pairwise Comparison
• This measure does not reduce three risk categories into two, so it is more comprehensive
• 28 CiPA drugs -> 378 possible pairwise comparisons
• Removing within-category drug pairs
• Removing training-only drug pairs
• Resulting in 211 drug pairs for validation
• Use the model to predict the ranking for each drug pair, comparing results to the known ranking of TdP risk
• The "correct prediction fraction" among the 211 pairs indicates ranking performance across all 3 categories
• Repeat 10,000 times through random sampling to estimate the confidence interval of the "correct prediction fraction"
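The pair-filtering bookkeeping can be sketched like this. The drug names, category assignments, and training flags below are placeholder values, so the surviving pair count will not be exactly 211; lower Torsade Metric Score is treated as higher risk, matching the metric's orientation:

```python
from itertools import combinations

# Hypothetical setup: 28 drugs with a risk category (2=High, 1=Intermediate,
# 0=Low) and a flag marking the 12 training drugs. Illustrative values only.
drugs = [("d%02d" % i, i % 3, i < 12) for i in range(28)]

pairs = []
for (na, ca, ta), (nb, cb, tb) in combinations(drugs, 2):
    if ca == cb:        # drop within-category pairs (no known ranking)
        continue
    if ta and tb:       # drop training-only pairs
        continue
    pairs.append(((na, ca), (nb, cb)))

def correct_fraction(pairs, predicted_score):
    """Fraction of pairs where the metric ranks the higher-risk drug
    correctly (lower Torsade Metric Score = higher risk)."""
    ok = 0
    for (na, ca), (nb, cb) in pairs:
        sa, sb = predicted_score[na], predicted_score[nb]
        if (ca > cb) == (sa < sb):
            ok += 1
    return ok / len(pairs)
```

In the actual analysis, `predicted_score` is redrawn from each drug's 2000 sampled scores on every one of the 10,000 repeats.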
Performance measure: Pairwise comparison
Interpretation: probability of correctly ranking a drug relative to CiPA reference drugs across 3 categories
Minimally acceptable: >~0.7 | Good: >~0.8 | Excellent: >~0.9
One of the 10,000 predictions of the 211 drug pairs:
• Green: drug pairs whose ranking was predicted correctly
• Red: drug pairs whose ranking was predicted incorrectly
• Correct prediction fraction in this analysis: 0.97
• After 10,000 repeats: 0.96 (0.92 - 0.99)
Result: excellent performance
Ranking Performance
Performance Measure | Interpretation | Manual Dataset | Hybrid Dataset
AUC of ROC1 | Probability of ranking an Intermediate-or-High risk drug above a Low risk drug | 0.89 (0.84 - 0.95) | 0.98 (0.93 - 1)
AUC of ROC2 | Probability of ranking a High risk drug above an Intermediate-or-Low risk drug | 1 (0.92 - 1) | 0.94 (0.88 - 0.98)
Pairwise Ranking | Probability of correctly ranking a drug relative to CiPA reference drugs through pairwise comparison | 0.95 (0.92 - 0.98) | 0.96 (0.92 - 0.99)
For both the manual and hybrid datasets, the ranking performance of the Torsade Metric Score reached, or came very close to, the excellent level.
Classification Performance: Likelihood Ratio (LR)
• Likelihood Ratio positive (LR+): how much more likely a higher-risk drug is to be classified into the higher-risk category than a lower-risk drug
• Likelihood Ratio negative (LR-): 1/LR- indicates how much less likely a higher-risk drug is to be classified into the lower-risk category than a lower-risk drug
• A likelihood ratio was calculated for each of the two thresholds pre-determined from the training data: Threshold 1 separates Low risk from the rest, while Threshold 2 separates High risk from the rest
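For a single threshold, the likelihood ratios reduce to sensitivity and specificity. The sketch below uses made-up scores, treats a score below the threshold as the higher-risk call (lower qNet = higher risk), and shows one analysis; the CiPA procedure repeats this 10,000 times over resampled score sets:

```python
def likelihood_ratios(scores_pos, scores_neg, threshold):
    """LR+ and 1/LR- for one threshold.

    scores_pos: metric values of truly higher-risk drugs
    scores_neg: metric values of truly lower-risk drugs
    A score below the threshold counts as a higher-risk prediction.
    """
    tp = sum(s < threshold for s in scores_pos)
    fn = len(scores_pos) - tp
    fp = sum(s < threshold for s in scores_neg)
    tn = len(scores_neg) - fp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    inv_lr_neg = spec / (1 - sens) if sens < 1 else float("inf")
    return lr_pos, inv_lr_neg

# Worked toy example: sens = spec = 0.75, so LR+ = 1/LR- = 3.0.
lr_pos, inv_lr_neg = likelihood_ratios(
    [0.1, 0.2, 0.3, 0.9], [0.8, 0.9, 1.0, 0.4], threshold=0.5)
```

The infinite branches are why the slides report upper CI bounds "close to infinity" when some resamples classify perfectly.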
Classification Performance: Likelihood Ratio of Threshold 1
• For the LR analysis of Threshold 1, High and Intermediate risk are combined into one category (red)
• 10,000 LR analyses are repeated by sampling the Torsade Metric Score distributions
The hybrid validation dataset result is shown here as an example.
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax) with Threshold 1 separating High-or-Intermediate from Low Risk; 95% CI and median of each drug's 2000 scores shown as error bars]
Classification Performance: Likelihood Ratio of Threshold 1
Performance Measure | Interpretation | Minimally acceptable | Good | Excellent
LR+ of Threshold 1 | How much more likely a High-or-Intermediate drug will be predicted as High-or-Intermediate, compared to a Low Risk drug | >~2 | >~5 | >~10
1/LR- of Threshold 1 | How much less likely a High-or-Intermediate drug will be predicted as Low Risk, compared to a Low Risk drug | >~2 | >~5 | >~10
LR+ of Threshold 1 for the hybrid dataset: 8e5 (7e5 - 1e6)
1/LR- of Threshold 1 for the hybrid dataset: 5.5 (3.7 - 1e6)
The wide confidence interval (CI) of 1/LR- is due to perfect classification of some samples, which pushes the upper 95% CI value toward infinity.
Classification Performance: Likelihood Ratio of Threshold 2
Performance Measure | Interpretation | Minimally acceptable | Good | Excellent
LR+ of Threshold 2 | How much more likely a High Risk drug will be predicted as High Risk, compared to a Low-or-Intermediate Risk drug | >~2 | >~5 | >~10
1/LR- of Threshold 2 | How much less likely a High Risk drug will be predicted as Low-or-Intermediate, compared to a Low-or-Intermediate Risk drug | >~2 | >~5 | >~10
LR+ of Threshold 2 for the hybrid dataset: 6 (3 - 12)
1/LR- of Threshold 2 for the hybrid dataset: 3.7 (3 - 9e5)
The wide confidence interval (CI) of 1/LR- is due to perfect classification of some samples, which pushes the upper 95% CI value toward infinity.
Mean Classification Error
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax) for the 16 validation drugs, classified as High, Intermediate, or Low risk]
Performance Measure | Interpretation | Minimally acceptable | Good | Excellent
Mean Classification Error | Average error of classifying each of the 16 validation drugs into the High, Intermediate, or Low risk category | <~1 | <~0.5 | <~0.3
Mean error across 16 drugs x 2000 samples: 0.25 (0.23 - 0.27)
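One plausible way to score this, assuming the error for each drug is the ordinal distance between the predicted and true category (so a High drug called Low costs 2); the exact scoring rule is defined in the CiPA validation procedure document, so treat this as a sketch:

```python
import numpy as np

CATEGORY = {"Low": 0, "Intermediate": 1, "High": 2}

def mean_classification_error(true_cats, pred_cats):
    """Mean ordinal distance between predicted and true risk categories.

    0 = every drug placed correctly; 2 = a High drug predicted as Low
    (or vice versa) for every drug. Assumed scoring, for illustration.
    """
    t = np.array([CATEGORY[c] for c in true_cats])
    p = np.array([CATEGORY[c] for c in pred_cats])
    return float(np.abs(t - p).mean())
```

Under this scoring, one misplaced-by-one drug out of four gives an error of 0.25, which is the scale the performance thresholds above are written in.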
Classification Performance
Performance Measure | Interpretation | Manual Dataset | Hybrid Dataset
LR+ of Threshold 1 | How much more likely a High-or-Intermediate drug will be predicted as High-or-Intermediate, compared to a Low Risk drug | 4.5 (2.3 - 5) | 8e5 (7e5 - 1e6)
1/LR- of Threshold 1 | How much less likely a High-or-Intermediate drug will be predicted as Low Risk, compared to a Low Risk drug | 8.8 (4.4 - 8e5) | 5.5 (3.7 - 1e6)
LR+ of Threshold 2 | How much more likely a High Risk drug will be predicted as High Risk, compared to a Low-or-Intermediate Risk drug | 12 (4.5 - 1e6) | 6 (3 - 12)
1/LR- of Threshold 2 | How much less likely a High Risk drug will be predicted as Low-or-Intermediate, compared to a Low-or-Intermediate Risk drug | 9e5 (3.3 - 1e6) | 3.7 (3 - 9e5)
Mean Classification Error | Average error of classifying each of the 16 validation drugs into the High, Intermediate, or Low risk category | 0.19 (0.17 - 0.21) | 0.25 (0.23 - 0.27)
For the classification measures, the Torsade Metric Score mostly reached good to excellent performance on both the manual and hybrid datasets.
Comparing to Alternative Metrics: Ranking Performance
Performance measure | Dataset | Torsade Metric Score | APD90 | APD50 & diastolic Ca
AUC of ROC1 | Manual | 0.901 (0.883 - 0.924) | 0.842 (0.801 - 0.877) | 0.854 (0.825 - 0.889)
AUC of ROC1 | Hybrid | 0.971 (0.936 - 1) | 0.848 (0.807 - 0.889) | 0.854 (0.807 - 0.906)
AUC of ROC2 | Manual | 0.988 (0.95 - 1) | 0.975 (0.962 - 0.988) | 0.988 (0.944 - 1)
AUC of ROC2 | Hybrid | 0.919 (0.869 - 0.962) | 0.975 (0.956 - 0.981) | 0.969 (0.925 - 0.981)
Pairwise Comparison Correct Rate | Manual | 0.929 (0.905 - 0.943) | 0.886 (0.858 - 0.91) | 0.891 (0.829 - 0.924)
Pairwise Comparison Correct Rate | Hybrid | 0.943 (0.905 - 0.976) | 0.891 (0.863 - 0.919) | 0.896 (0.858 - 0.929)
• All 28 drugs and leave-one-out cross validation were used.
• The CiPA metric (qNet/Torsade Metric Score) has "excellent" (>0.9) performance across all datasets and measures.
Comparing to Alternative Metrics: Classification Performance
Performance measure | Dataset | Torsade Metric Score | APD90 | APD50 & diastolic Ca
LR+ of Threshold 1 | Manual | 8.05 (4.03 - 9) | 2.53 (1.89 - 2.84) | 4.03 (2.68 - 4.26)
LR+ of Threshold 1 | Hybrid | 8.05 (4.03 - 9.47e5) | 2.68 (2.01 - 4.03) | 3.55 (2.37 - 4.26)
1/LR- of Threshold 1 | Manual | 14.7 (5.6 - 8e5) | 5.3 (3.16 - 12.6) | 7.4 (4.9 - 14.7)
1/LR- of Threshold 1 | Hybrid | 14.7 (6.3 - 8e5) | 6.3 (3.5 - 12.6) | 4.9 (3.16 - 12.6)
LR+ of Threshold 2 | Manual | 7.5e5 (8.75 - 1e6) | 15 (12.5 - 17.5) | 17.5 (15 - 8.75e5)
LR+ of Threshold 2 | Hybrid | 15 (6.25 - 17.5) | 15 (12.5 - 17.5) | 15 (7.5 - 17.5)
1/LR- of Threshold 2 | Manual | 4 (3.8 - 1e6) | 3.8 (2.53 - 7.58) | 4 (3.8 - 9e5)
1/LR- of Threshold 2 | Hybrid | 3.8 (2.53 - 7.58) | 3.8 (2.53 - 7.58) | 3.8 (2.53 - 7.58)
Mean Classification Error | Manual | 0.158 (0.141 - 0.174) | 0.305 (0.285 - 0.325) | 0.224 (0.206 - 0.243)
Mean Classification Error | Hybrid | 0.203 (0.185 - 0.221) | 0.285 (0.265 - 0.305) | 0.291 (0.271 - 0.311)
Summary
• The CiPA model adopts the most stringent validation strategy for evaluating TdP risk prediction accuracy
• Over two validation datasets, the CiPA model/metric generally reaches pre-defined “excellent” ranking performance (5 times excellent and 1 time good), and generally “good” to “excellent” classification performance (5 times excellent, 3 good, and 2 minimally acceptable).
• The model’s ability to achieve high performance levels across both datasets despite the data differences shows its flexibility and robustness in handling heterogeneous sources of data
• The current CiPA model/metric outperforms all alternative metrics tested
• Taken together, this work supports the regulatory use of the CiPA model
Updated CiPA Drugs Torsade Metric Scores and Thresholds
• All 28 drugs were used to update the classification thresholds
• Some drugs' in vitro data were updated with higher-quality data to better reflect the underlying pharmacology
[Figure: updated Torsade Metric Scores (qNet averaged over 1-4x Cmax) for all 28 drugs, with High, Intermediate, and Low risk thresholds; 95% CI of each drug's 2000 scores shown as error bars]
Acknowledgements
FDA Contributors: Norman Stockbridge, Christine Garnett, David Strauss, Zhihua Li, Wendy Wu, Sara Dutta, Phu Tran, Jiangsong Sheng, Kelly Chang, Kylie Beattie, Xiaomei Han, Bradley Ridder, Min Wu, Aaron Randolph, Richard Gray, Jose Vicente, Lars Johannesen
CiPA Steering Committee: Ayako Takei, Bernard Fermini, Colette Strnadova, David Strauss, Derek Leishman, Gary Gintant, Jean-Pierre Valentin, Jennifer Pierson, Kaori Shinagawa, Krishna Prasad, Kyle Kolaja, Natalia Trayanova, Norman Stockbridge, Philip Sager, Tom Colatsky, Yuko Sekino, Zhihua Li, Gary Mirams
All CiPA working groups: Ion Channel, In Silico, Cardiomyocyte, Phase 1 ECG
ALL contributors to CiPA (there are a lot!):
• Public-private partnerships: HESI, SPS, CSRC
• Regulatory agencies: FDA, EMA, PMDA/NIHS, Health Canada
• Many pharmaceutical, CRO, and laboratory device companies
• Academic collaborators
Back-Up
Improving the ORd Model for CiPA
• Making the IKr/hERG component temperature dependent
• Modeling dynamic drug-hERG interactions rather than using simple IC50s
• Optimizing model parameters based on experimentally recorded drug effects on human ventricular myocytes
Modeling Dynamic drug-hERG Interactions Rather Than IC50s
• Drug block of ion channels is dynamic and can exhibit a marked dependence on voltage, time, pulse frequency and channel state (open, closed, inactivated)
• Measurements of IC50s are therefore only “snapshots” of the drug-channel interaction
• Drugs with similar IC50 values but different kinetics for channel blocking (and unblocking) may carry different levels of risk for producing TdP (dofetilide vs. cisapride)
• A specially designed voltage protocol and model are needed to capture the dynamic drug-channel interaction
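For contrast, the static "snapshot" view is just the Hill equation; a minimal sketch, where the Hill coefficient `h` is an assumed parameter:

```python
def hill_block(conc, ic50, h=1.0):
    """Equilibrium fractional block from a static Hill/IC50 model.

    This snapshot ignores the voltage-, time-, frequency- and
    state-dependence of binding that the dynamic hERG model captures.
    """
    return conc**h / (conc**h + ic50**h)

# At conc == IC50 the channel is, by definition, half blocked.
half = hill_block(1.0, 1.0)  # -> 0.5
```

Two drugs with identical `ic50` give identical curves here, which is exactly why this view cannot separate dofetilide from cisapride.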
Dynamic hERG Model to Capture Drug Binding Kinetics
• Model can capture dynamic drug-hERG interactions, especially drug being trapped within closed channel (red arrows)
• Different drugs have different propensity to be trapped in the closed-bound state, leading to different TdP liability
Li Z et al. Improving the In Silico Assessment of Proarrhythmia Risk by Combining hERG-Drug Binding Kinetics and Multi-channel Pharmacology. In preparation
O = Open, C = Closed, I = Inactivated
Dynamic hERG Protocol Recommended by the Ion Channel Working Group
[Figure: voltage protocol stepping from -80 mV (closed) to 0 mV (open) and back to -80 mV (closed); the hERG current (I) is recorded and fractional block is plotted over time (s)]
Fractional Block = 1 - I_drug / I_control
Modified from Milnes et al. J. Pharmacol. Toxicol. Methods. 2010
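The fractional-block readout itself is simple arithmetic on the control and drug sweeps. A sketch; real analyses typically average over a window of each sweep rather than dividing raw pointwise samples:

```python
import numpy as np

def fractional_block(i_drug, i_control):
    """Fractional block under a Milnes-style protocol:
    1 - I_drug / I_control, evaluated pointwise across the sweep."""
    i_drug = np.asarray(i_drug, dtype=float)
    i_control = np.asarray(i_control, dtype=float)
    return 1.0 - i_drug / i_control

# 50% and 25% of control current remaining -> block of 0.5 and 0.75.
fb = fractional_block([0.5, 0.25], [1.0, 1.0])
```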
Block Development for Drugs with Different Binding Kinetics
• Dofetilide (highly trapped): no block recovery during channel closing
• Cisapride (less trapped): block recovery during channel closing
[Figure: fractional block traces during repeated voltage steps between -80 mV (closed) and 0 mV (open) for the two drugs]
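The trapping behavior can be caricatured with a two-state (bound/unbound) toy model. The rates and step durations below are made up for illustration, and the real CiPA hERG model has several more states (open, closed, inactivated, and bound variants):

```python
import numpy as np

def simulate_block(kon_d, koff_open, koff_closed,
                   t_open=10.0, t_closed=10.0, n_pulses=5, dt=0.01):
    """Toy bound/unbound sketch of block development over repeated
    open/closed voltage steps.

    koff_closed ~ 0 mimics a trapped drug (no recovery while the
    channel is closed); a larger koff_closed mimics recovery during
    closing. Binding is allowed only while the channel is open.
    """
    b = 0.0                       # bound (blocked) fraction
    trace = []
    for _ in range(n_pulses):
        for koff, duration, can_bind in ((koff_open, t_open, True),
                                         (koff_closed, t_closed, False)):
            for _ in range(int(duration / dt)):
                on = kon_d * (1.0 - b) if can_bind else 0.0
                b += dt * (on - koff * b)
                trace.append(b)
    return np.array(trace)

# Illustrative rates only (not fitted to dofetilide/cisapride data).
trapped = simulate_block(kon_d=0.2, koff_open=0.05, koff_closed=0.0)
untrapped = simulate_block(kon_d=0.2, koff_open=0.05, koff_closed=0.5)
```

With these rates the "trapped" trace accumulates block across pulses while the "untrapped" trace relaxes back toward zero during each closed interval, reproducing the qualitative dofetilide/cisapride contrast on this slide.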
Optimizing ORd Model Parameters Based on Human Cardiomyocyte Data
• Goal: adjust model parameters to more faithfully recapitulate experimental data
• Human cardiomyocyte action potential duration (APD) was recorded under an L-type calcium current (ICaL) blocker (1 µM nisoldipine)
• The optimized model reproduced the experimental data better than the original model
[Figure: action potential duration (ms) vs cycle length (ms) for the experimental data, the model before optimization, and the model after optimization]
Dutta S et al. Optimization of an In Silico Cardiac Cell Model for Proarrhythmia Risk. Frontiers in Physiology. 2017. Experimental data were taken from O'Hara et al. PLoS Computational Biology. 2011.
qNet vs APD: A Case Study
Q: Which cell is in a more dangerous state (closer to EAD generation)?
• APD says: the cell with ranolazine (black)
• qNet says: the cell with cisapride (grey)
The same pro-EAD "push" was then applied to both cells: a 91.6% IKr conductance reduction (perturbation).
• qNet, but not APD, correctly predicts the distance from EAD
• qNet, but not APD, independently supports the rank order of the two drugs in the CiPA categories
[Figure: action potential traces over time (ms) for the two cells before and after the perturbation]
Comparing to Other Metrics
• qNet is the only metric with zero training error across all concentrations
• Metrics based on action potential duration (APD), the cellular basis of the QT interval, failed to correctly classify all training drugs
[Figure: training error vs drug concentration (fold change over Cmax) for qNet, other new metrics, APD-based metrics, and other metrics]
Incorporating Experimental Uncertainty
• Experimental data have intrinsic (i.e. inherent randomness) and extrinsic (i.e. cell-to-cell variability) uncertainty
• This will lead to uncertainty in metric calculation and TdP risk assessment
• A method is needed to incorporate experimental uncertainty and calculate a probabilistic distribution of the metric
Find the Optimal Concentrations for CiPA Prediction
• Prediction error based on leave-one-out cross validation
• At each concentration, there are 12 errors, corresponding to 12 training drugs
• Black line: mean error across 12 training drugs
• Lowest prediction error achieved for 1-4x Cmax
Conclusion: For CiPA manual training dataset, concentrations 1-4x Cmax should be used for qNet calculation and TdP risk prediction.
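The leave-one-out loop can be sketched as follows; the metric values are entirely made up, and a simple midpoint-threshold classifier stands in for the real CiPA prediction pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 12 training drugs x 6 concentrations of metric values, plus
# binary risk labels (illustrative numbers, not the CiPA training data).
metric = rng.normal(0.0, 1.0, size=(12, 6)) + np.linspace(0, 2, 12)[:, None]
labels = np.array([0] * 6 + [1] * 6)   # 0 = lower risk, 1 = higher risk

def loo_errors(metric, labels, conc):
    """Leave-one-out error at one concentration: hold out each drug,
    fit a midpoint threshold on the rest, then test the held-out drug."""
    errors = []
    for i in range(len(labels)):
        keep = np.arange(len(labels)) != i
        x, y = metric[keep, conc], labels[keep]
        thr = (x[y == 0].mean() + x[y == 1].mean()) / 2.0
        pred = int(metric[i, conc] > thr)
        errors.append(int(pred != labels[i]))
    return errors

# Mean LOO error per concentration; the black line on the slide is the
# analogue of this curve across the tested Cmax multiples.
mean_err = [float(np.mean(loo_errors(metric, labels, c))) for c in range(6)]
```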
Relationship Between In Vitro Data and In Silico Model Prediction
• The two datasets (Manual and Hybrid) were generated using very different experimental protocols and quality control procedures, leading to some in vitro data differences between them
• Internal investigation suggests dataset specific bias might be the cause of some mismatches between model prediction and actual CiPA risk
• Standardizing protocols and quality control criteria (ongoing effort) can potentially further increase model prediction accuracy.
• The model’s ability to achieve high performance levels across both datasets despite the data differences shows its flexibility and robustness in handling heterogeneous sources of data
A Possible Explanation of the Outliers
INaL IC50 (µM), Manual vs Hybrid Dataset:
• Disopyramide: 366 vs 20
• Metoprolol: 627 vs 24
• IC50 underestimated? Risk underestimated in Hybrid. IC50 overestimated? Risk overestimated in Manual.
• Systematic difference: 25 out of 28 drugs have a higher INaL IC50 in the Manual than in the Hybrid dataset.
ICaL IC50 (nM), Manual vs Hybrid Dataset:
• Domperidone: 74 vs 1687
• IC50 overestimated? Risk overestimated in Hybrid.
• Systematic difference: 21 out of 28 drugs have a higher ICaL IC50 in the Hybrid than in the Manual dataset.
• Outliers are dataset specific, and associated with systematic in vitro data bias due to different experimental conditions and quality criteria
• Establishing standard experimental procedures and quality criteria may further increase prediction accuracy
Classification Performance: Likelihood Ratio of Threshold 2
• For the LR analysis of Threshold 2, Low and Intermediate risk are combined into one category (blue)
• 10,000 LR analyses are repeated by sampling the Torsade Metric Score distributions
The hybrid validation dataset result is shown here as an example.
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax) with Threshold 2 separating High Risk from Intermediate-or-Low; 95% CI and median of each drug's 2000 scores shown as error bars]
Manual hERG dynamic + Manual non-hERG IC50s
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax), with drugs grouped as High, Intermediate, and Low risk]
HTS hERG IC50 + HTS non-hERG IC50s
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax), with drugs grouped as High, Intermediate, and Low risk]
Manual hERG IC50 + Manual non-hERG IC50s
[Figure: Torsade Metric Score (qNet averaged over 1-4x Cmax), with drugs grouped as High, Intermediate, and Low risk]