Problems of Interpreting Diagnostic Tests for SARS-CoV-2: Analytical Chemistry Concerns

The COVID-19 pandemic made the development of reliable, sensitive, and reproducible testing methods crucial throughout the world. Without proper analytical validation, test results can be misinterpreted, leading to misinformation in the clinical area. To accurately assess a method, determination of the assay's linear range of analytical response is fundamental. From this calibration curve, parameters such as sensitivity, limit of detection, and limit of quantification can be evaluated, and cut-off values established. Statistical treatment of the collected data allows reproducibility and reliability to be assessed. In this context, there is a wide range of analytical concerns that deserve in-depth discussion in the medical, biomedical, and chemical areas. This letter aims to briefly clarify some analytical chemistry concepts, such as sensitivity, cut-off, and limit of detection, and their application to clinical diagnosis.

The COVID-19 pandemic highlighted one of the most relevant issues in analytical chemistry: the difference between a testing technique (TT) and a testing method (TM), a concept commonly misinterpreted in the clinical area. The TM consists of several procedures beyond the TT itself, such as the collection of specimens and their preservation, handling, transport, labelling, and delivery. Patient pre-test preparation procedures are also part of the method (Fischbach & Dunning III 2015). These protocols, also known as preanalytical factors, have been reported to be the main sources of testing errors in laboratories (Lippi et al. 2020). When developing the TM, the most suitable TT must be validated for the detection of the target analyte; otherwise, the analysis is executed within a wide range of analytical errors that lead to false results.
A successful validation comprises several factors: 1) the characteristics of the samples to be analyzed (i.e., blood, serum, plasma, solid residues); 2) the need for a prior sample-preparation step; 3) the target species to be quantified (i.e., genetic material or antibodies); 4) the determination of analytical parameters, such as the limit of detection (LOD) and sensitivity; and ultimately 5) the statistical treatment of the applicability of the TT to a suitable sample population. When a TT is validated for clinical purposes, sensitivity is a key parameter to determine. It is defined as the slope of the calibration curve and is therefore meaningful only once a concentration range of detection has been determined. It indicates how strongly the testing assay responds to changes in the analyte concentration (Massart et al. 1978, Mikkelsen & Cortón 2016). In many binary immunological assays (i.e., those that give a yes/no result), sensitivity is instead reported as a percentage, related to the ability of the test to recognize a positive result (Parikh et al. 2008). However, sensitivity is often confused with the LOD of an assay in several clinical reports related to COVID-19 diagnosis (Whitman et al. 2020). By definition, the LOD is the minimum amount of an analyte that can be detected, generating a signal three standard deviations above the blank signal (Mikkelsen & Cortón 2016). It is possible for a TT to exhibit low sensitivity and a low LOD, and vice versa (Massart et al. 1978). The reproducibility of a TT should be correctly determined along with a proper statistical treatment, especially for COVID-19 diagnosis. Determining reproducibility requires a large population of samples, and a Student's t-test can be helpful for comparing results, provided they follow a normal distribution (De Winter 2013). A lack of reproducibility compromises the robustness of the TM.
One of the main consequences of a lack of robustness and reproducibility in a TT is the occurrence of false-positive and false-negative results. Cut-off values that delimit false-negative and false-positive results should therefore be determined. Generally, the cut-off value can be chosen as the signal corresponding to the LOD or the limit of quantification (LOQ) of the assay (Mikkelsen & Cortón 2016). If the target analyte is present at a concentration below the cut-off value, yet the technique indicates that it is present above this value, a false-positive result occurs. Conversely, a false-negative result occurs when the analyte is present at a concentration above the cut-off value, but the technique indicates that it is absent from the sample (Mikkelsen & Cortón 2016). It should be noted that the cut-off value is not related to the sensitivity of the assay and its TT (Mikkelsen & Cortón 2016).
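The decision rule described above can be sketched as follows, with the cut-off placed at the LOD signal (blank mean plus three blank standard deviations). The blank replicates and specimen signals are hypothetical values chosen so that each of the four outcomes appears once.

```python
from statistics import mean, stdev

# Hypothetical blank replicates used to set the decision threshold.
blanks = [0.04, 0.06, 0.05, 0.07, 0.03]
cutoff_signal = mean(blanks) + 3 * stdev(blanks)  # cut-off at the LOD signal

# (true analyte status, measured signal) for four hypothetical specimens.
specimens = [
    ("present", 0.90),  # clearly above cut-off  -> true positive
    ("absent",  0.12),  # noise above cut-off    -> false positive
    ("present", 0.06),  # signal below cut-off   -> false negative
    ("absent",  0.02),  # clearly below cut-off  -> true negative
]

for truth, signal in specimens:
    called = "positive" if signal > cutoff_signal else "negative"
    correct = (called == "positive") == (truth == "present")
    outcome = ("true " if correct else "false ") + called
    print(f"truth={truth:7s} signal={signal:.2f} call={called:8s} -> {outcome}")
```

The sketch also makes the letter's point concrete: the cut-off is set by the blank signal distribution, not by the slope of the calibration curve, so it is independent of the assay's sensitivity.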
Based on what has been pointed out in this letter, and in contrast to current practice in the clinical literature (Sethuraman et al. 2020), we strongly encourage that any discussion of TTs for COVID-19 diagnosis and the effectiveness of testing assays include how these TTs are being validated and whether the analytical concepts employed are being correctly interpreted (Figure 1).

Figure 1.
Steps involved in the development of a testing method for COVID-19 diagnosis. The TT to be employed is part of the entire method, although the main source of testing errors is not related to its execution. During technique validation, it is fundamental to determine the assay reproducibility, sensitivity, LOD, and standard deviations through a statistical treatment of the results.