The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool has been widely adopted for assessing the quality of diagnostic accuracy studies. However, it is not suitable for assessing risk of bias in comparative diagnostic accuracy studies, and the current common practice in systematic reviews of deriving comparative accuracy from non-comparative diagnostic accuracy studies is inherently biased. The QUADAS group therefore developed QUADAS-Comparative (QUADAS-C), a tool for assessing risk of bias in comparative diagnostic accuracy studies, which was officially launched in October 2021. QUADAS-C retains the four-domain structure of QUADAS-2 (patient selection, index test, reference standard, and flow and timing) and adds 14 signaling questions and 4 risk-of-bias questions, enabling researchers to identify high-quality research evidence and avoid bias in study design and conduct. This article interprets the background, evaluation items, evaluation criteria, and usage procedures of QUADAS-C to provide a reference for domestic users.
The QUADAS-2, QUIPS, and PROBAST tools are not specific to prognostic accuracy studies, and using them to assess the risk of bias in such studies is prone to error. QUAPAS, a risk-of-bias assessment tool dedicated to prognostic accuracy studies, has therefore recently been developed. The tool combines elements of QUADAS-2, QUIPS, and PROBAST, and consists of 5 domains, 18 signaling questions, 5 risk-of-bias questions, and 4 applicability questions. This paper introduces the content and usage of QUAPAS to provide inspiration and references for domestic researchers.
As precision medicine continues to gain momentum, the number of prediction model studies keeps increasing. However, their methodological and reporting quality varies greatly, which limits the promotion and application of these models in clinical practice. Systematic reviews of prediction models draw conclusions by summarizing and evaluating model performance across different settings and populations, thereby promoting their application in practice. Although the number of systematic reviews of prediction model studies has grown in recent years, the methods used remain unstandardized and their quality varies greatly. In this paper, we combine the latest methodological advances at home and abroad and summarize the methods and procedures for producing a systematic review of prediction models, with the aim of providing a reference for domestic scholars.
Using clinical prediction models to guide clinical decision-making, and thus to provide accurate diagnosis and treatment for patients, has become a clinical consensus and trend. However, few models are actually usable in clinical practice, owing to unstandardized research methods and poor quality of evidence. This paper introduces the development process of clinical prediction models from six aspects: data collection, model development, performance evaluation, model validation, model presentation, and model updating. It also covers the reporting statement for clinical prediction model research and risk-of-bias assessment tools, in order to provide methodological references for domestic researchers.
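To make the development–evaluation–validation sequence concrete, the following is a minimal sketch, not the procedure from any specific study: a logistic regression prediction model is fitted on a training split and its discrimination is checked on a held-out validation split via the c-statistic (AUC). All data are simulated and both predictors are hypothetical placeholders.

```python
import math
import random

random.seed(42)

# --- Data collection (simulated): two hypothetical predictors, binary outcome ---
def simulate(n):
    data = []
    for _ in range(n):
        x1 = random.gauss(0, 1)  # e.g. a standardized biomarker (assumed)
        x2 = random.gauss(0, 1)  # e.g. standardized age (assumed)
        logit = -0.5 + 1.2 * x1 + 0.8 * x2
        p = 1 / (1 + math.exp(-logit))
        data.append(((x1, x2), 1 if random.random() < p else 0))
    return data

train, test = simulate(400), simulate(200)

# --- Model development: fit logistic regression by gradient descent ---
def fit(data, lr=0.1, epochs=300):
    w = [0.0, 0.0, 0.0]  # intercept, coefficient for x1, coefficient for x2
    for _ in range(epochs):
        g = [0.0, 0.0, 0.0]
        for (x1, x2), y in data:
            p = 1 / (1 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))
            err = p - y
            g[0] += err
            g[1] += err * x1
            g[2] += err * x2
        for j in range(3):
            w[j] -= lr * g[j] / len(data)
    return w

w = fit(train)

def predict(w, x1, x2):
    """Predicted outcome probability for one individual."""
    return 1 / (1 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))

# --- Performance evaluation / validation: discrimination (c-statistic) ---
def c_statistic(data, w):
    pos = [predict(w, *x) for x, y in data if y == 1]
    neg = [predict(w, *x) for x, y in data if y == 0]
    pairs = concordant = 0.0
    for p in pos:
        for q in neg:
            pairs += 1
            if p > q:
                concordant += 1
            elif p == q:
                concordant += 0.5
    return concordant / pairs

print(f"validation c-statistic: {c_statistic(test, w):.2f}")
```

In a real study, internal validation would use bootstrapping or cross-validation rather than a single split, calibration would be assessed alongside discrimination, and external validation would use an independent cohort; the sketch only shows where each of those steps sits in the workflow.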