West China Medical Publishers
Found 6 results for keyword "Clinical prediction model"
  • Interpretation of the TRIPOD statement: a reporting guideline for multivariable prediction model for individual prognosis or diagnosis

    In recent years, the potential value of clinical big data has been gradually recognized, and disease prediction models have become a hot spot in clinical research. Prediction models for different types of diseases play an increasingly important role in individual risk assessment. However, due to the lack of reporting specifications for studies on disease prediction models, the structure and quality of published reports vary widely. In 2015, BMJ published a paper entitled "Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement", which proposed a uniform reporting standard for studies of prediction models for disease diagnosis and prognosis. This article interprets the key contents of the statement to promote understanding and application of the reporting specification.

    Release date: 2020-04-30
  • Validation of multivariate selection method in clinical prediction models: based on MIMIC database

    Objective To examine the influence of different variable selection methods on the performance of clinical prediction models. Methods Three sample sets (an acute myocardial infarction group, a sepsis group, and a cerebral hemorrhage group) were extracted from the MIMIC database. Six existing variable selection methods were applied: direct entry into Cox regression, stepwise forward selection, stepwise backward selection, LASSO, ridge regression, and a random forest-based variable importance algorithm. The optimal variable set obtained by each method was used to construct a model, and the models were compared within and between groups using the C-index, the area under the ROC curve (AUC), and calibration curves. Results The variables selected, and their numbers, differed across the six methods; however, neither the within-group nor the between-group comparisons showed that any method significantly improved model performance. Conclusions Before using a variable selection method to establish a clinical prediction model, researchers should first clarify the research purpose and determine the type of data, and then, combining medical knowledge, select a method that suits the data type and achieves the research purpose.

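    As an illustration of the strategies compared above, the following is a minimal R sketch (hypothetical simulated data and variable names, not the study's own code or the MIMIC data) contrasting LASSO-penalised Cox regression with stepwise backward selection:

```r
# Hypothetical survival data: x1 and x2 are truly prognostic, x3-x5 are noise
library(survival)
library(glmnet)

set.seed(123)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n); x4 <- rnorm(n); x5 <- rnorm(n)
time   <- rexp(n, rate = 0.1 * exp(0.6 * x1 - 0.4 * x2))
status <- rbinom(n, 1, 0.7)
dat <- data.frame(time, status, x1, x2, x3, x4, x5)

# LASSO-penalised Cox regression: variables with non-zero coefficients are "selected"
x <- as.matrix(dat[, paste0("x", 1:5)])
y <- cbind(time = dat$time, status = dat$status)  # survival format expected by glmnet
cv_fit <- cv.glmnet(x, y, family = "cox", alpha = 1)
coef(cv_fit, s = "lambda.min")

# Stepwise backward selection (AIC-based) starting from the full Cox model
full <- coxph(Surv(time, status) ~ x1 + x2 + x3 + x4 + x5, data = dat)
back <- step(full, direction = "backward", trace = 0)

# Apparent discrimination (Harrell's C-index) of the stepwise model
summary(back)$concordance
```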
  • An introduction to the calibration and update methods of clinical prediction models and its implementation by R software

    The accuracy of a clinical prediction model determines its extrapolation and application value. When a prediction model is applied to a new setting, differences between the new population and the population used to develop the model, in terms of study period, population characteristics, region, and other factors, can reduce its predictive performance. Calibrating or updating the prediction model with appropriate statistical methods is therefore important for improving its accuracy in new populations. Model updating methods mainly include regression coefficient updating, meta-model updating and dynamic model updating; however, because of the practical limitations of meta-model updating and dynamic model updating, regression coefficient updating remains the most common approach. This paper introduces the main types of model updating methods, describes the regression coefficient updating methods for the two common types of clinical prediction model based on logistic regression and Cox regression, and provides the corresponding R code for researchers' reference.

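    For readers unfamiliar with regression coefficient updating, the following is a minimal R sketch (hypothetical data; not the R code provided in the paper) of the two simplest updating strategies for a logistic prediction model, intercept updating and logistic recalibration:

```r
# lp_old: linear predictor of the original model evaluated in the new population
# y_new : observed binary outcomes in the new population (both hypothetical here)
set.seed(1)
lp_old <- rnorm(300, mean = -1, sd = 1)
y_new  <- rbinom(300, 1, plogis(0.5 + 0.8 * lp_old))

# Intercept updating ("recalibration in the large"): keep the original
# coefficients fixed via an offset and re-estimate only the intercept
m_int <- glm(y_new ~ 1, family = binomial, offset = lp_old)

# Logistic recalibration: re-estimate the intercept and a single
# calibration slope applied to the whole original linear predictor
m_slope <- glm(y_new ~ lp_old, family = binomial)

coef(m_int)    # updated intercept
coef(m_slope)  # updated intercept and calibration slope
```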
  • Simulation comparison of various prediction model construction strategies under clustering effect

    Objective When multi-center data are used to construct clinical prediction models, the independence assumption is violated and there is an obvious clustering effect among the study subjects. To fully account for this clustering effect, this study compared the performance of a random intercept logistic regression model (RI) and a fixed effects model (FEM), which account for clustering, with a standard logistic regression model (SLR) and a random forest algorithm (RF), which do not, under different scenarios. Methods During model development, the predictive performance of the different models at the center level was simulated under varying degrees of clustering, including differences in discrimination and calibration across scenarios, and the trend of these differences at different event rates was compared. Results At the center level, the models other than RF showed little difference in discrimination across scenarios under the clustering effect, and the means of their C-indexes changed very little. When highly clustered multi-center data were used for prediction, the marginal predictions (M.RI, SLR and RF) had calibration intercepts slightly below 0 compared with the conditional predictions, overestimating the average predicted probability. RF performed well in intercept calibration with multi-center, large-sample data, reflecting the advantage of machine learning algorithms for large samples. When there were few patients per center, the FEM produced conditional predictions with calibration intercepts greater than 0, underestimating the mean predicted probability. In addition, when multi-center large-sample data were used to develop the prediction model, the slopes of the three conditional predictions (FEM, A.RI, C.RI) were well calibrated, whereas the calibration slopes of the marginal predictions (M.RI and SLR) were greater than 1, indicating underfitting; this underfitting became more pronounced as the center clustering effect increased. In particular, when there were few centers and few patients, overfitting of the data could mask the difference in calibration performance between marginal and conditional predictions. Finally, the lower the event rate, the more pronounced the impact of the center-level clustering effect on the predictive performance of the different models. Conclusion When highly clustered multi-center data are used to construct a model and apply it to prediction in a specific setting, RI and FEM can be selected for conditional prediction when the number of centers is small or the differences between centers are large due to differing incidence rates; when the number of centers and the sample size are large, RI can be selected for conditional prediction or RF for marginal prediction.

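    As a rough illustration of the modelling strategies compared above, the following is a minimal R sketch (simulated hypothetical data, not the study's simulation code) fitting a standard logistic model (SLR) and a random intercept logistic model (RI) to clustered multi-centre data with lme4:

```r
library(lme4)

# Simulate 20 centres of 100 patients each with a centre-level random intercept
set.seed(42)
n_centre <- 20
dat <- data.frame(
  centre = rep(seq_len(n_centre), each = 100),
  x      = rnorm(n_centre * 100)
)
u <- rnorm(n_centre, sd = 0.8)                      # centre effects
dat$y <- rbinom(nrow(dat), 1, plogis(-1 + dat$x + u[dat$centre]))

slr <- glm(y ~ x, family = binomial, data = dat)                    # ignores clustering
ri  <- glmer(y ~ x + (1 | centre), family = binomial, data = dat)   # random intercept

# Conditional predictions use the estimated centre intercepts;
# marginal predictions (re.form = NA) set the random effects to zero
p_cond <- predict(ri, type = "response")
p_marg <- predict(ri, re.form = NA, type = "response")
```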
  • Methods and procedures of clinical predictive model

    The use of clinical prediction models to guide clinical decision-making, and thus provide accurate diagnosis and treatment for patients, has become a clinical consensus and trend. However, the models available for clinical use remain limited because of unstandardised research methods and poor quality of evidence. This paper introduces the development process of clinical prediction models from six aspects: data collection, model development, performance evaluation, model validation, model presentation and model updating. It also introduces the reporting statement and risk of bias assessment tools for clinical prediction model research, in order to provide methodological references for domestic researchers.

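    The following is a minimal R sketch (hypothetical data, assuming the rms package) of two of the six aspects listed above, model development and bootstrap internal validation, for a binary outcome:

```r
library(rms)

# Hypothetical development data with two candidate predictors
set.seed(7)
d <- data.frame(age = rnorm(400, 60, 10), marker = rnorm(400))
d$y <- rbinom(400, 1, plogis(-2 + 0.03 * d$age + 0.8 * d$marker))

# Model development: logistic model stored with design matrix and outcome
fit <- lrm(y ~ age + marker, data = d, x = TRUE, y = TRUE)

# Internal validation: bootstrap optimism-corrected Dxy (AUC = Dxy / 2 + 0.5)
# and calibration slope
validate(fit, method = "boot", B = 200)
```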
  • Methodological quality evaluation on clinical prediction models of traditional Chinese medicine: a systematic review

    Objective To systematically review the methodological quality of research on clinical prediction models of traditional Chinese medicine. Methods The PubMed, Embase, Web of Science, CNKI, WanFang Data, VIP and SinoMed databases were electronically searched to collect literature on clinical prediction models of traditional Chinese medicine from inception to March 31, 2023. Two reviewers independently screened the literature, extracted data and assessed the risk of bias of the included studies using the prediction model risk of bias assessment tool (PROBAST). Results A total of 113 studies on clinical prediction models of traditional Chinese medicine (79 diagnostic model studies and 34 prognostic model studies) were included. Among them, 111 (98.2%) studies were rated at high risk of bias, 1 (0.9%) study was rated at low risk of bias, and the risk of bias of 1 (0.9%) study was unclear. The analysis domain had the highest proportion of high risk of bias, followed by the participants domain. Because specific study information was widely unreported, the risk of bias of a large number of studies was unclear in both the predictor and outcome domains. Conclusion Most existing research on clinical prediction models of traditional Chinese medicine shows poor methodological quality and is at high risk of bias. Factors contributing to the risk of bias include non-prospective data sources, outcome definitions that include predictors, inadequate modeling sample sizes, inappropriate feature selection, inaccurate performance evaluation, and incorrect internal validation methods. Future modeling studies urgently need comprehensive methodological improvements in the design, conduct, evaluation and validation of models, as well as reporting of all key model information, to facilitate their translation into medical practice.
