Predicting Length of ICU Stay in People with Acute Traumatic Spinal Cord Injury

2024

Data Source: 

Medical Information Mart for Intensive Care (MIMIC)

Organizer: 

Dr. Christopher Grant and Dr. Chel Hee Lee


Background 

Receiving care in an Intensive Care Unit (ICU) is expensive (roughly three times the cost of a regular hospital bed) and fraught with risk (ICU mortality is approximately 9% in Canada). (1) Many prediction models exist for ICU survival and predicted ICU length of stay (LOS). One of the more commonly used tools is the APACHE-IV score (2), but many others exist. Not all authors agree that these algorithms are useful for predicting an individual patient's expected course through an ICU. (3) 

In 2017, a systematic review by Verburg et al. found that the coefficient of determination for ICU length-of-stay predictions was poor across all 31 models they reviewed (R^2 = 0.05-0.28). (3) In an invited editorial accompanying this systematic review, Dr. Kramer notes: 

When examining LOS benchmarks across ICUs, it must be recognized that variations are influenced by a plethora of factors. These include not only measured patient factors (such as diagnosis, severity of illness, etc.) and unmeasured factors (patient survival, response to therapy, and complications) but also a host of institutional factors. (4)
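For teams unfamiliar with the metric cited above, the coefficient of determination (R^2) compares a model's squared prediction errors against the variance of the observed outcomes. The sketch below shows the standard calculation on fabricated LOS values (the numbers are illustrative only, not MIMIC data):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Fabricated illustration: observed vs. predicted ICU LOS in days
observed_los = [2.0, 5.0, 3.0, 10.0, 7.0]
predicted_los = [3.0, 4.0, 4.0, 8.0, 6.0]
print(round(r_squared(observed_los, predicted_los), 3))
```

An R^2 of 1.0 means perfect prediction; the 0.05-0.28 range reported by Verburg et al. means the reviewed models explained at most about a quarter of the variation in observed lengths of stay.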
 

Research Question: 

Can a model be developed that predicts ICU length of stay for individual patients with acute cervical spinal cord injury more accurately than the APACHE-IV score?

Study Challenges

The challenge is this: can your team create a model that better predicts ICU length of stay for individual patients? As a comparator, you can use the APACHE-IV score. Can your team do better than the APACHE score?

Notes

1.    To reduce patient-factor variability, the prediction model should only predict ICU lengths of stay for people who experienced a spinal cord injury at the neck (ICD-9 diagnostic codes 806.0 and 806.00 through 806.09). (5) To reduce institutional variability, your model need only consider data from the MIMIC-III and MIMIC-IV datasets. (6) These data are real, anonymized ICU data for patients admitted to the Beth Israel Deaconess Medical Center in Boston, Massachusetts, from 2001 through 2012. This dataset contains the same information that is available to medical doctors at the bedside (e.g., medications, laboratory results, vital signs, diagnoses, treatments, etc.).
2.    Your team is free to use whatever statistical approach you choose. Your team is also free to define what "better" means. This might mean that your model has an impressive coefficient of determination, or that your model requires only a limited amount of information to yield meaningful predictions. Or "better" might mean something entirely different to your team.
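As a starting point for Note 1, the cohort can be defined by filtering diagnosis records for the listed ICD-9 codes and joining to ICU stays. The sketch below uses fabricated example rows; the table and column names (DIAGNOSES_ICD-style HADM_ID/ICD9_CODE, ICUSTAYS-style ICUSTAY_ID/LOS) follow the MIMIC-III schema, and the assumption that MIMIC stores ICD-9 codes without the decimal point should be verified against your downloaded copy:

```python
import pandas as pd

# Cervical SCI codes 806.0 and 806.00-806.09. MIMIC-III is assumed to
# store ICD-9 codes without the decimal point ("8060", "80600", ...) --
# verify this against your local copy of the database.
CERVICAL_SCI_CODES = {"8060"} | {f"806{suffix:02d}" for suffix in range(10)}

def cervical_sci_icu_stays(diagnoses, icustays):
    """Return ICU stays whose admission carries a cervical SCI diagnosis.

    diagnoses: DataFrame with columns HADM_ID, ICD9_CODE
    icustays:  DataFrame with columns HADM_ID, ICUSTAY_ID, LOS
    (column names follow the MIMIC-III schema -- check your version).
    """
    sci_hadm = diagnoses.loc[
        diagnoses["ICD9_CODE"].isin(CERVICAL_SCI_CODES), "HADM_ID"
    ].unique()
    return icustays[icustays["HADM_ID"].isin(sci_hadm)]

# Tiny fabricated example (not real MIMIC rows):
diag = pd.DataFrame({
    "HADM_ID": [1, 1, 2, 3],
    "ICD9_CODE": ["80605", "4019", "486", "8060"],
})
stays = pd.DataFrame({
    "HADM_ID": [1, 2, 3],
    "ICUSTAY_ID": [101, 102, 103],
    "LOS": [6.2, 1.5, 11.0],
})
cohort = cervical_sci_icu_stays(diag, stays)
print(cohort["ICUSTAY_ID"].tolist())  # admissions 1 and 3 carry a qualifying code
```

The same filter-then-join pattern applies whether you query the raw CSV files or a database build of MIMIC.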


Data Source and Access 

Every participant should access the Medical Information Mart for Intensive Care (MIMIC, https://mimic.mit.edu/) through the standard credentialing process. Please review the instructions and complete the requirements to access the files. Credentialing may take longer than usual if many participants apply at once. 

•    MIMIC-III (https://physionet.org/content/mimiciii/1.4/)
•    MIMIC-IV (https://physionet.org/content/mimiciv/2.2/)

Resources for Getting Started

Some references that might be of interest:

1. https://secure.cihi.ca/free_products/ICU_Report_EN.pdf
2. https://intensivecarenetwork.com/Calculators/Files/Apache4.html
3. Verburg IWM, Atashi A, Eslami S, et al. Which models can I use to predict adult ICU length of stay? A systematic review. Crit Care Med 2017; 45:e222-e231.
4. Kramer AA. Are ICU length of stay predictions worthwhile? Crit Care Med 2017; 45:379-380. DOI: 10.1097/CCM.0000000000002111
5. http://icd9.chrisendres.com/index.php?action=child&recordid=8528
6. https://www.physionet.org/content/mimiciii/1.4/
 


Evaluation & Grading Points

Your case study report and poster must include:

1.    The research question(s) you sought to address with your analysis.
2.    A discussion on the impact of your assumptions and parameters and the limitations of these types of models.
3.    At least one visualization of the data.
4.    A summary of the key takeaways from your analysis.

The case study competition will be evaluated as follows:

1. Creative visualizations of the data (25%)
2. Appropriateness, creativity, and understanding of the strengths and limitations of the model proposed (50%)
3. Quality and clarity of presentation (25%)

Award

We are pleased to announce that the winning team will receive an award of $1,500. In addition to the financial award, there may be potential research opportunities and collaborations for the successful team members.

Organizer Contact Information 

This case study was prepared by Dr. Christopher Grant and Dr. Chel Hee Lee with help and guidance from the other members of the case study committee of the Statistical Society of Canada. Special thanks to Dr. Alistair Johnson and the PhysioNet team for expediting credentialing for access to the MIMIC databases. Any concerns or questions can be directed to chelhee.lee@ucalgary.ca.