The Berlin Epidemiological Methods Colloquium is pleased to announce that, after a year of fruitful and engaging Journal Club meetings adapted to a new digital format, we plan to continue our thought-provoking discussions in 2021 in a slightly different format. Our Journal Club will turn into a “book club” for the first half of 2021: we will be reading and discussing “The Book of Why: The New Science of Cause and Effect” by Judea Pearl and Dana Mackenzie.
In our first meeting on Jan. 20th, 2021, we will discuss the first two chapters of this book. The ensuing JClub schedule can be found here. If you would like to join us for this discussion, please register for the Zoom meeting in advance here.
We suggest you get your own copy of the book if you want to participate in the JClub meetings. We encourage you to support your local book retailer. Alternatively, you can order the book online, e.g. at amazon.de or Thalia.de (audio and digital versions are available, too).
The BEMC talk in June 2020, titled “Disorderly World of Diagnostic and Prognostic Models for Covid-19”, was presented by Laure Wynants. The project started at the beginning of the pandemic, when she read a tweet from somebody asking for advice on building a prediction model for COVID-19 patients. There is a clinical need for such prediction models to improve care and reduce costs: a good model could help allocate scarce resources by helping clinicians answer three questions: (1) who needs to undergo further diagnostic work-up, (2) how can CT interpretation be sped up, and (3) who should be admitted to the ICU?
Their first review, “Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal”, took 18 days from the initial idea to acceptance. The work involved 14 experienced risk-modelling reviewers, with screening and data extraction done independently by two reviewers. Laure Wynants and her team looked not only at published materials but also at pre-prints, which far outnumbered traditional publications. Although pre-prints raise many issues, in a pandemic situation traditional scientific publishing cannot keep up with the speed of the research being done. The scope of the review covered prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for the prognosis of patients with covid-19, and for detecting people in the general population at risk of being admitted to hospital for covid-19 pneumonia. Clinical prediction models support clinical decision-making for individual patients by combining and appropriately weighting several inputs (e.g., CT image characteristics, signs and symptoms, lab test results, demographics …). The team used the PROBAST tool to assess risk of bias and the CHARMS checklist for data extraction; PRISMA and TRIPOD guided the reporting. In collaboration with the Cochrane Prognosis Methods Group and an expanded team, the first screening round was run with an AI tool. This round included 107 studies proposing 145 diagnostic or prognostic models for covid-19. Of the 107 papers, 87 were pre-prints and 20 were peer-reviewed and published. Among the models, 4 detect people at risk in the general population, 91 are diagnostic models for COVID-19 or covid-19 pneumonia, and 50 are prognostic models predicting mortality risk, progression to severe disease, or composite outcomes.
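For readers unfamiliar with PROBAST: it judges each model across four domains (participants, predictors, outcome, and analysis) and derives an overall risk-of-bias rating from the domain-level judgments. A minimal sketch of that aggregation rule in Python (the domain names and rating labels follow PROBAST; the function itself is our own illustration, not the authors' code):

```python
def probast_overall(domains):
    """Aggregate PROBAST domain-level risk-of-bias ratings into an overall rating.

    `domains` maps each of the four PROBAST domains (participants, predictors,
    outcome, analysis) to "low", "high", or "unclear".
    """
    ratings = list(domains.values())
    if any(r == "high" for r in ratings):
        return "high"      # any high-risk domain makes the model high risk overall
    if all(r == "low" for r in ratings):
        return "low"       # low risk overall only when every domain is low risk
    return "unclear"       # otherwise at least one domain remains unclear

# Example: a model whose analysis domain is flawed is rated high risk overall.
example = {"participants": "low", "predictors": "low",
           "outcome": "low", "analysis": "high"}
overall = probast_overall(example)
```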
Mastering R for Epidemiologic Research with Malcolm Barrett
Since it was so well received by many BEMC attendees last year, we wanted to share with you that there are still a few spots left in this summer break intensive short course. The fully digital course is designed for researchers, public health professionals, epidemiologists, and clinicians who want to improve their R coding skills, learn modern tools in the R ecosystem, like the tidyverse and Shiny, and maybe even think about the first steps of developing their own packages in R.
Course language: English
ECTS points: 3
Course Fee: 510€ for students, 750€ for other participants
31.8.-11.9.2020, asynchronous online with optional live “office hours”
If you are interested, please click on this link to find out more or register.
Over the years, we have received several requests to start filming our talks, and we were always a little bit hesitant. It is labor intensive, potentially distracting, and not attractive to some speakers. And although we hope to reach those outside of Berlin with this effort, our local attendance might drop. In spite of these initial fears and challenges, after testing some options, we have decided to give it a (provisional) go with an improvised, rather rudimentary setup.
Where to start? We asked previous presenters whether they would prefer live streaming or video recording, and streaming came out just on top; however, most were also fine with recording. We then asked our target audience, who strongly preferred recorded video. The BEMC Talk videos will be available to everybody, wherever and whenever, potentially multiplying the number of people who benefit from the extra effort. So, video it is.
Of course, we understand that not all presenters will feel comfortable with recording. Sometimes it is because of the work that they are presenting; sometimes it is because the thought of being online until the end of time is a bit daunting. We will respect their wishes if they want the talk to stay completely offline. However, we will try to work with speakers in such cases to record parts of their talks, when possible, and ask them for a green light after we have shown them the edited video. With this approach we think that a good chunk of the BEMC talks will end up on the internet. Even though the video and sound quality will not be top notch without professional equipment, we hope this first step will still be well received.
We hope that a lot of people will benefit from our videos. Of course, we will keep an eye on the view counts and watch for useful feedback in the comments section on our YouTube page, but please feel free to get in touch directly via Twitter or using our “Contact us” page to let us know what you think.
All the best and thanks for your continued support,
The talk’s title is “A New Approach to the Generalizability of Randomized Trials” presented by Dr. Anders Huitfeldt.
To extrapolate causal effects from one setting (the study population of an RCT) to another (a clinically relevant target population), we need to justify certain assumptions. For example, we may assume that a conditional effect parameter in the target population equals the corresponding parameter in the study population; this is known as “effect homogeneity”. Effect homogeneity can be considered not only between the study population and the target population but also between two groups within the RCT’s population. The logic of extrapolation is that, because we have a randomized trial in the study population, we know what happens in that population both if everyone takes the drug and if no one does. In addition, because the drug is not available in the target population, we know what happens there when no one takes the drug. We then combine this information with a homogeneity assumption to predict what would happen if people in the target population took the drug. However, different definitions of “effect homogeneity” lead to different empirical predictions. Traditional approaches include effect measure modification, forest plots, Cochran’s Q, and I². These approaches have shortcomings (e.g., no biological interpretation), which is why a method to guide the choice between effect measures is needed. The COST (counterfactual outcome state transition) parameters are a new class of causal parameters proposed for this purpose. They have several advantages as measures of effect: (1) a clear biological interpretation, (2) the idea that a drug’s effect is determined by attributes of the individual, such as genes, and (3) independence from baseline risk. Finally, it remains controversial among methodologists whether the COST parameters are the right approach to determining the appropriate choice of scale when effect homogeneity is considered in terms of measures of effect.
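The point that different definitions of effect homogeneity yield different empirical predictions can be made concrete with a small numerical sketch. All risks below are invented for illustration; assuming homogeneity of the risk difference, the risk ratio, or the survival ratio (the ratio of the probabilities of *not* having the event) gives three different predicted risks in the target population:

```python
# Invented numbers for illustration only.
p1_study, p0_study = 0.10, 0.20   # risk with / without the drug in the study RCT
p0_target = 0.40                  # baseline (untreated) risk in the target population

# Predicted risk *with* the drug in the target population, assuming homogeneity of:
pred_rd = p0_target + (p1_study - p0_study)                       # the risk difference
pred_rr = p0_target * (p1_study / p0_study)                       # the risk ratio
pred_sr = 1 - (1 - p0_target) * (1 - p1_study) / (1 - p0_study)   # the survival ratio

# Same trial, same target, three different answers:
print(round(pred_rd, 3), round(pred_rr, 3), round(pred_sr, 3))
```

The three scales disagree precisely because the target's baseline risk (0.40) differs from the study's (0.20), which is why the choice of scale matters for extrapolation.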
The title of the talk is “Understanding Population-based Migraine Through Genome-wide Genetics” by Daniel Chasman from Brigham and Women’s Hospital.
Neurological disorders are a growing global burden, ranking 2nd in years lost to disability. A history of diabetes and hypertension, postmenopausal hormone use, physical activity, alcohol consumption, and smoking are more frequent in people with migraine, and aging is a very important factor in migraine development. In the WGHS data, 3 SNPs were investigated for association with migraine: PRDM16 rs2651899 increased the risk and TRPM8 rs10166942 decreased the risk of migraine, while LRP1 rs11172113 was not associated with migraine. After this first genetic analysis in 2009 with 3 SNPs, the number of SNPs included in the analyses grew with each study, reaching 44 genome-wide significant loci in the large IHGC 2016 study with 59,042 participants. A genetic risk score (GRS) was calculated to investigate the shared genetic contribution of ischemic stroke and migraine; in observational studies, migraine with aura is a risk factor for ischemic stroke. The causality of the relationships between migraine and coronary artery disease (CAD), MI, angina, and atrial fibrillation has been assessed using Mendelian Randomization (MR); the relationship was supported for CAD, MI, and angina. Some loci with likely vascular function show concordant susceptibility between migraine and dissection but inverse susceptibility with stroke/CAD. The high degree of heterogeneity in migraine genetics makes investigating the underlying biology of this disease more complex. In conclusion, there is a long road ahead in science to disentangle the relationships among migraine, SNPs, and other diseases.
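A genetic risk score of the kind mentioned above is typically a weighted sum of risk-allele counts across the selected SNPs. A minimal sketch (the per-allele weights below are invented placeholders, not estimates from WGHS or IHGC):

```python
# Invented per-allele weights (e.g., log odds ratios); NOT real effect estimates.
weights = {"rs2651899": 0.09, "rs10166942": 0.12, "rs11172113": 0.06}

def genetic_risk_score(allele_counts, weights):
    """Weighted sum of risk-allele counts (0, 1, or 2 per SNP)."""
    return sum(weights[snp] * allele_counts.get(snp, 0) for snp in weights)

# A hypothetical participant carrying 2, 1, and 0 risk alleles respectively:
person = {"rs2651899": 2, "rs10166942": 1, "rs11172113": 0}
score = genetic_risk_score(person, weights)  # 0.09*2 + 0.12*1 + 0.06*0 = 0.30
```

In practice such scores are built from dozens of genome-wide significant loci and compared between, e.g., migraine and stroke cohorts to probe shared genetic contributions.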
Description: “Decision analysis (or decision-analytic modeling) is a systematic approach to decision making under uncertainty that involves combining evidence for different estimands and outcomes from different study types and different sources in order to derive causal conclusions for future clinical, policy or research actions.
This talk provides an overview of the concepts and methods of causal decision-analytic modeling as a tool for (1) benefit-harm analysis informing clinical guidelines and personalized medical decision making, (2) cost-effectiveness analysis informing reimbursement decision making and (3) value-of-information analysis for future research prioritization.
Case examples of published decision analyses will be used to illustrate the fields of application, different model types, the importance of using causal model input parameters (particularly from real-world data), the choice of different estimands (e.g., from the ICH E9 framework) and their causal interpretations, approaches to integrating intercurrent events, as well as guidelines for best modeling practices, possible pitfalls in causal modeling, and future developments.”
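For readers less familiar with decision-analytic modeling, the core calculation behind a cost-effectiveness analysis can be sketched in a few lines: compute expected cost and expected health outcome (e.g., QALYs) per strategy, then the incremental cost-effectiveness ratio (ICER) of one strategy over another. All numbers here are invented toy values, not from any published analysis:

```python
# Toy expected values per strategy; all numbers are invented for illustration.
strategies = {
    "usual care":    {"cost": 1000.0, "qalys": 5.0},
    "new treatment": {"cost": 4000.0, "qalys": 5.5},
}

def icer(reference, comparator):
    """Incremental cost per additional QALY of `comparator` over `reference`."""
    d_cost = comparator["cost"] - reference["cost"]
    d_qaly = comparator["qalys"] - reference["qalys"]
    return d_cost / d_qaly

ratio = icer(strategies["usual care"], strategies["new treatment"])  # 3000 / 0.5 = 6000
# Compared against a willingness-to-pay threshold to inform reimbursement decisions:
cost_effective = ratio <= 50000  # hypothetical threshold of 50,000 per QALY
```

Real decision-analytic models add the machinery the talk describes on top of this skeleton: causal input parameters, intercurrent events, probabilistic sensitivity analysis, and value-of-information calculations.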
Location: Hertwig–Hörsaal (Zweigbibliothek Campus Charité Mitte – Medizinische Bibliothek der Charité), CCM 10117 Berlin