Over the years, we have received several requests to start filming our talks, and we were always a little hesitant: it is labor intensive, potentially distracting, and not attractive to some speakers. And although we hope to reach people outside of Berlin with this effort, local attendance might drop. In spite of these initial fears and challenges, after testing some options, we have decided to give it a (provisional) go with an improvised, rather rudimentary setup.
Where to start? We asked previous presenters whether they would prefer live streaming or video recording; streaming came out just on top, but most were also fine with recording. We then asked our target audience, who strongly preferred recorded video. The BEMC Talk videos will be available to everybody, wherever and whenever, potentially multiplying the number of people who benefit from the extra effort. So, video it is.
Of course, we understand that not all presenters will feel comfortable with recording. Sometimes it is because of the work that they are presenting; sometimes it is because the thought of being online until the end of time is a bit daunting. We will respect their wishes if they want the talk to stay completely offline. However, we will try to work with speakers in such cases to record parts of their talks, when possible, and ask them for a green light after we have shown them the edited video. With this approach we think that a good chunk of the BEMC talks will end up on the internet. Even though the video and sound quality will not be top notch without professional equipment, we hope this first step will still be well received.
We hope that a lot of people will benefit from our videos. Of course, we will keep an eye on the view counts and watch for useful feedback in the comments section on our YouTube page, but please feel free to get in touch directly via Twitter or our “Contact us” page to let us know what you think.
All the best and thanks for your continued support,
The talk’s title is “A New Approach to the Generalizability of Randomized Trials” presented by Dr. Anders Huitfeldt.
To extrapolate causal effects from one setting (the study population of an RCT) to another (a clinically relevant target population), we need to justify certain assumptions. For example, we may assume that a conditional effect parameter in the target population equals the corresponding parameter in the study population; this is known as “effect homogeneity”. Effect homogeneity can be considered not only between the study population and the target population, but also between two groups within the RCT’s population. The logic of extrapolation is as follows: because we have a randomized trial in the study population, we know what happens in that population if everyone takes the drug and what happens if no one does. In addition, because the drug is not available in the target population, we know what happens there when people do not take the drug. We then aim to combine this information with a homogeneity assumption to predict what would happen if people in the target population did take the drug. However, different definitions of “effect homogeneity” lead to different empirical predictions. Traditional approaches include effect measure modification, forest plots, Cochran’s Q, and I². These approaches have shortcomings, for example the lack of a biological interpretation, which motivates a principled method for choosing between effect measures. Dr. Huitfeldt proposed the COST parameter, based on a new class of causal models, for this purpose. As an effect measure, the COST parameter has several advantages: (1) it has a clear biological interpretation, (2) it reflects the idea that the effect of a drug is determined by individual attributes such as genes, and (3) it is independent of baseline risk. Nevertheless, there is still controversy among methodologists about using the COST parameter to determine the appropriate choice of scale when effect homogeneity is defined in terms of effect measures.
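Why the choice of scale matters can be seen in a small numerical sketch. The numbers below are hypothetical, chosen only to show that assuming homogeneity of the risk ratio versus the risk difference yields different predictions for the same target population:

```python
# Hypothetical trial results in the study population
p1_study = 0.2   # risk if everyone takes the drug
p0_study = 0.1   # risk if no one takes the drug

# Observed baseline risk in the target population (drug not available there)
p0_target = 0.3

risk_ratio = p1_study / p0_study          # 2.0
risk_difference = p1_study - p0_study     # 0.1

# The predicted risk under treatment in the target population depends on
# which effect measure is assumed to be homogeneous:
pred_if_rr_homogeneous = p0_target * risk_ratio       # 0.6
pred_if_rd_homogeneous = p0_target + risk_difference  # 0.4
```

The two assumptions predict treated risks of 0.6 and 0.4 respectively, so the scale on which homogeneity is assumed is not an innocuous choice.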
The title of the talk is “Understanding Population-based Migraine Through Genome-wide Genetics” by Daniel Chasman from Brigham and Women’s Hospital.
Neurological disorders are a growing global burden and rank second for the number of years lost to disability. A history of diabetes or hypertension, postmenopausal hormone use, physical activity, alcohol consumption, and smoking status are more frequent in people with migraine, and aging is a very important factor in migraine development. In the WGHS data, three SNPs were investigated for their relationship with migraine: PRDM16 rs2651899 increases the risk and TRPM8 rs10166942 decreases the risk of migraine, while LRP1 rs11172113 was not associated with migraine. After this first genetic analysis in 2009 with three SNPs, the number of SNPs included in analyses increased gradually with each study, reaching 44 genome-wide significant loci in a large population study, IHGC 2016, with 59,042 participants. A genetic risk score (GRS) was calculated to investigate the shared genetic contribution to ischemic stroke and migraine; in observational studies, migraine with aura is a risk factor for ischemic stroke. The causality of the relationships between migraine and coronary artery disease (CAD), MI, angina, and atrial fibrillation has been assessed using Mendelian randomization (MR), with support found for CAD, MI, and angina. Some loci with likely vascular function show concordant susceptibility between migraine and arterial dissection, but inverse susceptibility with stroke/CAD. The high degree of heterogeneity in migraine genetics makes investigation of the underlying biology of this form of the disease more complex. In conclusion, there is a long road ahead for science to untangle the relationships among migraine, SNPs, and other diseases.
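As a rough sketch of the idea behind such a score: a GRS is typically a weighted sum of an individual's risk-allele counts, with weights taken from GWAS effect estimates. The SNP names and weights below are made up purely for illustration:

```python
# Illustrative only: made-up SNP names and per-allele log-odds weights
gwas_weights = {"rsA": 0.12, "rsB": -0.05, "rsC": 0.08}

# One individual's risk-allele counts (0, 1, or 2 copies per SNP)
genotype = {"rsA": 2, "rsB": 1, "rsC": 0}

# The GRS is the weight-by-count sum across SNPs
grs = sum(gwas_weights[snp] * genotype[snp] for snp in gwas_weights)
```

Individuals can then be ranked or stratified by their score to ask whether migraine-associated variants also predict, say, ischemic stroke.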
Description: “Decision analysis (or decision-analytic modeling) is a systematic approach to decision making under uncertainty that involves combining evidence for different estimands and outcomes from different study types and different sources in order to derive causal conclusions for future clinical, policy or research actions.
This talk provides an overview on the concepts and methods of causal decision-analytic modeling as a tool for (1) benefit-harm analysis informing clinical guidelines and personalized medical decision making, (2) cost-effectiveness analysis informing reimbursement decision making and (3) value-of-information analysis for future research prioritization.
Case examples of published decision analyses will be used to illustrate the fields of application, different model types, the importance of using causal model input parameters (particularly from real-world data), the choice of different estimands (e.g., from the ICH E9 framework) and their causal interpretations, approaches to integrate intercurrent events, as well as guidelines for best modeling practices, possible pitfalls in causal modeling, and future developments.”
Location: Hertwig–Hörsaal (Zweigbibliothek Campus Charité Mitte – Medizinische Bibliothek der Charité), CCM 10117 Berlin
You are invited to our next BEMC Talk on Wednesday, November 6th.
BEMC Talk: Wednesday, November 6th, 2019 @ 4pm
“The causes of the causes in context: confronting the burden of proof in lifecourse and social epidemiology” – Michelle Kelly-Irving, Inserm-Toulouse, Equity research team, LEASP, Faculté de Médecine, Toulouse, France
Description: “Social determinants are at the root of many potential causal pathways towards health outcomes. Increasingly, the randomized controlled trial approach to establishing causality is questioned with the development of other causal approaches. These methods can be especially challenging for research questions involving social determinants and health inequalities within a lifecourse framework. In complex observational settings, understanding and defining the context is a key issue affecting the generalizability of findings and transferability of interventions. Theory-driven research may be especially important when dealing with these methodological challenges, and may enable lifecourse researchers to interpret their findings. I will present these challenges in terms of research on social-to-biological questions relating to health inequalities, and discuss how interdisciplinarity and triangulation may help to establish the burden of proof.”
Location: Hertwig–Hörsaal (Oscar Hertwig-Haus, Anatomie), CCM 10117 Berlin
Upcoming Berlin Epi Events:
November 20th – BEMC JClub – Paper will be posted online
December 4th – BEMC Talk – Uwe Siebert, Hall in Tirol
December 18th – BEMC JClub – Paper will be posted online
Interested in other Institute of Public Health events? Visit our calendar to check out upcoming conferences & short courses!
Follow BEMC on Twitter and leave questions for our speakers: @BEMColloquium
There is no BEMC talk in the month of October, but there is still a lot going on in the epi community!
JClub on Oct 16th: Please click on this link to see the chosen journal article. Note: for the first time ever, we will be reading a pre-print and submitting feedback as a group to the authors! Should be a cool experience to influence ongoing research, so don’t miss out!
IPH lecture on Oct 23rd: Professor John Gill is going to give a talk on “Understanding and communicating risk of rare but serious health complications – an example from living kidney donation” – click on this link to find out more and register for the event.
Our next regularly scheduled BEMC talk will be in November.
“An introduction to precisely and ggdag: Tools for modern methods in R” – a summary by Ana Sofia Oliveira Gonçalves
On the 4th of September 2019, Malcolm Barrett gave a lecture on the topic of “An introduction to precisely and ggdag: Tools for modern methods in R”. Malcolm Barrett is a PhD student in Epidemiology at the University of Southern California. He has experience in epidemiology and has worked with RStudio.
During his lecture, he introduced two R packages that he has developed: “precisely” and “ggdag”. He then wrapped up his talk by sharing best practices in creating software for epidemiology analysis.
Malcolm first introduced the package “precisely”, an R package that calculates sample size based on precision rather than power. Researchers specify a desired precision, the expected proportions in the exposed and unexposed groups, the group ratio, and the coverage, and the package returns the required sample size for common epidemiological measures such as risk differences, risk ratios, and odds ratios; it can also calculate the precision achieved for a given sample size. The motivation behind the package came from an article by Rothman and Greenland on planning study size based on precision, and it goes hand in hand with the recent discussion around statistical significance. During the discussion, Malcolm commented that the move away from p-values will still take some time, and he highlighted common misinterpretations of confidence intervals. Because precisely is also available as a web calculator built with the shiny package, people who do not work in R can still use it.
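The underlying calculation follows the Rothman and Greenland approach mentioned above. The sketch below is not the precisely API itself, just a minimal Python rendering of the idea for a risk difference; the function name, the use of the confidence-interval half-width as the precision target, and the defaults are all illustrative:

```python
from math import ceil

def n_exposed_for_risk_difference(p1, p0, halfwidth, group_ratio=1, z=1.96):
    """Number of exposed subjects needed so that the 95% CI for the
    risk difference p1 - p0 has the desired half-width.
    group_ratio is n_unexposed / n_exposed."""
    # Variance contribution of each group to the risk-difference estimate
    variance = p1 * (1 - p1) + p0 * (1 - p0) / group_ratio
    # Solve z * sqrt(variance / n) = halfwidth for n, rounding up
    return ceil((z / halfwidth) ** 2 * variance)

# e.g. risks of 0.4 vs 0.3, equal groups, CI half-width of 0.08:
n_exposed_for_risk_difference(0.4, 0.3, 0.08)  # 271 exposed (and 271 unexposed)
```

Instead of asking “how many subjects do I need to detect this effect?”, the question becomes “how many subjects do I need before my interval is this narrow?”.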
Malcolm then introduced his package “ggdag”, which is used to create causal diagrams (DAGs) in R. dagitty has powerful, robust algorithms but does not always create beautiful plots, while ggplot2 is currently among the best data visualization tools and offers nearly unlimited flexibility. ggdag therefore aims to integrate dagitty with ggplot2 (and ggraph, an extension of ggplot2). ggdag can also show graphically which variables need to be adjusted (controlled) for.
Later on, he gave some insights on designing software for epidemiology. He argued that such software should be (1) flexible, automating the tedious parts of an analysis while being very loud about the difficult parts, (2) expressive (modular code is better than monolithic functions), and (3) able to fit into the existing ecosystem. He finished his lecture by describing the package he is currently creating, a tool to help clone datasets.