Podcast: Play in new window | Download
Subscribe: Apple Podcasts | Spotify | Android | Pandora | iHeartRadio | TuneIn | RSS
Scott Weingart, ED intensivist and seminal educator from the EMCrit podcast, shares his thoughts on how we should be finding truth nowadays: how to read journals, choose experts, use AI, and resolve disagreement.
Learn more at the Intensive Care Academy!
Takeaway pearls
- Scott subscribes to around 60 journals and reads them monthly: he skims each table of contents, reads the abstract of any title that jumps out at him, then reads the full article if it still looks relevant. He doesn’t recommend doing that. Reading around 10 journals would get you most of the high-yield updates, and you could probably get away with 3–4; it’ll take you about an hour a month.
- Read at least the methods and results; the discussion, abstract, and conclusion are mostly a mini-review paper. Ask: is this “generalizable,” meaning applicable to me, my patients, my practice, my questions? Could we implement this here? Ask also how your personal clinical expertise bears on the findings (sometimes it’s greater than that of the authors). Finally, ask whether the rigor of the paper supports its conclusions; given the complexity of modern studies, you may need help here from statistical or methodological experts, whether in your department or on the internet.
- Signals of confidence, charisma, or “reasonableness” are no longer useful markers of good knowledge in experts, authorities, pundits, authors, podcasters, etc (if they ever were). In fact, truth-tellers are often less compelling communicators, because they tend to hedge and equivocate; that is what truth usually looks like. Clear, confident messages are often fabricated.
- One tool for managing this: seek disconfirmation. When pursuing opinions and expert perspectives, don’t look for those that agree with your prior beliefs; look for those that disagree. That is far more likely to be meaningful and useful data if you’re truly curious about what you’ll find. Even if your mind isn’t changed afterwards, your beliefs will become clearer and deeper. Especially in the current era of algorithms, subcultures, echo chambers, and AI tools that tend to agree with you, you need to actively seek these differing views. You should be able to make the counter-argument to your beliefs better than anyone who actually holds the opposing view; that’s when you really understand your own position.
- When you find a differing view, rather than engaging in knee-jerk opposition, ask why. What is different in their population, approach, or environment that leads this reasonable person to a different conclusion? If your first reaction is “WTF are you talking about?” try to transition into “Hmm… WTF are you talking about?”
- Clinical experience always needs to be thoughtfully integrated with the literature. The subtle lessons of experience are not always studied, and a large study of pooled patients may not address this specific patient’s situation. However, we also tend to overweight the value of personal lessons, especially when they come attached to emotional experiences.
- A psychological pitfall for educators, and especially for modern content creators (podcasters, bloggers, talking heads, YouTubers, etc), is the pull of speaking to create controversy rather than to tell the truth. This temptation has drawn great scientists and clinicians away from real medicine and into hucksterism.
- AI today is reasonably good at acting as a medical librarian, i.e. “tell me (with references) the major studies addressing this point.” For this purpose, it is probably better than limited attempts at scouring the literature with PubMed, although all it is really doing is scouring the internet for commentaries that have referred to the literature, so it may miss smaller or lesser-known studies. It is less good at answering specific clinical questions, although it can be a useful idea generator, such as suggesting possibilities for your differential you hadn’t considered, since it has broad factual recall and strong pattern recognition.
- Fight the natural tendency of AI agents to be agreeable by prompting them towards disconfirmation, i.e. “tell me where I’m wrong.” If you solicit agreement or even use a neutral framing, they will tend to roll with whatever you suggest.
- When thinking or teaching, you will be more right (and more righteous) if you speak in probabilities and allow for the possibility of uncertainty than if you speak in platitudes. Say you could be wrong, and estimate a degree of probability; if you think in certainties, you anchor yourself and can never become more correct with new data.
- 95% of your decisions should be made in advance, either as shared or internal guidelines, based on your current assessment of our knowledge and how you practice. The other 5% will require intensive thought at the time, which is a good thing as long as it’s only 5%; and perhaps you can generalize each of those decisions so that the next similar one runs on autopilot.
- Listen to differing perspectives and opinions from different sources to add qualitatively to your marketplace of ideas, without trying too hard to weigh which is a “better” opinion or source, as that is mostly impossible. With that perspective, you don’t need to worry as much about the validity of a source, only whether it’s worth hearing.
- However, give very little weight to “voting,” i.e. considering how often you hear a perspective. In the modern era of algorithmically served content, media echo chambers, and self-selecting subcultures, as well as the growing flood of completely AI-generated content in infinite volumes, you are likely to hear many voices that share the same opinion, regardless of whether it’s right or wrong, and the quantity of perceived voices may have no correlation with the actual number of people who believe something. In fact, many seemingly different sources are really just echoing or referring back to one original source, not reflecting independent opinions. “I hear it all the time/everyone thinks that” is no longer a useful tool for finding truth.