Paul Offit’s Vaccines course recently started up on Coursera.org, and I intend to follow it through this semester and possibly use elements of this course and the discussions I find there in my own class (esp. Microbiology, but perhaps also in General Biology).
In the discussion forums, someone using the pseudonym Amy Pond posed a great question: “How do you decide what constitutes a reliable source of information?”
It is deceptively difficult to answer. If the question concerns science, should everyone be expected to track down primary publications and review the data for themselves? If so, how do you even decide which sources to get your data from? And if you admit that you do not have the time, ability, or inclination to go to the data, is there anyone you can trust to give you the straight dope?
We live in an interconnected world with a surfeit of information. How can we avoid confirmation bias in our online ‘research’? Does the popularity of an opinion (the bandwagon effect) make it more or less believable? How do the search terms you use bias the answers you receive when ‘asking Google’, i.e., how does the framing of the question shape the answers you get back?
So, how do you decide what sources to listen to?
ryan59479
September 4, 2013 at 4:02 pm
Personally, I think that the source is less important than our ability to independently evaluate the methodology or conclusions reached by a study. I know that in the classroom or in the workplace, peer review is the gold standard for scientific literature, and I agree that it’s a great starting place. You’re much more likely to find a well-executed experiment or study in a scientific database like CINAHL or a journal put out by the AMA than, say, Googling something and finding an article on CNN. But I think that academia does students a great disservice by presenting the “peer-reviewed = reliable” paradigm. I don’t think anyone should blindly accept any conclusion as fact just because it was published in a journal. I can’t tell you how many studies I’ve read in peer-reviewed journals that warrant a, “Well, how did they control for ____?” That doesn’t mean that the source was unreliable, or that the conclusions should just be thrown out the window–merely that the conclusion drawn merits further investigation or that I should assign a lower level of probability to its veracity. I think that the goal should be to have the ability to use critical thinking to objectively analyze evidence and make an argument about its strength and validity.
downhousesoftware
September 4, 2013 at 10:31 pm
A good answer, Ryan. I actually agree entirely; there are plenty of articles I’ve read in top-tier journals that have questionable controls or interpretations of the data.
With this question, I’m really thinking about the layman – or even a scientist reading about something outside of their own area of expertise. That is, short of going to the data oneself, how can one get some reasonable assurance that what one is being told is a fair summary or interpretation from competent (mainstream?) scientists rather than snake-oil salesmen?
I often feel like people I talk with get their information from poor sources, but how can I expect them to know any better?
ryan59479
September 5, 2013 at 11:37 am
I agree; the problem is considerably harder when it comes to the layman. I know it sounds a bit cliché, but I really do blame the education system in America on this one. First, it’s underfunded and understaffed. But second, and more importantly, it’s a poor learning model. What modern-day American schools teach children is how to cram for exams: how to memorize facts and then quickly forget them right after the test. Schools do not instill the value of learning in their students; they’re too busy trying to get everyone to pass the standardized tests (which is another ridiculous concept) so they look better and get more funding. We no longer teach our students *how* to think; those all-important critical thinking skills are quickly vanishing. I think that a basic understanding of the scientific method and experimental design is within the intellectual grasp of everybody. You don’t need a PhD in statistics to understand how a study was designed or whether or not a bias is present. All you need is someone to guide you through the process.
Thanks to the internet, I think that there’s always going to be a certain amount of “bad evidence” circulating that we’ll just have to deal with. But if we spent a little time teaching our kids how to think analytically, giving them more of a discerning eye, perhaps they would be able to see through the snake-oil salesmen.
downhousesoftware
September 15, 2013 at 12:04 am
I completely agree, Ryan. However, it’s important to consider a couple of things…
First, it is incredibly hard to learn – or to teach – critical analysis. Once you ‘get it’ and see that all science is really just a process of trial and error and the refining of ideas, it’s hard to look back and understand why others don’t see it too. (Part of the curse of knowledge, I think.) When trying to teach, it’s often surprising when students don’t see the big picture once they’re presented with some good data. But then I remember how long it took me to learn the same things (assuming I have). No wonder my mentor called me an untrained monkey so often.
Second, what makes a good teacher has always been difficult to define and nearly impossible to teach. The Gates Foundation recently did a study on best practices in teaching, only to find that they couldn’t pin down what those practices were – only that some teachers made a connection with students and passed on their knowledge, and some didn’t.
Again, I agree with your analysis, but I wonder if the best way to improve teaching is just to reward those who do it well (by what standard?) and let go of the ones who don’t seem to learn it.
Mehron
September 5, 2013 at 3:09 am
I think about this a lot and I wish I knew the answer. Sometimes, the internet seems to be the great enabler of confirmation bias and an endless source of misinformation.
I recently came across this “Baloney Detection Kit” video, which looks to be at least partially based on advice from Carl Sagan. This approach to evaluating claims seems like a good option for the layman and should get them a little closer to putting specious claims in their place. A lot of it boils down to common sense, but with all the cognitive biases in play, things get a little tricky. It’s hard enough for someone looking for the truth, but a lot of people don’t have a skeptical mindset from the start. Many will be satisfied by any claim that sounds plausible and leave it at that.