Saturday, March 21, 2015

Red flags of bullshit: the ad hominem

The best way to attack a scientific position is the same as the way to promote it: evidence. Evidence, in the scientific world, is data that has been collected objectively. 

If you're reading an article or a post and there's no data, then you're almost certainly wasting your time - though maybe it's entertaining. 

Below is a post by a friend I found on Facebook. It's by a person with an engineering degree from Lehigh who is a math whiz. Apparently this does not make you immune from bad arguments. Ad hominem arguments attack the person (the messenger) and not the theory (the message). I often see this used in the climate change and vaccination debates, and often from both sides. 

There are lots of opportunities to point out weaknesses in climate change theory using data. You could show that models use parameters that are not reasonable, or that certain feedbacks are not justified. I actually know of no such weaknesses in climate models, and I try to follow the debate as closely as possible.

The funny (or sad) thing about this post is the hypocrisy within it. When you say "supposed experts," you're using an ad hominem attack. One of the main proponents of human-induced climate change is Michael Mann, and that guy is an actual expert. He gets a ton of hate mail and has even written a book about it. However, the grand prize for getting attacked goes to Al Gore. Just go to the Weather Channel's Facebook page and look at any post on climate change. Thing is, Gore had nothing to do with the science of climate change except for being a mouthpiece, so bringing him up is a non sequitur when discussing the validity of climate models. Certainly, the politics of the people holding a particular scientific position has nothing to do with the validity of the position. 

More broadly, when arguing about a scientific issue, remember "Quod gratis asseritur, gratis negatur." Christopher Hitchens put this as "What can be asserted without evidence can be dismissed without evidence." 



  1. AR5 notes a legion of weaknesses in current climate models. For example:

    "The simulation of clouds in climate models remains challenging.
    There is very high confidence that uncertainties in cloud processes
    explain much of the spread in modelled climate sensitivity. However,
    the simulation of clouds in climate models has shown modest improvement
    relative to models available at the time of the AR4, and this has
    been aided by new evaluation techniques and new observations for
    clouds. Nevertheless, biases in cloud simulation lead to regional errors
    on cloud radiative effect of several tens of watts per square meter." (743)

    And here is a flaw in current cloud modeling discussed in a recent paper:

    "[C]louds provide the necessary degrees of freedom to modulate the Earth's albedo setting the hemispheric symmetry. We also show that current climate models lack this same degree of hemispheric symmetry and regulation by clouds."

  2. Great points. I should have been more specific and pointed out that the models aren't perfect (no model ever is), but their parameters are not cherry-picked so as to be misleading. I know about the cloud issue, but I believe it is a matter of accuracy and precision in the models, not necessarily one that negates their use. From my understanding, hindcasting - running models against the observed past as a check - has shown several climate models to be very strong. Even Hansen's early models did OK.
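To make the hindcasting idea concrete, here is a minimal sketch of how one might score a model's reconstruction of the past against observations. This is a toy illustration, not a real climate model: the anomaly series below are made-up numbers, and the skill scores (RMSE and Pearson correlation) are just two common choices among many.

```python
# Toy hindcast check: compare a model's reconstructed past values
# against observations using simple skill scores.
# All data below is hypothetical and purely illustrative.

import math

def rmse(model, obs):
    """Root-mean-square error between hindcast and observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def correlation(model, obs):
    """Pearson correlation between hindcast and observations."""
    n = len(obs)
    mean_m = sum(model) / n
    mean_o = sum(obs) / n
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs))
    sd_m = math.sqrt(sum((m - mean_m) ** 2 for m in model))
    sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs))
    return cov / (sd_m * sd_o)

# Hypothetical temperature anomalies (degrees C), one value per period.
observed = [0.02, 0.08, 0.05, 0.15, 0.22, 0.31, 0.45, 0.58]
hindcast = [0.00, 0.06, 0.09, 0.13, 0.25, 0.29, 0.48, 0.55]

print(f"RMSE: {rmse(hindcast, observed):.3f} C")
print(f"Correlation: {correlation(hindcast, observed):.3f}")
```

A low RMSE and a high correlation over the instrumental record is what "did OK" amounts to in this kind of check, though as the next comment points out, good performance on a short record does not by itself prove the physics is right.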

  3. The question is - should I find papers and look at their models and the performance of each? I think so. 1. I like modeling. 2. I need a clue.

  4. Before you do that, I would have a go at chapter 9 of AR5WG1:
    starting with the box on pages 824-825, which gets right to this point.

    A bunch of concerns remain for me:
    1. Are "the best" models performing well because they are "the best", or because they lucked into good modeling of the tiny period for which we have an instrumental record?
    2. Are the improvements in modeling more a function of increasing computer power than better understanding of the physics?
    3. Are the improvements in modeling more a function of "adding epicycles" than better understanding of the physics?
    4. Is "adding epicycles" (in a non-pejorative sense) a necessity given a chaotic system in which our understanding acknowledges both "known unknowns" and "unknown unknowns"? (See, for example, this recent paper, which the skeptic blogosphere crowed about [or should I say baaed about? Oops. Going ad hominem. Time to shut up].)