Squaring the Culture




"...and I will make justice the plumb line, and righteousness the level;
then hail will sweep away the refuge of lies,
and the waters will overflow the secret place."
Isaiah 28:17

12/29/2010 (12:04 pm)

PIPA Is At It Again

For the past seven years or so, conservatives have had to tolerate liberals, Progressives, and assorted pseudo-intellectuals informing them superciliously that it had been “proved” by “research” that Fox News viewers were badly misinformed about current issues. They were usually referring to a horribly constructed bit of research produced by the Program on International Policy Attitudes (PIPA) at the University of Maryland that examined responses to three questions about the run-up to the Iraq war. Somehow, to the left, this proved that Fox was misinforming its viewers about all subjects — and that’s leaving aside glaring concerns about the construction of the questions, and about the utter absence of any attempt to correlate private, personal opinions to particular news reports (I mentioned the report and its flaws briefly here).

Well, they’re at it again. PIPA, which in the interim has recast itself as World Public Opinion.org, recently produced another amusing bit of research purporting to prove, using more current policy questions, that Fox News viewers are the most misinformed of all news viewers. The actual study can be read here, if you want a lesson in how not to perform genuine research.

The title is a masterpiece of studied neutrality: Misinformation and the 2010 Election. So far, so good. A lot of us have noted the role played by the press in keeping voters ill-informed. It’s a worthwhile topic for research. Funny, though, that they didn’t run this after the 2008 election; I guess these Objective Scholars® considered voting for Obama a well-informed choice, and considered that misinformation is only evident when voters elect Republicans. Examining the minds of Obama voters was left to conservative partisans like John Ziegler, who, in fact, did a much better job of demonstrating that the voters were misinformed than PIPA does here (and Obama voters were the most misinformed of all — but nobody was particularly well-informed). Of course, Ziegler used genuine polling organizations like Zogby and Wilson Research, and there was no disputing the factual nature of his questions. He also did not attempt to blame any particular network, which is a fool’s errand. PIPA, take note.

Why the focus on Fox News? The study introduces itself with a concern about the impact of “corporate funding” on elections in the wake of the Citizens United case, which I wrote about last February. This is a particular concern of Progressives, who went insane predicting a flood of corporate money devastating politics in America (which in fact never materialized), but not a concern of anybody else. So we know from the start that the researchers are Progressives, and we all know how Progressives feel about Fox. This explains why PIPA is interested in a question like “Are Fox News viewers misinformed?” rather than a more neutral question like, say, “Which news reports are more accurate?” Of course, they shrouded their focus on Fox in objective-sounding phrases, but the focus was plainly on Fox.

Be that as it may, the study is a laugh riot of methodological blunders.

In the first place, the study defines “truth” as “agreement with the public statements of a particular government agency.” For example, if you disagree with the Congressional Budget Office in their assessment of the effect of the stimulus, then you are, by this definition, misinformed. That example is particularly egregious: the CBO assessed the effect of the stimulus, not by examining actual results, but by running an economic model using the number of stimulus dollars as input, and applying Keynesian multipliers. In short, if you think the CBO’s model is not a good model, you are misinformed — by definition.

This is genuinely funny. Liberals fancy themselves to be the reservoir of intellectual resistance to the government in America. But as of today, liberals claim that if you disagree with the government, you are wrong, by simple definition. Big Brother knows best. Who knew they’d changed sides?

Next, several questions asked people to opine about what “most economists” think. The study defines “most economists” as “the economists who happen to work for the government agency we chose as our source.” No polling of economists was performed, nor were any such polls consulted. So we know before we start that the study’s “correct” answers to questions involving opinions about what “most economists” think are completely unreliable. They really have not the slightest idea what “most economists” think.

And what is it that people are asked about what these gods of economics think? That’s pretty interesting, too: they’re asked, among other things, whether more economists think the health care reform law will increase the deficit, more think it will reduce the deficit, or whether their views are evenly divided. Or, whether more economists think the economy is getting better or getting worse.

My reaction to that is “What kind of idiot decides where they stand by polling economists to find out which view has 60% support? Who the &@#! cares what ‘most economists’ think?” Why aren’t we examining the actual effects of such laws in other countries, or in various states where they’ve been attempted? Why not examine the history of predictions made by politicians advocating various spending programs (which almost invariably understate costs and overstate revenues)?

But no, to PIPA what matters is whether people know which way the wind was blowing among economists — meaning, of course, which way the particular economists were leaning who were working for the particular branch of the Obama administration they chose by whatever means. And they chose objectively, of course.

How very revealing that these Progressives think truth is determined by agreeing with people they consider important! One scales the heights of intellectual mountains by following the academic herd? Really?

I’m reminded of a wry comment about experts made by John Meier in his analysis of the life of Jesus, A Marginal Jew:

Nothing ages faster than relevance. The “cutting edge” of scholarship at any given moment often turns out to be the sharp cliff of Gerasa, off of which academic lemmings keep hurling themselves.

Choosing sides by polling experts is not always such a good idea.

But here is what I consider the crowning deficiency: the study purports to examine whether a news agency misinforms its viewers — without examining a single news report.

How does that work? The questions about what news source respondents viewed don’t sum to 100%, because most people view multiple sources of news — Fox, CBS, newspapers, various Internet sites, etc. The study does not even include most news purveyors, and makes no attempt to identify which of the various sources it does include was the source of the “misinformation.” Using this method it’s not possible to determine which, or whether any, news organization misinformed anybody.

Ultimately, all the study demonstrates is which set of voters was more likely to agree with the current administration’s talking points. For some reason, I don’t consider that a useful test of accuracy, nor does the failure to swallow ObamaCrap® fill me with foreboding about the future of the republic. Call me picky.

Nobody who knows the first thing about social science research can take this “study” seriously. It’s not a study; it’s a paid, partisan hit piece. The fact that so many liberals accept it uncritically and repeat it as fact constitutes proof that they’re either not capable of critical thinking, or not willing to engage in it when the target is conservative. The fact that PIPA and other progressive-leaning think tanks continue to produce such transparently nonsensical “research” constitutes proof that the manipulators of the liberal herd know how to move the cows.


13 Comments

December 30, 2010 @ 12:05 pm #

Welcome back!

“Why the focus on Fox News?”

Pick a target, freeze it, destroy it. (or something close – Alinsky)

They’re crazy to destroy Fox News – they simply can’t have anybody out there who doesn’t spout the party line. They’re soon going to control the FCC, and if they can establish enough rules about what is necessary to inform people for either radio or TV, then they will effectively make Fox into another NPR.

Dennis Prager had a caller on yesterday who was trying to establish that the US has more poverty than any other country within the group of countries covered by the international group he was quoting. I don’t remember the group doing the investigation. Eventually, the fact came out that the determination was based on taking the median income; anybody below a certain percentage of that median income was in poverty. Dennis pointed out that such a non-objective standard meant that effectively, you would _always_ have poverty – unless everybody made exactly the same income. The caller simply repeated the statement that the gap between the very rich and the very poor had widened.

To be honest, I’m not even sure that’s actually true – or that it made a difference – but the fact is that if you don’t have an objective standard, comparisons are pointless. Their goal is for everybody to be the same.

Remember that song about the little houses “made of ticky-tacky”? I keep thinking of that. It was intended to be completely derogatory, but at the same time, that’s their goal. I never understood the inconsistency of it.

December 31, 2010 @ 7:39 pm #

Phil said:

“In the first place, the study defines “truth” as “agreement with the public statements of a particular government agency.” For example, if you disagree with the Congressional Budget Office in their assessment of the effect of the stimulus, then you are, by this definition, misinformed.”

The authors of the study said:

“In most cases we inquired about respondents’ views of expert opinion, as well as the respondents’ own views. While one may argue that a respondent who had a belief that is at odds with expert opinion is misinformed, in designing this study we took the position that some respondents may have had correct information about prevailing expert opinion but nonetheless came to a contrary conclusion, and thus should not be regarded as ‘misinformed.’”

You need to be more careful.

Happy new year Phil!

Joe

January 1, 2011 @ 10:22 am #

Happy New Year!!!

January 1, 2011 @ 12:14 pm #

Joseph’s comment,

“In most cases we inquired about respondents’ views of expert opinion, as well as the respondents’ own views. While one may argue that a respondent who had a belief that is at odds with expert opinion is misinformed, in designing this study we took the position that some respondents may have had correct information about prevailing expert opinion but nonetheless came to a contrary conclusion, and thus should not be regarded as ‘misinformed.’”

This was sort of like a disclaimer; they threw it in because they damn well know their study was flawed. Phil made so many other points in this article, and this was the best you could come up with to pick at?

Like everyone else I wish all of us a Happy New Year, and I also hope that none of us, including you Joe, have to live with the kinds of things you believe in.

Cheers…

January 1, 2011 @ 6:10 pm #

Dale,

Phil made a claim about what the authors said. It turns out that what Phil told you the authors said is precisely the opposite of what they actually said. I pointed this out. In response, you impugned the authors’ motives (as well as my motives) and made excuses for Phil?

How objective is that? Is that how someone who wants to know the truth acts? Even if, as you say, the authors inserted their statement as a disingenuous disclaimer, it is what they actually said. Whether the authors believed what they said or not, it is still what they actually said. Phil claimed that they said the exact opposite of what they actually said. No matter how disingenuous you (assume) the authors to be, Phil misinformed you. There’s no getting around that. And if you truly want to know the truth, that should concern you.

Given the way academic research works, if the study’s method were seriously flawed, every researcher working in this discipline would immediately expose the problematic method. The only alternative explanation for the failure of this to happen is a massive worldwide academic conspiracy to attack Fox News. Why doesn’t the failure of the experts to weigh in with their devastating critiques suggest that Phil, who is not trained in this type of research, might be wrong?

Might it be that you refuse to even consider the possibility that Fox News viewers are more misinformed than others? Could it be that, because Phil is singing the song you want to hear, you listen uncritically, ignore the problematic implications of Phil’s argument, and offer distracting excuses when someone points out that what Phil told you the researchers said was precisely the opposite of what they actually said?

I have no plans to critique Phil’s critique. I decided against that course after the very first claim he made that I examined turned out to be completely and utterly wrong. Frankly, I don’t care whether the study’s conclusion is right or wrong. Phil’s central point – that all viewers are substantially misinformed – is undoubtedly correct.

What I do care about is that there are millions of people like you who are more concerned about maintaining their current belief system, and their partisan loyalties, than they are about acknowledging the truth. This concerns me no matter what the ideology of the believer is.

Joe H.

January 1, 2011 @ 6:25 pm #

It should also be noted that I did not call Phil a “liar,” but instead suggested that he should “be more careful.” I seriously doubt Phil was intentionally misleading his readers, given that he linked to the study and invited his readers to read it themselves.

I suspect that Phil’s partisan devotion got the best of him and caused him to misread the study – just as Dale’s partisan devotion got the best of him and caused him to introduce a distraction and dismiss the significance of Phil’s error.

But that’s just a guess.

Joe H.

January 2, 2011 @ 9:46 pm #

Joe,

“Precisely the opposite of what they said?” Please, Joe, that’s just flat-out dishonest. You’re not that stupid.

Here’s where they established their criteria:

In the course of this study, to identify “misinformation” among voters, we used as reference points the conclusions of key government agencies that are run by professional experts and have a strong reputation for being immune to partisan influences. These include the Congressional Budget Office, the Department of Commerce, and the National Academy of Sciences.

The comment you posted from the study was an exception to this general rule. It’s very clearly worded as such. The general rule was exactly what I said it was. You have to know this as well as I do, because you had to read their criteria to find that quote. So why are you pretending otherwise?

Furthermore, what you did not mention about that exception is that in it, the authors vindicate my criticism of their methods. They note precisely the problem that I point out. But do they correct the error in the method? No. They just decided to violate their own criteria in one or more cases based on their personal assessments of the respondents. They don’t tell us how many data points were affected, but as I explain in a bit, it actually affects all of them.

In fact, I would not be surprised to learn that during their data-gathering work, some respondent laughed at their questions, explained just how lame the “most economists” line was, and explained to them the flaws in the CBO’s method. They may have even stumbled over an economics professor. They probably looked at each other and said, “We can’t really call this guy ‘misinformed,’ can we?” So they marked him down as “informed” on those questions, then threw in the little explanation you quoted in order to cover themselves. That’s my guess.

Only, saying that does not actually cover them. In fact, though they were probably trying to be fair and honest, it completely invalidates their research. You see, the purpose for establishing easily measurable criteria in a study of this sort is to avoid the charge that the authors/researchers are simply asserting their personal point of view. They’re trying to make the claim that their assessment is completely objective. By setting up this little work-around for manifestly well-informed responses that happen to disagree with their formal criteria, they’re actually making a personal fitness assessment of every respondent: this one gets the work-around, that one does not. By doing so, they’ve inadvertently changed their criteria for determining whether a person is “well-informed” or “misinformed” to something like this (these are my words, not theirs, but they may as well be theirs):

We asked questions regarding current economic topics, and then we decided whether we thought the respondent was an idiot. If we thought he was an idiot, we marked him down as “misinformed.” If we thought he was smart, we called him “well informed.”

That’s what happens when the data-gatherers are required to make personal assessments of respondents. They may be intelligent assessments of the respondents; we might even agree with them given the details of the conversations. But the fact that they’re simply weighing the answers themselves and making the call about the data point makes this entire exercise nothing but an editorial. It’s not research, it’s just the opinions of the researchers. Which — surprise! — is exactly what I said about it.

So next time, don’t go telling my readers that I’ve told them the opposite of what the researchers actually did, when in fact I’ve been both careful and accurate. But thank you for adding another, valid criticism of their methods.

January 3, 2011 @ 9:52 pm #

Phil, bear with my stupidity.

In your blog post, you originally said:

“In the first place, the study defines “truth” as “agreement with the public statements of a particular government agency.” For example, if you disagree with the Congressional Budget Office in their assessment of the effect of the stimulus, then you are, by this definition, misinformed.”

You now say your assertion above is a fair interpretation of the following statement from the study:

“In the course of this study, to identify “misinformation” among voters, we used as reference points the conclusions of key government agencies that are run by professional experts and have a strong reputation for being immune to partisan influences. These include the Congressional Budget Office, the Department of Commerce, and the National Academy of Sciences.”

You said this statement is “where they established their criteria,” by which you meant their “criterion for truth.”

First, saying that you are using the conclusions of non-partisan government experts as “reference points” invites the question, reference points for what? Truth, or being informed, as you say? Or something else?

The only way to answer that question is to ask, “what were the authors studying?” As the authors clearly explained, they were attempting to determine the degree to which viewers were misinformed about what the opinions of non-partisan governmental experts actually were.

In other words, the opinions of non-partisan government experts were not used as a criterion for truth, as you told your readers. They were used as reference points for determining the extent to which viewers were misinformed about non-partisan expert opinion.

Then, in an apparent attempt to head off precisely the sort of confusion you fell into, the authors added the following clarification:

“In most cases we inquired about respondents’ views of expert opinion, as well as the respondents’ own views. While one may argue that a respondent who had a belief that is at odds with expert opinion is misinformed, in designing this study we took the position that some respondents may have had correct information about prevailing expert opinion but nonetheless came to a contrary conclusion, and thus should not be regarded as ‘misinformed.’”

This statement is not an “exception” to their general method, or an acknowledgement of the problem you pointed out. It is a clarification regarding their criteria for who they counted as misinformed. Put another way, the authors said, “in case you think we used the opinions of non-partisan experts as the criterion for ‘truth’ and judged those who disagreed with the non-partisan government experts as among the ‘misinformed,’ we didn’t. That was not and is not our position.”

I don’t see how the authors could have been any clearer – and yet you still misinterpreted them. In fact, not only did you misinterpret them, you said they said the exact opposite of what they expressly renounced.

Wonder why that is?

Joe H.

January 10, 2011 @ 6:04 am #

Joe wrote:

In other words, the opinions of non-partisan government experts were not used as a criterion for truth, as you told your readers. They were used as reference points for determining the extent to which viewers were misinformed about non-partisan expert opinion.

They substitute “what does this government agency say” for “what do experts say.” Then they substitute, by implication, “what do experts say” for “what is correct.” By this pair of substitutions, they promote the opinion of a particular agency to the status of “what every informed citizen must know.” Waving statements expressing their good intent is no defense for methods that produce such a result.

Their non-partisan expert opinion is not necessarily non-partisan, nor is it necessarily expert, nor does it represent a broad survey of expert opinion. It’s just a particular government agency. One can easily make the case that “government” and “non-partisan” are mutually exclusive, particularly regarding administration policy. One does not even need to make the case that the employees of the CBO or the OMB are not chosen scientifically to represent the broad spectrum of expert opinions; this is obvious on the face of things. One can easily make the case on particular topics that the government agency is dead wrong, expert or not. One can make the case on particular topics that the statements of the government agency do not actually represent facts, just theories that are removed from the actual events.

And yet, the survey team ignores all of this. To the designers of the survey, the opinion of their single, chosen agency on each topic gets elevated to the status of being the one, crucial bit of information that every informed citizen must know. They ask people “Do you know what experts say about X?” meaning “Do you know what this government agency says about X?” but they never explain their definition of “experts” to their subjects. And then they say, not “Fox News viewers disagree with particular government agencies,” nor even “Fox News viewers disagree with certain experts.” What is being said — by the researchers, not just the press — is “Fox News viewers are misinformed about facts.”

It is quite clear that they’re implying that what “most experts” say is “fact” — it’s what well-informed people know. It is quite clear that they’re representing one particular agency as “most experts.” They can say any self-exonerating thing they like about why they did it that way, but it doesn’t matter; they’re still positing a particular government agency as the standard of truth. Saying “we took the position that some respondents may have had correct information about prevailing expert opinion but nonetheless came to a contrary conclusion,” does not make it so.

If you want to convince me that they’re actually doing the opposite of what I said, you need to do better than provide a single sentence in which they express their intent. You need to show an actual method that they employed that changes that characteristic of the study — a method that does not, at the same time, remove all semblance of objectivity from the study. As I explained in my last comment, I don’t see how that is possible without a completely different design.

I have informed my readers correctly.

January 10, 2011 @ 9:46 am #

Pardon me for butting in after such a long time, Phil, but while you may be correct on the nature of the article, you did tell an untruth (inadvertent?) right out front. You said right out the study defined truth as one thing, when it explicitly stated otherwise.

Perhaps it would be good to apologize for either being unclear or mistaken.

January 10, 2011 @ 12:34 pm #

Gordon:

The authors say they are going to do one thing, but in actual fact, they do another. What Joe is insisting, and what you suggest, is that I honor their stated intent. What I did was accurately report what they did — which is different from what they said. That’s not untruth, that’s good analysis. Whatever they claimed was their intent, they did actually make the opinion of particular government agencies the standard for truth in their research. It’s what I saw immediately upon reading the report, and it’s my job to say so. That they did this while claiming to be performing meaningful, accurate research, was the very point of my article.

Sometimes I see the result of a particular approach without consciously analyzing how they went about producing it, and it takes me additional thought and explanation to reproduce the path I took to arrive there. This discussion with Joe Huster exposes that I was unclear about what particular method the researchers used to turn government opinion into indisputable fact. If that’s what you mean, I’m sorry you were confused, and I certainly did not mean to confuse you. My original article was not intended to be a detailed critique of the authors’ methods, nor did I represent it as such. I had to resort to a more detailed approach to answer Joe. I hope the more detailed answer made my meaning clear to you.

January 10, 2011 @ 1:21 pm #

One more thing:

The one place in this article where, on reflection, I said “Oh, gee, I should not have written that that way,” was this:

No polling of economists was performed, nor were any such polls consulted.

The authors of the research claim:

We also noted efforts to survey elite opinion, such as the regular survey of economists conducted by the Wall Street Journal; however, we only used this as supporting evidence for what constitutes expert opinion.

So they did take note of some surveys; only, it is not clear when or how they were used, and they themselves say they only used such surveys to support what they’d acquired by other means in some specific but unidentified instance or instances.

Perhaps, then, I should have written that particular paragraph more like this:

They did not base their opinion of what “most experts” think on surveys of experts, and though they claim to have consulted one or two such surveys as a second thought, they are not specific about which topics, nor do they provide any discussion beyond “we looked at a few.” This falls far short of scientific rigor. In actual fact, they probably had very little idea what “most experts” think.

For that inaccuracy, I do apologize. The outcome does not change my analysis at all; they truly had very little idea what “most experts” think on most topics, and the purpose of the research seems to be to express outrage that certain citizens don’t take the opinions of the CBO, the Department of Commerce, or the National Academy of Sciences as indisputable fact. It would have been just as reasonable, given the data, to conclude “Fox News viewers are more apt to think for themselves than are viewers of MSNBC,” but, of course, that would not have fit the researchers’ partisan agenda very well.

Whichever way you want to cut it, this “research” is not research, it’s opinion.

January 10, 2011 @ 10:30 pm #

That’s much clearer, Phil…and I will give you credit that the authors may not have held to their plan, no matter what their original claim was.

Thanks!
