News & Politics

katzmeow726
"There are three kinds of lies...
October 3, 2012 at 2:51 PM

...Lies, damned lies, and statistics." --Mark Twain

   I remember growing up, this was one of my dad's favorite quotes.  I didn't really understand it as well as I thought I did until I took a couple of basic psychology courses and a statistics course in college.

Boy am I glad I took those classes now.

   I have yet to see a day pass where some statistic, usually political in nature, comes up saying this or that.  Usually it is followed by, or even accompanied by, another statistic that says the opposite.  It doesn't take a statistics course to tell you that you really cannot take statistics at face value.

A mediocre statistician (or whoever is leading a poll/study) can easily manipulate the figures and data to get the result they want.  And that is assuming the study or poll itself was not compromised.  The variables in a study make a HUGE difference:

Size of the sampled group (more on how much this one matters below)
Demographics of those sampled:
--Religions, political affiliations, race, incomes, etc.
Location of those sampled
Questions asked--a carefully worded questionnaire can lead people toward the specific answer the researchers want.

And the list goes on...the variables are so easily manipulated, it is scary.
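
To put a rough number on that first item, sample size: below is a quick Python sketch of the standard 95% margin-of-error formula for a simple random sample.  This is purely my own illustration (it doesn't come from any particular poll), and it assumes a truly random sample, which most of the polls I'm complaining about don't actually have.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Approximate 95% margin of error for a simple random sample,
        # using the worst case p = 0.5 unless told otherwise.
        return z * math.sqrt(p * (1 - p) / n)

    for n in (70, 400, 1000, 10000):
        print(f"n = {n:>5}: +/- {margin_of_error(n) * 100:.1f} percentage points")

Roughly: 70 people gets you about +/- 12 points, 1,000 people about +/- 3.  And that number only covers random sampling error; the wording, the demographics, and whether people answer honestly never show up in it, which is exactly why a tidy "+/- 3%" can still be wildly misleading.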

  So where do you start in the process of validating a statistic?  I always start by finding out who funded the research, be it a phone poll or a scientific study.  If an extreme group or organization funds a political study that supports its own view, chances are that study is not going to be very accurate (this goes for Democrats, Republicans, AND independents).  If a study claiming smoking is not as dangerous as we thought is funded by a tobacco company, you can probably bet it was strongly manipulated too.

From there, I look up other statistics on the topic.  Do the numbers from other studies show similar results?  Can the results be easily repeated, or (even better) have they been repeated?  Are the other studies reliable (which, again, means researching who funded them)?

Then I like to find the demographics of those sampled.  Race, religion, political affiliation, income...all of that.  For example, let's say someone takes a group of people and asks them whether welfare should be abolished.  If the group is made up mostly of people in the upper income brackets, you will get a totally different result than if you asked the same question of groups in the middle and lower income brackets.

Or take the questions themselves.  I did a project on this in college.  I made three questionnaires on parenting.  All asked the same five questions, except for the last one.  One asked "Do you spank your child?", one asked "Do you hit your child?", and the third asked "Do you use any form of physical discipline?"

Granted, it would not be accepted as a "good study," but it was interesting.  My group spent three straight weekends at the mall in rotations (thankfully that meant each of us only did one weekend lol).  Our mall is visited by people across varying demographics, and I felt like we had a pretty balanced sample group.  The spanking form had moderate results: out of the 70 filled out, 42 said yes.  The "hitting" form was different.  Only 13 people out of 70 said yes.
   To drive the point home: as one person walked off, I saw her pop her child on the hand for trying to grab at a garbage can.  Keep in mind, some people see spanking, popping, and hitting as the same thing; others view them as different levels of physical discipline.

Then there was the third sheet, about "physical discipline."  It came in somewhat higher than the spanking one, at 51 yeses out of 70.
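
Just to show how big that wording gap is, here's the same arithmetic as a little Python sketch.  The counts are from our forms; the z-test at the end is my own back-of-the-envelope check (and with a mall convenience sample it's only illustrative, not anything a journal would accept).

    import math

    # Yes counts from our three questionnaires, 70 forms each
    results = {"spank": 42, "hit": 13, "any physical discipline": 51}
    N = 70

    for wording, yes in results.items():
        print(f"'{wording}': {yes}/{N} said yes ({yes / N:.0%})")

    # Rough two-proportion z-test: is the spank-vs-hit gap bigger
    # than random chance alone would explain?
    p1, p2 = 42 / N, 13 / N
    pooled = (42 + 13) / (2 * N)
    se = math.sqrt(pooled * (1 - pooled) * (2 / N))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    print(f"z = {z:.1f}, p = {p_value:.1g}")

Even with every caveat about the sample, a gap that size between "spank" and "hit" isn't noise; it's the wording doing the work.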

  Now, why is this study invalid?  First off, we phrased the questions in such a way that people who gave one answer on one form might have given a different answer on another.  Secondly, one of the biggest variables is getting people to answer honestly.  I have to wonder how many of the parents who said "no" to hitting only said so because they spank, and did not want to think of it as "hitting" their child.  When you use softer phrases like "spanking" or "physical discipline," people tend to be more honest and more accepting of the idea.  Something like "hitting" carries a much more negative connotation.
   And, to delve into the psychology, we also tried to balance asking parents who had children with them against parents who did not.  We also noticed (after making a note of it) that parents who filled out the survey while their child was with them and being fussy were more likely to answer the discipline question with a yes.  It was a small trend, however, and should only be taken as something worth studying further, not proof of a correlation.

Okay, sorry for the book.  But the point is, statistics really cannot be trusted at face value, and it is not easy to research them.  I've even gone so far as to call the schools or companies that funded and performed the studies.  Which is why I almost always ignore statistics: because I feel the need to dive in and research them lol.

Oh, and my group made a 98/100 on our project....the more manipulated and "invalid" your study, the better the grade.  We even got 3 points of extra credit for including the note about fussy kids being a possible reason people said yes or no to the key question.


    
