What if the more you knew about something, the less you knew about what you cared to know about?
I have wanted to write something about assessment for some time, and for many reasons. I have seen many organizations, mine included, become fascinated with the concept of knowing more about their products, chapters or business. I hear “data” and “assessment” about as often as I hear “values” and “feminist” at Association of Fraternity & Sorority Advisors Annual Meetings (Hashtag AFAAM).
Assessment and data may be the most overused buzzwords of the 2010s. More than “terrorism,” more than “fair share,” more than “radical left/right.” Everyone wants them, everyone believes they make what they say sound brilliant, and everyone believes whatever they read, so long as numbers back it up.
People are desperate to turn politics, baseball and even fraternity and sorority life into something predictable, whether in hopes of making a quick buck as a reliable forecaster or of eliminating risk from the world.
But collecting information doesn’t necessarily make a person more knowledgeable in the same manner that being “well-educated” doesn’t mean that one makes wise decisions.
Consider the National Security Agency (NSA), the group that collects “metadata” (how much you spend on your credit card and where, or whom you call, for example). Since 9/11, the United States is estimated to have spent approximately half a trillion dollars ($500,000,000,000) collecting data on just about every one of the 310,000,000+ American citizens and tens of millions of others around the world, and yet not one terrorist plot has been thwarted by that data collection.
In theory, having such a vast accumulation of information, information many would consider valuable in identifying terrorists planning to make a move, would have eliminated the risk of the Boston Bombers or San Bernardino shooters, but it didn’t. Why? A conspiracy theorist could come up with any number of ideas, but I and many others simply think that too much data is being collected for meaningful analysis to take place.
The same can be said for fraternities, sororities, and any element of life in which we believe that collecting data is necessary.
In collecting data from “likely voters,” polls in Kentucky incorrectly predicted the winner of the 2015 governor’s election. It wasn’t a matter of the actual vote being 51%-49% compared to a poll average of 49%-51%. No. The polls were off by almost 10%; at that point they were completely irrelevant, even though they dominated the conversation leading up to the election.
Needle In A Haystack
Oftentimes, collecting a ton of information simply creates a bigger haystack, making it harder to determine which needle we are trying to find. We see trends and correlations and immediately begin researching each of them to determine if there is “causation,” but in many cases we already know the answers to the questions we are assessing.
Why do we ignore our gut? Why do we ignore what people are openly telling us? Why must there always be some elaborate, exclusive formula for making a prediction?
In many ways, it’s simply the manner in which analysts create work for themselves. That’s not to say that doing so is a bad thing; it’s important for a person to protect his or her profession as long as possible. But we have reached a point of religious attachment to data and assessment, where they have become the only way to prove that something is or isn’t.
Steve Jobs believed that focus groups were a waste of time, and I’m very inclined to agree. The products he was creating fit a need, and he knew that need. Focus groups consisting of people who already owned CD players and Walkmans were unlikely to advise a company like Apple to pursue an expensive product like the iPod when they already had something that worked, that played the music they had collected and that wouldn’t cost them any additional money.
In the same way, a professional with no real-world experience with a chapter is probably just as likely to figure out where that chapter’s problems stem from through an elaborate assessment as by simply asking the chapter and its advisers, “What’s wrong here?”
I don’t mean to suggest that assessment and data collection are irrelevant. But if one does not know specifically what he or she is looking for then he or she is likely to find things to look for after the data has been collected. That’s an easy way to end up on a completely incorrect track, with data to back him or her up, and then make decisions based on faulty assumptions.
Worse, most people who work in assessment, polling or experimentation have an expectation of what the data will show and may twist it to fit their narrative rather than report it objectively.
Most polls fail to mention that they ask no one under the age of 40 whom they intend to vote for. Scientists get published when they report “significant” findings, leaving the equally relevant “insignificant” findings for those who care to dig through dozens of pages of analysis.
Everything is biased, and it doesn’t create a better world for any of us when fallible, biased humans have a stash of data for no good reason.