
Understanding Our Information Diet

The somewhat elusive key to understanding Information Overload, and thus to developing meaningful solutions that lessen its impact, is first to build a clear picture of how much information individuals receive and consume, and then to understand how much information is too much in a given circumstance.

This is a tricky set of problems because information does not lend itself to direct measurement. Traditionally, researchers have approached the question in one of three ways, namely by looking at words, bytes, or time. A document, for instance, could be high in words, low in bytes, and high in time spent reading it. A video clip, on the other hand, could be low in words, high in bytes, and low in time.

Research conducted at the University of California, San Diego tells us that roughly 3.6 zettabytes of information were consumed by Americans in their homes in 2008. This translates to approximately 11.8 hours of information consumption per person per day. Those numbers are, as stated, for information received and consumed solely in the home and do not address business settings.

In the coming months we will begin our efforts to determine how much information knowledge workers consume in the course of their work, thereby developing a profile of the knowledge worker’s information diet.

One concept we are studying is satisficing, a method of decision making that settles for an “adequate” solution to a problem rather than searching relentlessly for an optimal solution that may cost more time than it is worth. Satisficing is a naturally occurring and largely subconscious thought process that probably kept humankind from starving at some point in history, when our ancestors decided that they could make do with the berries on the tree rather than wait forever for the perfect mammoth to pass by.

Depending on the circumstances, knowledge workers both under- and overuse this strategy. Overuse frequently leaves them with sub-par solutions to a problem; underuse results in time wasted searching when an adequate solution is already at hand.
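To make the trade-off concrete, here is a minimal Python sketch of the two strategies. Everything in it is hypothetical and for illustration only: the options, their quality scores, and the “adequate” threshold stand in for whatever a knowledge worker is actually evaluating, such as search results or vendor proposals.

```python
import random

def evaluate(option):
    """Hypothetical cost of judging one option (e.g., skimming a document)."""
    return option["quality"]

def satisfice(options, threshold):
    """Stop at the first option whose quality clears the 'adequate' bar;
    return it along with the number of evaluations it took."""
    for n, option in enumerate(options, start=1):
        if evaluate(option) >= threshold:
            return option, n
    # Nothing was adequate: fall back to the best of what we saw.
    return max(options, key=evaluate), len(options)

def optimize(options):
    """Evaluate every option and return the single best one."""
    return max(options, key=evaluate), len(options)

random.seed(42)
options = [{"id": i, "quality": random.random()} for i in range(1000)]

pick, cost = satisfice(options, threshold=0.9)
print(f"satisficer: quality {pick['quality']:.3f} after {cost:4d} evaluations")

pick, cost = optimize(options)
print(f"optimizer:  quality {pick['quality']:.3f} after {cost:4d} evaluations")
```

On a long list of options, the satisficer typically stops after a handful of evaluations with a good-enough pick, while the optimizer pays for every single evaluation to gain a small improvement in quality. Over- and underuse of the strategy correspond, roughly, to setting that threshold too low or too high.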

Another interesting concept we are grappling with is how to measure information. Shannon entropy, developed by Claude E. Shannon in 1948, is a way to measure the average information content of a message in units such as bits. Perhaps more intriguingly, it also provides a way to quantify the information a knowledge worker is missing when the value of some variable is unknown. For example, if only the last letter of a word is missing, it would be relatively easy to determine the word, as the other letters would provide context. However, if only one or two of the letters in the word are presented, it would be much harder to determine the word, as there is little or no context.
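For readers who want to see the arithmetic, the snippet below computes Shannon entropy, H = −Σ p·log₂(p), in Python. The sample sentence is ours and purely illustrative; a real estimate would be built from a large corpus, and this simple letter-frequency calculation deliberately ignores the between-letter context that makes the word-guessing example above work.

```python
import math
from collections import Counter

def shannon_entropy(probabilities):
    """H(X) = -sum(p * log2(p)) over all outcomes with p > 0, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# (a) If all 26 letters were equally likely, each one would carry
#     log2(26) ≈ 4.70 bits of information.
uniform = [1 / 26] * 26
print(f"uniform letters: {shannon_entropy(uniform):.2f} bits/letter")

# (b) Letters in real text are not equally likely, so the average
#     information content per letter is lower. Here we estimate the
#     distribution from a short illustrative sample.
sample = "the quick brown fox jumps over the lazy dog and runs away"
letters = [c for c in sample if c.isalpha()]
counts = Counter(letters)
probs = [n / len(letters) for n in counts.values()]
print(f"sample text:     {shannon_entropy(probs):.2f} bits/letter")
```

The second figure comes out lower than the first because real letters are not equally likely, and accounting for the context between letters would lower it further still, which is exactly why a word with only its last letter missing is easy to reconstruct.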

Since this is ongoing work, and many of you readers have backgrounds in this area, we would like to hear from you in the coming weeks. What do you think is the most valid way to measure information? How much work-related information do you estimate you are exposed to on a daily basis, and how did you arrive at that estimate?

Please participate in the discussion below.