Lately I’ve been reading the philosophy of science. How boring is that, you say? Not at all. It is actually very interesting and quite useful too. I know there are people who think that “academic wisdom” is just philosophers debating things that have little to do with reality. Well, that is exactly why they fall into traps like Taylorism or Systems Thinking without even realizing it. It is a completely different thing to use these frameworks consciously, understanding their pitfalls, benefits and shortcomings, than to use them blindly. So here’s my first topic to cover – the Trap of Positivism.

(If you want to take a look at my second post in this series, here it is: “Relativism and stupid(?) management decisions”, and the third one is about “Realism and motivated employees”.)

Trap of Positivism

The positivist movement started with Auguste Comte, who was also the father of sociology. His thinking was that the development of science has three stages: Theological, Metaphysical and finally Positive. The Positive stage meant that metaphysics and all that nonsense would be abandoned and knowledge would be gathered only through our own senses. In other words, Positive science would rely on empirical evidence. Another important aspect of this thinking was that the social sciences should use exactly the same positivist methodologies as the natural sciences.

The reasoning used in positive, or positivist, science is called inductive reasoning. It means that whatever is perceived can be used to formulate general conclusions. If you see a hundred swans that are white, you conclude that all swans are white. This is where the common phrase “black swan” comes from: a single black swan is enough to refute the generalization. With inductive logic, you can never be sure your theory will survive the test of time.

John Stuart Mill concluded that all positive theories are based on this kind of logic: first you perceive an event, then you formulate a general law that explains it. For example: a SW designer didn’t check the statistical analysis tool before making a commit. Why is that? Answer: 1) Lazy people don’t check the tool before committing. 2) The SW designer is lazy. If that was the only SW designer you had ever met, you could even conclude: all SW designers are lazy. So you see, this kind of positivist logic is used to make generalized assumptions. Of course, when you meet a coder who actually checks the tool before committing, your theory has to be sharpened: all SW designers who don’t check the tool are lazy. And if you happen to meet a SW designer who didn’t check the tool for some reason other than laziness, your theory has to be adjusted again.

For positivists this was the only acceptable method of reasoning. Deductive reasoning wasn’t allowed, as it starts from non-empirical assumptions. E.g. 1) People who don’t check the statistical analysis tool before committing are lazy. 2) A SW designer didn’t check the tool. -> The SW designer is lazy.
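If you are more fluent in code than in philosophy jargon, here is a minimal Python sketch of the difference (the data and all the names are invented for illustration): induction builds and patches the general law from observations, while deduction assumes the law and applies it to a case.

```python
# Toy models of the two reasoning styles; everything here is made up
# for illustration, not taken from any real codebase.

# Inductive: generalize from observed cases, refine on counterexamples.
observations = [
    {"checked_tool": False, "lazy": True},
    {"checked_tool": False, "lazy": True},
]

def theory_holds(cases):
    """Naive induction: 'everyone who skips the check is lazy' holds
    as long as every observed non-checker was in fact lazy."""
    return all(c["lazy"] for c in cases if not c["checked_tool"])

print(theory_holds(observations))  # True, so far

# A diligent-but-rushed designer is observed: the generalization breaks
# and the theory has to be adjusted, exactly as in the text above.
observations.append({"checked_tool": False, "lazy": False})
print(theory_holds(observations))  # False

# Deductive: start from a general premise and apply it.
def deduce_lazy(checked_tool):
    """Premise: anyone who skips the check is lazy (non-empirical!).
    The conclusion follows from the premise, not from observations."""
    return not checked_tool

print(deduce_lazy(False))  # True: 'lazy' follows from the premise alone
```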

The birth of reductionism and behaviorism

This empiricism, together with the insistence on a single methodology, led to the rise of reductionism. Reductionism means that every theory can be reduced to some more basic theory. An example chain would be: Organizational Development theory -> psychology -> behavior. In the field of psychology this thinking led to behaviorism.

Behaviorism is a school of thought that leans heavily on empiricism. In this view it is completely unnecessary, even nonsense, to build theories or concepts that can’t be directly perceived from human behavior. For example, our SW designer is just a complicated set of stimulus-response pairs: the SW designer finishes the code, which triggers a stimulus to commit the code -> the SW designer commits the code. There is absolutely no reason to think about mental phenomena like anxiety, hurry, laziness, etc. If we want the designer to check the statistical analysis tool before committing, we need to make sure there is a stimulus that triggers him to do so. Here we could use rewards and punishments to condition him accordingly.
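A strict behaviorist model of our designer could be caricatured in a few lines of Python (again a toy sketch with invented names, not a real system): there is no inner life at all, only a stimulus-response table, and conditioning just rewrites the table.

```python
# The designer as a bare stimulus-response table: no anxiety, no hurry,
# no laziness, nothing mental behind the mapping.
responses = {"code_finished": "commit"}

def respond(stimulus):
    """All behavior is just a lookup of the conditioned response."""
    return responses.get(stimulus, "do_nothing")

def condition(stimulus, desired_response):
    """Rewards and punishments do one thing: rewrite the mapping."""
    responses[stimulus] = desired_response

print(respond("code_finished"))           # 'commit'
condition("code_finished", "check_tool")  # apply the reward/punishment scheme
print(respond("code_finished"))           # 'check_tool'
```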

Reductionism also leads to methodological individualism, which means that all social phenomena can be reduced to individual behavior or mind states. So if we see that our SW has quality problems, we assume the behavior of individuals is causing them. Probably they have been conditioned wrongly, to seek the fast reward of committing instead of checking quality first. So we might build processes to ensure every SW designer is conditioned correctly. If we think the behavior comes from certain mind states (as physicalism, another form of reductionism, assumes), we might instead try to ensure that all SW designers have a suitable mind state (or, more commonly, mindset) to behave correctly.
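Methodological individualism can be caricatured the same way (once more a toy sketch with invented names): the team-level quality problem is treated as nothing over and above facts about individuals.

```python
# The 'social' fact is reduced to a function of individual facts.
team = [
    {"name": "A", "checks_tool_before_commit": True},
    {"name": "B", "checks_tool_before_commit": False},
]

def team_has_quality_problem(members):
    """Reduce the team-level phenomenon to individual behavior:
    a quality problem exists iff some individual skips the check."""
    return any(not m["checks_tool_before_commit"] for m in members)

print(team_has_quality_problem(team))  # True -> 'condition the individuals'
```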

Problems of Positivism

It is easy to see that this positivist philosophy underlies the culture of Scientific Management / Taylorism. People are seen as individuals behaving according to their conditioning. The role of the manager is then to build processes that put the right kind of conditioning mechanisms in place. If the processes are good, the mindsets, and thus the behavior, will change. Talking about “soft” subjects like feelings or motivation is either nonsense (metaphysics) or just another way of referring to the mind states that need to be conditioned.

This kind of thinking is nowadays strongly criticized by social psychology and by modern organizational development methodologies that utilize systems thinking. However, it is quite easy to point out the weaknesses of positivist thinking on a more basic level as well:

  • Empiricism itself has many problems. Nowadays we know that perception is heavily theory- and context-laden: we perceive only the things we expect to perceive, or at least our expectations affect what we perceive (see e.g. the selective attention test). It also isn’t clear what a perception even is. Is it empirical evidence if we use technical tools (like an electron microscope) to aid us?
  • Behaviorism also has plenty of problems. What about the subjective experiences we have? Aren’t those real? Don’t they affect our behavior? What about the wide array of things we might do in a given mental state? (For example, if I’m in a “happy mind state” I might work harder, or I might take a break and watch the birds flying in the sky.) And what about context? Doesn’t that affect behavior too? And you have to agree we can also have mental states that can’t be seen from the outside. (E.g. I might smile and nod at your suggestions but still think that you are an idiot…)

So, to conclude: it is important to know what kind of thinking we are utilizing. We might continue to use it, but at least we should know its limits. I think behaviorism is great. So is reductionism. They enable me to look at my reality from different perspectives. But they are huge abstractions, and using them blindly can lead to big problems.
