I very much enjoy Herbert Gintis's reviews on Amazon.com: they are always very detailed and often spot on. His latest is particularly interesting because Gintis lays out, clearly and concisely, what rational choice theory says and what it does not. Here are the most important excerpts:
« I come from a theoretical position in behavioral science that takes the so-called rational actor model seriously as a tool for understanding human behavior. The model assumes people have certain beliefs (called subjective priors) which they use to maximize an objective function that reflects their personal values, needs, likes, and dislikes. The model is very well ensconced in traditional economic theory, and describes human behavior extremely well, as long as we recognize a few "provisos." First, people don’t literally "maximize" anything, any more than does a baseball player chasing down a fly ball or a fox chasing down a rabbit.
The theory says that if people’s preferences are consistent, we can model them "as if" they were maximizing, just as we act "as if" a thermodynamic system is maximizing entropy—it’s an innocuous short-hand expression. Second, individual wants and desires are not perfectly attuned to individual well-being. People engage in all sorts of harmful practices, such as smoking cigarettes and eating unhealthy foods. Third, people are generally not selfish, but rather exhibit other-regarding preferences and prefer in many circumstances to behave morally even when this is costly in terms of forgone alternatives. Thus, the sort of rationality embodied in the rational actor model is rather thin, but it is sufficient to enable economists (and biologists who use the model in studying animal behavior and epidemiology) to get lots of things right lots of the time.
The core problem with the rational actor model is that, while it deals with risk rather well, it has absolutely no handle on genuine "uncertainty," as the famous Ellsberg Paradox so well illustrates. When a situation has a probabilistic outcome but the probabilities are clearly known, as for instance in flipping a fair coin, people behave differently than in a situation in which they are not sure of the probabilities.
The standard axiomatic framework of rational choice cannot deal with such a situation. Of course, we may treat uncertainty by positing that we live in one of several "possible worlds," in each of which the probabilities are known. But then we must know the probabilities assigned to being in each of these worlds. If we know these probabilities, it is a simple exercise to calculate the meta-probabilities for our situation. This is absolutely nothing new. However, suppose we don’t know the probabilities for each of the possible worlds. Then we must posit a universe of higher-level worlds, in each of which the probability distribution over the lower-level worlds takes some determinate form. Once again, however, we can perform simple calculations (so-called "compound lottery" calculations) to get determinate probabilities ("risks") for our world. And so on, up the ladder of meta-possible-universes.
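To make the "compound lottery" calculation concrete, here is a minimal sketch in Python; the world probabilities and coin biases are purely illustrative numbers, not anything from the review:

```python
# Reducing a two-level "possible worlds" lottery to a single risk.
# Hypothetical setup: we may live in world A (probability 0.6), where
# a coin lands heads with probability 0.9, or in world B (probability
# 0.4), where it lands heads with probability 0.2.  The compound
# lottery collapses to one overall probability of heads.

worlds = {
    "A": {"p_world": 0.6, "p_heads": 0.9},
    "B": {"p_world": 0.4, "p_heads": 0.2},
}

# Law of total probability:
# P(heads) = sum over worlds of P(world) * P(heads | world)
p_heads = sum(w["p_world"] * w["p_heads"] for w in worlds.values())
print(p_heads)  # 0.6 * 0.9 + 0.4 * 0.2 = 0.62
```

This is exactly the step that can be iterated "up the ladder": if the 0.6/0.4 split were itself uncertain, one more layer of the same weighted sum would be needed, and so on.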
The central fact is that people do not engage in such infinitely recursive reasoning. This is not a failure of rationality, but rather a weakness of the whole recursive possible worlds framework, which is really a device for reducing uncertainty to risk.
What do people do when there is fundamental uncertainty? This is the question that has haunted me for many years. I do not claim to have a scientifically acceptable model of human behavior under uncertainty yet, but I’m looking around, for sure (Joni Mitchell once wrote "Everybody’s saying / Hell’s the hippest way to go / Well, I don’t think so / But I’ll take a look around it though"). Here is a brief summary of where I am at, and I invite commentary.
I expect such a model would exhibit the following properties, depending on the parameters of the situation. For situations in which success is public information, there would be a Nash equilibrium with some "experimenters" who try out new ideas and "traditionalists" who stick with the known and true until the success of the experimenters is sufficiently clear. There are papers in the literature that model this phenomenon: John Conlisk, "Optimization Cost", Journal of Economic Behavior and Organization 9 (1988):213-228; Robert Boyd and Peter J. Richerson, "Group Beneficial Norms Can Spread Rapidly in a Cultural Population", Journal of Theoretical Biology 215 (2002):287-296.
Within the same category of imitation, there can be "bandwagon" effects, as modeled in Sushil Bikhchandani, David Hirshleifer and Ivo Welch, "A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades", Journal of Political Economy 100 (1992):992-1026; B. Douglas Bernheim, "A Theory of Conformity", Journal of Political Economy 102 (1994):841-877; Abhijit V. Banerjee, "A Simple Model of Herd Behavior", Quarterly Journal of Economics 107 (1992):797-817. In these models, different social groups can settle on distinct decisions, and there is little tendency for groups to switch to the decision of another group because the distinct decisions increase the social distance between members of different groups, for a variety of reasons. This is stressed in a very nice paper: Joseph Henrich and Robert Boyd, "The Evolution of Conformist Transmission and the Emergence of Between-Group Differences", Evolution and Human Behavior 19 (1998):215-242. For this reason, several distinct religions can persist among spatially and socially distant groups, each holding firm to its own beliefs.
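The herding logic of these cascade models can be sketched with a deliberately simplified counting rule — a toy version in the spirit of Bikhchandani, Hirshleifer and Welch, not their full Bayesian model:

```python
# Toy informational cascade: each agent has a private binary signal
# (assumed informative, i.e. correct with probability above 1/2) and
# observes all predecessors' choices.  Once the lead of one action
# over the other reaches 2, the public evidence outweighs any single
# private signal, and everyone herds regardless of what they observe.

def choices(signals):
    """Return each agent's choice (1 = adopt, 0 = reject) in order."""
    actions = []
    for s in signals:
        lead = sum(1 if a == 1 else -1 for a in actions)
        if lead >= 2:        # cascade on "adopt": own signal ignored
            actions.append(1)
        elif lead <= -2:     # cascade on "reject": own signal ignored
            actions.append(0)
        else:                # public evidence inconclusive: follow signal
            actions.append(s)
    return actions

# Two early "adopt" signals start a cascade that later agents follow
# even though their own private signals all say "reject".
print(choices([1, 1, 0, 0, 0]))  # [1, 1, 1, 1, 1]
```

The same mechanism explains why two groups with different early histories can lock in on different conventions and stay there: once a cascade has formed, later private signals no longer move the group.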
A key element in the rational actor model is that of "Bayesian updating." What this means is that, with a given set of subjective priors, when new evidence comes in there is exactly one "rational" way to transform beliefs in light of the new information. Many beliefs, however bizarre, have little to fear from Bayesian updating, because there is virtually never new information that impacts on these beliefs. The intellectual/scientific problem is that individuals often update the credibility of the evidence in light of their beliefs rather than the other way around. For instance, when Christians discovered that the Earth was billions of years old, not the six thousand odd years portrayed in the Bible, and that humans are the product of Darwinian evolution, many revised their cosmologies and their scientific preconceptions (including the Catholic Church), while others mounted vicious attacks on the scientific community, calling them a group of atheists who were colluding with the Devil to thwart the will of God. In terms of the rational actor model, the strategy of the second group is no less Bayesian updating than that of the first.
There are also situations in which people are collectively unsure what to do, and where they must make a choice that will commit them collectively to a single decision. This is the setting for the emergence of ideological divisions and cultural politics so characteristic of the socialism/capitalism debates of the past, or the current global warming debates. »
One can quibble over certain details (and, indeed, some economists would not entirely agree with Gintis), but this description is a good starting point for anyone sincerely trying to understand how economists (most of them, at any rate) actually proceed.