Sunday, 25 August 2013

Gigerenzer and psychological theorizing


“Several years ago, I spent a day and a night in a library reading through issues of the
Journal of Experimental Psychology from the 1920s and 1930s. This was professionally a most depressing experience. Not because these articles were methodologically mediocre. On the contrary, many of them make today’s research pale in comparison to their diversity of methods and statistics, their detailed reporting of single-case data rather than mere averages, and their careful selection of trained subjects. And many topics—such as the influence of the gender of the experimenter
on the performance of the participants—were of interest then as now. What depressed me was that almost all of this work is forgotten; it does not seem to have left a trace in the collective memory of our profession. It struck me that most of it involved collecting data without substantive theory. Data without theory are like a baby without a parent: their life expectancy is low.”

It can rarely be said of a psychologist that everything they write is worth reading; Gigerenzer is one of the rare exceptions. He writes in plain English (presumably his second language) and understands his material so thoroughly that he can explain it simply, the sign of an intelligent and honest teacher. His clarity means you can follow this heuristic that makes you smart: if you cannot understand him the first time round, it is worth reading him several times until you do. With lesser writers, if you cannot understand them the first time, turn elsewhere.

Gigerenzer's lament is about the paucity of solid theories in psychology. He complains that there are only surrogates: one-word explanations (vague, unspecified references to something like "similarity", offered without any definition or metrics, even when such things are available); redescriptions (such as opium making you sleepy because of its dormitive properties); muddy dichotomies (pointless battles between overlapping categories); and data fitting (exquisite mathematical models which re-describe the findings but cannot explain them).

In defence of the beleaguered dabblers in mental philosophy, it could be argued that one has to start somewhere. Data fitting may at least show you where the main currents are in the stream, even if one has no theory of fluid dynamics to explain why there should be currents in the first place.

So, where next? It is no good calling for more pointless theories and grand, delusional castles in the sky. Perhaps we should concentrate on some basic problems.

For example, what makes problems difficult? To make that a little easier, what makes one simple problem slightly more difficult than another simple problem? Other than it having the quality of being slightly more difficult?

No time limit.


  1. If we choose evolutionary psychology as our theory, then we probably accept the hypothesis that the mind is not a holistic "self" but rather a hodgepodge of highly specialized modules, each with its own genetic survival in mind. This specialization suggests that some classes of problems will be easier for me if my ancestors bequeathed me a module well adapted to those classes of problems. If this is true, then the module must have been adaptive according to the principles of selection; that is, it must historically have been adaptive to solve the very problems for which the module is specialized.

    In short: evopsyche predicts that problems will be easier to solve if our ancestors found it adaptive to evolve specialized circuitry for them (or potential circuitry).

    I am not saying I believe this, only posing it as a possible answer.

  2. Gigerenzer's book on risk is tremendous.

  3. Thanks for your comment. Although it seems feasible that the mind should be made up of successful modules, it is hard to see how this could be tested against the observation that human skills show a positive manifold, which leads one to presume that a central processor deals with many of life's problems. Indeed, unless that were the case we would be stumped by novel problems, and would die. So far, this has not happened. We number in the billions, even though we are coping with problems unknown to our species even one or two generations ago. So, whilst I am sure that some problem forms are easier than others in a general sense, the evolutionary hypothesis does not really help me understand why, not in any detail anyway. I was wondering how one specifies the complexity of even simple tasks. I will try to post on that sometime soon.

    1. That's not even paradoxical. A general processor uses different circuitry for different opcodes, even though these opcodes can be combined to solve many novel problems. This is why I referred to "classes" of problems.

      A better gaming computer often has a GPU in addition to a CPU. This will show up as a slight general superiority in handling the operating system, and a greater superiority in specialized classes of problems.

  4. Interesting points. I think there is a paradox. I assume you have seen my earlier post.
    Perhaps we have to move from analogies to contemporary brain data (of good quality). P-FIT is the front runner among brain correlates of intelligence, and it seems general.

    1. I've read it, although I had to brush up. (Was that really last year?)

      The positive manifold is impossible to deny, but it supports both the "tree" model and the CPU/GPU model, if not the nose cone. I still don't believe the point I'm arguing, but I am enjoying the exchange nonetheless.

      For completeness, I'll note that my opinion is that human variation in intelligence is due to various levels of retardation from a previous, more ideal design. So we all have an extraordinarily competent general processor (and support system), but they are mostly malfunctioning with the exception of a few geniuses whose creativity and rational abilities never seem to fail them (Newton, Mozart and company).

      But I suppose that marks me as one of those kooks, doesn't it? I'll just let myself out :-).

  5. Thanks. When was the golden age of intelligence? Charles Murray has some suggestions in Human Accomplishment, but it is hard to be sure about such estimates, alluring though they may be. On the previous point, that the positive manifold is consistent with g and with the modular approach, I suggest this is a loose redefinition of what is usually meant by modules.

    1. I apologize. I promise that this redefinition was due to incompetence and not malice.

    2. I make that apology without qualification. It would be worse to harm the good faith of an intellectual community than to omit my opinions (however well-intentioned).

      If that is understood, then I believe I may continue with your blessing.

      I would be intellectually remiss if I did not also mention that both conceptions of modularity initially seem to fit the evopsyche hypothesis. On my end, I failed to distinguish between the two because I did not realize the definition is a highly specific one. So until we can rediscover some vocabulary that will suit, I will refer to "modules" within the CPU/GPU model as "chipsets" and "modules" within the orthodox understanding (which correspond more closely to peripherals) as modules.

      After this agonizing detour, I'll try to address your original question within the chipset model. As for modules, I consider the question settled. You asked a question, I offered a possible answer, and you properly drubbed it as an unlikely explanation.