Friday 29 May 2015

Temporary problem

Slight problem here at Psychological Comments, in that Windows Live Writer no longer finds Google Blogger compatible, and refuses to post up my material, sulkily saying "Not found, not found". Incidentally, this may be a good phrase to use when interlocutors use vacuous terminology.

So, I will be using the slow and not particularly pictorial route of blogger composition to reach you. In the meantime, I am reading a pre-publication copy of "Intelligence: All that matters" by Stuart Ritchie, which is very good.

Magic solutions gratefully received.


Thursday 28 May 2015

The US is good at advertising itself, and the soft power it projects is often greater than the hard power it unleashes at its foes. In most conflicts it is better to have Hollywood and Elvis Presley on your side than the US Army, because locals more readily queue to be entertained than incinerated, and tend to look more kindly on what is fashionable than on what is imposed upon them.

So great is US power that the colonies do not know that they have been colonialized (my spell checker rejected “colonialised”). In Wim Wenders’ “Im Lauf der Zeit” (Kings of the Road) the protagonist listens to:

Trailer for sale or rent, rooms to let, fifty cents.
No phone, no pool, no pets, I ain't got no cigarettes
Ah, but, two hours of pushin' broom
Buys an eight by twelve four-bit room
I'm a man of means by no means, king of the road.
(Roger Miller).

and then exclaims: “The Americans have colonized our subconscious”. It was said as much in admiration as complaint.

Colonials watch US movies, sing US songs, and view US news from the rich and confident No 1 nation. US tragedies are well reported, US accomplishments even more so. One of the many results is that racial matters in other countries are often seen through the prism of the American experience. Movies, books, and songs chronicle the Civil Rights Movement until it becomes an emblem even more accessible than the struggle against apartheid in South Africa. So great is this power that internal US discussions about the correct language of race quickly affect the English-speaking world, and the global English vocabulary is bound by internal US sensibilities and social theories.

Of course, it need not be so. There is another large country with a big population descended from African slaves and an entirely different racial history, one of much mixing and inter-marriage, which should attract our attention and understanding: Brazil. Brazilians describe races as: branco (white), pardo (brown), preto (black), amarelo (yellow), and indigenous. Brazil has as many “browns” as “whites”, and while that does not preclude all discriminatory practices, it is very different from the US. If the particular social history of the US is a major reason for lower African ability and achievement, then the ability and outcomes of Brazilian Africans should be distinctly better. Brazil is the test case that deserves further study. Any interest among researchers?

Meanwhile, back to the US. I have put together a list of race words to see how they have fared over the last two centuries. I first looked at “black” and “white”, and while these are general rather than racial descriptions, they are more frequent than other non-racial colour words (such as red, green, blue). In the end I thought it better to concentrate on what appear to be specifically racial words, and tracked them with Google Ngram.

 

[Ngram chart: frequency of racial terms over time]

Following Figueredo and Woodley (http://drjamesthompson.blogspot.co.uk/2015/04/in-beginning-was-word.html), I have looked up each word’s age, on the reasonable supposition that words which have been in use for a long time (first used long ago) have proved they serve a purpose, and will probably still be frequently used today unless some social change displaces them. In judging date of first use I have relied upon the Oxford English Dictionary.
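For anyone who wants to tinker with the same kind of bookkeeping, here is a minimal Python sketch of the idea: relative word frequency per year (the quantity Ngram Viewer plots) set against each word’s OED first-use date. The frequency values are invented placeholders for illustration, not real Ngram counts.

# Minimal sketch: word frequency per year alongside OED first-use dates.
# All frequency values are invented placeholders, not real Ngram counts.

FIRST_USE = {"negro": 1555, "colored": 1758, "racial": 1854, "racism": 1903}

# word -> {year: occurrences per million words} (toy numbers)
FREQ = {
    "negro":   {1900: 8.0, 1950: 12.0, 2000: 3.0},
    "colored": {1900: 15.0, 1950: 10.0, 2000: 4.0},
    "racial":  {1900: 1.0, 1950: 6.0, 2000: 9.0},
}

def word_age(word: str, in_year: int = 2000) -> int:
    """Years since first recorded use, per the OED date."""
    return in_year - FIRST_USE[word]

for word, series in FREQ.items():
    trend = series[2000] - series[1900]
    print(f"{word:8s} age={word_age(word):4d}y "
          f"freq(2000)={series[2000]:5.1f}/million  trend={trend:+.1f}")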

“Racial” (first used in 1854: “Such extremes have always been characteristic of the barbarous northmen, and we may not be wrong in referring to racial causes for the solution of this problem of contrarieties otherwise inexplicable”) as a concept begins to be visible in the 1880s and then oscillates upwards: important in the Second World War (forced integration through conscription?), the 1960s (Civil Rights Movement), and then a sharp rise to the millennium, with something of a fall since.

“Colored” (Coloured, first used 1758, “was adopted in the US by emancipated slaves as a term of racial pride after the end of the American Civil War. It was rapidly replaced from the late 1960s as a self-designation by black and later by African-American, although it is retained in the name of the National Association for the Advancement of Colored People. In Britain it was the accepted term for black, Asian, or mixed-race people until the 1960s”) was far more frequently used over a longer period. It shows the same peak in the Second World War, probably for the same reasons, and then falls out of favour.

“African American” (first used in 1835) is a relatively recent confection, and shows a very sharp rise in the 1980s.

“Racism” (first used 1903) begins to show up in the 1940s. “Genetics” (first used 1905) rises slowly, but is relatively little used.

“Negro” (first used 1555, meaning “black”) was the most established word in the 1800s, makes a rise in the American Civil War, rises again in the First World War, and then falls away to less usage, with a slight recent upturn.

“Nigger” (first used 1557, originally a neutral term meaning “from the river Niger”, but also nigor, nigre, nigar, meaning “black”) was in some use from the 1830s but was never particularly popular, and is now considered taboo, except when used by African Americans about themselves. This exception is similar to the notion that some Jewish jokes ought to be told only by Jews, and raises an interesting point about self-descriptors.

Why does racial self-description matter? As a matter of principle, some people may regard their football team as more important than their genetics. Nonetheless, racial appearance is obvious to all, and identifying with similar people comes naturally. Why all the changes in descriptors: negro/colored/black/African-American? The conventional explanation is that the old descriptors have become tarnished. Cassius Marcellus Clay said his name arose from slavery, and renamed himself Muhammad Ali. Why the stigma? Companies rebrand themselves after bad events. Malaysia Airlines (see my frequent posts on their missing plane) are about to relaunch themselves, naturally with a new name. For those who find their descriptors irksome, and worse, the change can be liberating. For other citizens it becomes something of a minefield: a slightly out-of-date racial descriptor can be read as an insensitive slur. A white British actor who companionably bemoaned the lack of good roles for colored actors was excoriated for his use of the word “colored” by critics who thereby missed the point of his lament.

The dynamic of self-description brings paradoxes: a person’s name is their own choice, but their racial description contains both polite and entirely factual elements. Individuals can choose to favour one set of ancestry over another when they describe themselves socially, but they cannot change their genome. Other people are allowed, or ought to be allowed, to describe the person as they see fit, because self-descriptions do not trump reality. Descriptors vary, but the genome carries the real result of past choices of mate.

What is the motivation for such frequent changes of descriptor? Traditionally such changes would indicate a desire to abandon former associations: a fugue from past characteristics and associates, like those who change their name by deed poll to begin again with a new self, not bound by parental naming choices, or to dump the stain of a criminal past. Understandable, but at a cost. I don’t do policy, but I think that too frequent a name change is counter-productive. Building a reputation requires a strong and consistent brand.

In sum, these word counts reveal the rise and fall of epithets, shibboleths, and euphemisms, like slivers of litmus paper indicating preferences, imaginings, self-perceptions and self-doubts, as well as the acidity or sweetness of social forces.

Chinese Question

As far as I can see, I have no readers in China. A colleague reports that one of the co-authors of the twins meta-analysis paper tried to get into my blog while in China, and found he could not do so.
Could those of you in China, or with colleagues in China, see if it is possible to get in?

Monday 25 May 2015

Tweeted Science

Razib Khan has written an interesting piece on the evolving norms of science: http://www.unz.com/gnxp/science-evolved-and-reimagined/

I like his argument, and want to add to it. Scientific publishing as we know it today was an invention. The original scientific communications were letters written between one scholar and another, sometimes entirely private, sometimes read out to select local groups of fellow scholars. Science books were published, but the exchange of letters provided the everyday links between researchers, akin to that other publishing innovation: conference proceedings. Societies were the first publishers of scientific letters, and Nature originally regarded all the communications they received as no better than letters.

So, emails and blogs would be familiar in content and intent to Newton and Leibniz, even if the new technology would intrigue them, mildly. In fact, academic journals strike me as being the temporary historical aberration: scholars writing for nothing because their careers depend on it, and then paying to read what they have written for free. Nice work if you can get it. Disintermediation is under way, and the days of the current journal publishers are numbered. I doubt that the new journals that replace them will be free, nor ought they to be, but they will probably make publications available at far lower cost, whilst paying a wage to those who do the necessary work of keeping the system going. We are going back to scholar talking to scholar, the way it was and ought to be.

Razib drew attention to a recent complaint from a scholar defending himself against imputations of procedural errors in his published paper, namely that his critic “broke the ‘social norms’ of science by initially posting the critique on Twitter”. Well, the scholar is right, or was right. Normally a critic would email the author in question, requesting explanations and further data. Then, in the traditional mode, the critic would eventually submit the written critical comment to a journal, usually the one that had published the original article, with an advance copy to the author as a courtesy. It would be sent for review, and might eventually appear long after the original paper had been published. The author would then reply, at leisure. In this light, a Tweet seems vulgar, premature, curt and peremptory. However, it very quickly alerts scholars to a potential error, which is a public service. Mistakes can be corrected quickly, and the search for the truth nimbly proceeds. Tweets are the vanguard of public science, science journals the rickety stagecoach carrying out-of-date news to a public who have moved on to more interesting things.

Reading Razib’s comments I found myself a little alarmed, and to my considerable surprise, slightly hurt, when he correctly observes of Twitter: “It’s a public firm which is traded on the stock market and exists to make a profit and return value to its shareholders. There was a time when AOL, or Myspace, were ubiquitous corners of the internet. Though Twitter allows for a level of disintermediation, to some extent it is a stealth intermediary in and of itself.”

I am a great fan of Twitter, though I hold no stock in the company, and see it as person to person communication, even as it becomes more commercial. I like the immediacy with which I receive news from conferences, alerts about upcoming publications, suggested readings from other researchers and a vibrant sense of the science community. I also get many disparate and sometimes entirely discordant views, which are instructive. Opinions are various, and although I believe I could do without many of them, it shows me how hard it is to convince people of things. A challenge for anyone who has ever wanted to explain anything.

Twitter has great limitations, and is often the cause of misunderstanding, sometimes of unintended rudeness, but I think the good far outweighs the bad. Not all thoughts are improved by verbosity. Brevity is the soul of thought. Twitter is fast, still cheap, and can have enormous impact across the globe. The immediacy of ideas is its forte.  I go so far as to say that it is one of the mechanisms transforming the communication of science.

(Please re-tweet this immediately).

Addendum

In my recent post about the reception given to the paper on fifty years of twin studies I referred to the Match statistics regarding requests as at 19 May, noting that my interest in the paper was shared by some 23,000 others. The following day requests rose to 42,853, and total requests so far amount to 94,118. I am not aware of the usual level for such statistics, but given that published papers, on average, are read by about 20 people, these figures seem on the high side, indicating considerable interest.

 

[Chart: paper request statistics]

“Gone with the Wind” keeps blowing

It is said that every political candidate ends up convinced that they are going to win the election, so long as big crowds turn out to hear them speak wherever they go. The delusion of popularity is hard to avoid: can all those adoring crowds be wrong? Adoration is not given to many of us, so the effects are as powerful on politicians as they are on pop stars. Turning out to see a politician takes some effort on the part of the electorate, but often denotes no more than curiosity. Celebrity attracts attention. Actual voting is another matter. For those reasons, plus the distorting effects of constantly reassuring staff, flattering hangers-on and compliant advisors looking to keep up the candidate’s spirits to the very end, most candidates end up deluded, which is good for confidence, but makes for a rude awakening the morning after the count.

So, it is highly likely that I am now under the delusion that people in general are beginning to get used to the findings that many of their behaviours and attitudes have a substantial heritable component. The reason for my strongly held delusion is that over the two and a half years of writing this blog I have developed a keen sense of the importance of page views. They are to me what adoring crowds are to pop stars: a measure of success. As part of my delusional state, I assumed that my loyal readers would be interested in the papers arising from the London Conference on Intelligence, and they were. AJ Figueredo’s paper on the diminishing use of altruistic words has gained 330 readers so far. The argument is subtle, the implications important, the reader response contained and modest, as you would expect from my distinguished readers. Ed Dutton, using the clever stratagem of showing a striking photo of a muscle-bound hairy warrior on his paper about androgens, drew 518 readers, many of whom, like me, probably resolved to spend more time lifting heavy weights. Dalibor Jurasek’s paper on Roma intelligence had even more chance of drawing attention: with its frighteningly low intelligence figures for those who married within the Roma, and higher scores for those who became assimilated, either through flight or inter-marriage with locals, it drew 782 avid readers.

However, as the screen grab today shows, like its namesake before it, the twin-studies meta-analysis has blown all before it. Readers of this blog will understand the issue of metrics: if a blog post does spectacularly well in 4 days, should the comparison be with other posts in a week, or a month, or all time? The first is easily defended as the most appropriate, the second (as used in my comments above) puts the achievement into contemporary context, the last denotes its overall significance in my small universe.

[Screen grabs: blog page-view statistics]

I know that Steve Sailer gave it a good write-up, and he is the miglior fabbro of seditious empiricism, with stadia of followers. Nonetheless, he has noted several of my posts, such that I can compute the “Sailer effect” on my little blog, and large as that effect always is, this particular effect is larger. So now I am ensnared by another delusion: perhaps more people are interested in, ready to read about, and minded to acknowledge the power of ancestry. Stepping outside my cocoon, Danielle Posthuma tells me that the paper received 23k requests on the Match website by 19 May, so it seems my delusion is not a “folie à deux” but a “folie à 23,000”.

Perhaps times are changing and we have got somewhere beyond talking to ourselves.

Saturday 23 May 2015

“Gone with the Wind” strikes a chord

[Screen grab: blog page-view statistics]

After only 3 days, the post on the twin studies meta-analysis seems to have found an audience.

Wednesday 20 May 2015

Gone with the Wind

 

Every now and then a blockbuster paper comes along which, like the top-grossing 1939 movie “Gone with the Wind” ($1,640,602,400), carries all before it. It may be that “Meta-analysis of the heritability of human traits based on fifty years of twin studies” by Tinca Polderman, Beben Benyamin, Christiaan de Leeuw, Patrick Sullivan, Arjen van Bochoven, Peter Visscher and Danielle Posthuma in Nature Genetics is such a paper. Although the title is not very snappy, it is a relief to read a genetics paper whose author list runs to fewer than a thousand. This Magnificent Seven have ploughed through fifty years of twin studies, assembling 14,558,903 partly dependent twin pairs drawn from 2,748 publications. After all these monumental labours, perhaps their publication may yet become known as the “Gone with the Wind” paper, because it blows away much confusion, much prevarication and much obfuscation about twin studies.

Nature Genetics (2015) doi:10.1038/ng.3285

https://drive.google.com/file/d/0B3c4TxciNeJZaC1rNks5ajQ3VkE/view?usp=sharing

The authors find that estimates of heritability cluster strongly within functional domains, and across all traits the reported heritability is 49%. For a majority (69%) of traits, the observed twin correlations are consistent with a simple and parsimonious model where twin resemblance is solely due to additive genetic variation. The data are inconsistent with substantial influences from shared environment or non-additive genetic variation. (This finding has been under-reported).

All human traits contain a substantial heritable element. The blank slate is totally false. If you have colleagues who doubt the twin method or who have difficulty accepting the power of ancestry, shall I repeat for them Rhett Butler’s last words to Scarlett O'Hara right now, or is it better that I tell you a little more about the findings?

I expect you have an interest in the results on cognition, so rest easy: heritability is high, though not as strong as for skeletal, metabolic, ophthalmological, dermatological, respiratory, and neurological traits. Usually there is a big difference (top line of figures) between the high correlations for monozygotic twins and the lower correlations for dizygotic twins, showing a strong genetic effect. The exception is social values, in which the environment makes a bigger contribution than usual, though not quite as big as heredity.

 

[Figure: MZ and DZ twin correlations by trait domain]

 

 

Cognitive traits correlate 0.646 in identical twins and 0.371 in fraternal twins, with minuscule error terms of .01 in these enormous samples. An additive model seems appropriate for cognition.
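As a rough check on that last sentence, here is a back-of-envelope Falconer decomposition from those two correlations, written as a small Python sketch. It is only the classical shortcut (h² = 2(rMZ − rDZ), c² = 2rDZ − rMZ, e² = 1 − rMZ), not the formal model-fitting the authors actually report, but it lands in much the same place.

# Classical Falconer (ACE) shortcut from the reported twin correlations for
# cognition: r_MZ = 0.646, r_DZ = 0.371. Not the paper's own model-fitting.

def falconer(r_mz: float, r_dz: float):
    h2 = 2 * (r_mz - r_dz)  # additive genetic variance (A)
    c2 = 2 * r_dz - r_mz    # shared environment (C); near zero when r_MZ ~ 2*r_DZ
    e2 = 1 - r_mz           # non-shared environment plus measurement error (E)
    return h2, c2, e2

h2, c2, e2 = falconer(0.646, 0.371)
print(f"h2 ~ {h2:.2f}, c2 ~ {c2:.2f}, e2 ~ {e2:.2f}")
# h2 ~ 0.55, c2 ~ 0.10, e2 ~ 0.35: largely additive, with modest shared environment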

They conclude: Our results provide compelling evidence that all human traits are heritable: not one trait had a weighted heritability estimate of zero. The relative influences of genes and environment are not randomly distributed across all traits but cluster in functional domains. In general, we showed that reported estimates of variance components from model-fitting can underestimate the true trait heritability, when compared with heritability based on twin correlations. Roughly two-thirds of traits show a pattern of monozygotic and dizygotic twin correlations that is consistent with a simple model whereby trait resemblance is solely due to additive genetic variation. This implies that, for the majority of complex traits, causal genetic variants can be detected using a simple additive genetic model.

Approximately one-third of traits did not follow the simple pattern of a twofold ratio of monozygotic to dizygotic correlations. For these traits, a simple additive genetic model does not sufficiently describe the population variance. An incorrect assumption about narrow-sense heritability (the proportion of total phenotypic variation due to additive genetic variation) can lead to a mismatch between the results from gene-finding studies and previous expectations. If the pattern of twin correlations is consistent with a substantial contribution from shared environmental factors, as we find for conduct disorders, religion and spirituality, and education, then gene-mapping studies may yield disappointing results. If the cause of departure from a simple additive genetic model is the existence of non-additive genetic variation, as is, for example, suggested by the average twin correlations for recurrent depressive disorder, hyperkinetic disorders and atopic dermatitis, then it may be tempting to fit non-additive models in gene-mapping studies (for example, GWAS or sequencing studies). However, the statistical power of such scans is extremely low owing to the many non-additive models that can be fitted (for example, within-locus dominance versus between-locus additive-by-additive effects) and the penalty incurred by multiple testing. Our current results signal traits for which an additive model cannot be assumed. For most of these traits, dizygotic twin correlations are higher than half the monozygotic twin correlations, suggesting that shared environmental effects are causing the deviation from a simple additive genetic model. Yet, data from twin pairs only do not provide sufficient information to resolve the actual causes of deviation from a simple additive genetic model. More detailed studies may identify the likely causes of such deviation and may as such uncover epidemiological or biological factors that drive family resemblance. To make stronger inferences about the causes underlying resemblance between relatives for traits that deviate from the additive genetic model, additional data are required, for example, from large population samples with extensive phenotypic and DNA sequence information, detailed measures of environmental exposures and larger pedigrees including non-twin relationships.

By all standards of academic debate, this is the mother of “F*** Off” samples, which should lay low five decades of quibbling about the twin method. Author Beben Benyamin has given reassuring interviews saying that it is not a case of “genetics versus environment” but genetics with environments (which was always understood by researchers). The genetic component is increasingly understood, the environmental component remains vague, with ad hoc speculations about shared variance which are usually not validated.  Despite this massive paper being reported in The Guardian (though without drawing attention to the findings on cognitive ability) I fear that many journalists, commentators and researchers, in the attributed words of a British Trade Union leader turning down a management offer he considered too low, will treat it “with a complete and utter ignoral”.

Where do the heredity sceptics go now?

"Frankly, my dear, I don't give a damn"

Monday 18 May 2015

Are academics open to hypotheses?

I once attended a garden party and found myself next to Kate Adie, a notable BBC reporter who had covered the Iranian Embassy siege in London, and then subsequently a wide range of international affairs, mostly in war zones.

At this particular time the big domestic story concerned a group of 121 children in Cleveland, UK, who had been diagnosed as having suffered sexual abuse at the hands of their parents or relatives. The diagnosis was based on a particular, non-standard procedure called the “anal dilation test”. Opinion was split between those who doubted that so many parents and care-givers would bugger their children, and those who felt that child abuse was widespread and unacknowledged. The local authority believed the test was sound, removed the children from their parents, and put them into social care. A purely scientific approach would have been to be open to the hypothesis that child abuse might be more prevalent than previously estimated, and then to look at the supporting evidence in a critical light. What was particular about the case was the extremely elevated rate of anal intercourse implied by the test as administered by two paediatricians. Anyway, the parents eventually obtained a court judgement that the children could come home.

The BBC, Kate Adie said, had just put this as the lead item on the news as “Children re-united with their parents”. I thought about this, and it seemed a perfectly fair description to me. “Although I am not on duty,” Kate continued, “I rang up the news desk to correct them.” I was still bewildered, but kept listening. “The News should have said ‘The children were returned to their parents’.” Finally, the penny dropped. We would not have said “The children were reunited with the paedophiles who had been abusing them”. “Returned” was factual; “re-united” implied a happy family, and that it had been wrong to take the children from their families.

Leaving aside the later knowledge that the supposed diagnostic test was deficient, this revealed to me the power of unexamined assumptions. Bias is most powerful when it seems reasonable and universal. In social science most researchers and most of the student audience share particular assumptions about society, about causal variables and even about values. In American academia, the most influential in the world, about 80% are of liberal persuasion. From a high-minded perspective this should not matter, because methods should be strong enough to counter all assumptions. If researchers are open to all hypotheses, then all explanatory possibilities will be examined with equal rigour and rectitude. However, that is not always the case.

Much of the literature on maternal deprivation made light of possible genetic confounders. Some of the developmental literature avoids genetic group comparisons. As far as I know, no one is repeating Dan Freedman’s observational work on neonate behaviour, showing profound differences in behavioural reactions in the first days of life.

https://www.youtube.com/watch?v=eHeSlMui-2k

Friday 15 May 2015

The (diminishing) kindness of strangers?

 

Speaking of the moral sense and the social virtues of humans and other animals, and discussing social and moral faculties specifically in the context of human social evolution, Darwin used a number of words to describe noble sentiments: aid, courage, duty, fidelity, heroism, kindness, obedience, patriotism, self-sacrifice and sympathy. Assuming that people retain their altruism, one would expect them to continue to use these words in their written texts: books, newspapers, articles and the like. The words are well established in the language, and not the fruit of recent and passing fashions, so if strangers still value kindness, then the word “kindness” should continue to be used in written language. Are these altruistic words still used?

AJ Figueredo and colleagues have dipped into Google Ngram to test the hypothesis that between 1850 and 2000 “eminence”, a relatively rare combination of genius and altruism, was selected for in the process of inter-group competition and selected against in the concurrent process of inter-individual competition. They propose a multilevel selection model in which they expect to find associations at the aggregate level between higher levels of cognitive intellectual abilities and natural behavioural dispositions that are costly to the self but beneficial to others, in other words “altruistic” as defined in evolutionary theory.

Henry Harpending observes that Figueredo and Woodley propose that selection on intelligence within and between human groups works in opposing directions. Between group competition, especially in the context of hard times, favours groups with high genotypic IQ, especially innovators with IQs beyond 140 or so. Easier times relax this selection within groups leading to genotype IQ decline.
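To see why that “beyond 140 or so” tail is so sensitive to what happens to the mean, here is a quick illustration assuming a normal distribution with SD 15; the alternative means are arbitrary, chosen only to show the scale of the effect.

# Share of a normal(mean, SD = 15) population above IQ 140, for a few means.
# Illustrates how a modest fall in the mean shrinks the far-right tail.
from statistics import NormalDist

def share_above(threshold: float, mean: float, sd: float = 15.0) -> float:
    return 1.0 - NormalDist(mean, sd).cdf(threshold)

for mean in (100, 97, 94):
    print(f"mean {mean}: {share_above(140, mean):.3%} above IQ 140")
# A six-point drop in the mean cuts the over-140 fraction more than threefold.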

The multi-level model distinguishes between words (or tasks) which are “Hard”, theoretically indicating heritable general mental ability (g.h), and words (or tasks) which are “Easy”, theoretically indicating environmentally influenced specialized mental abilities (s.e).

 

[Figure: multilevel selection model]

The dysgenic hypothesis proposes that after the Great Exhibition in 1851 the age-old rule of brighter and wealthier parents having more surviving children was reversed. Will this result in less altruism, less general intelligence, and the rise of environmentally driven specialised abilities? So it would seem:

[Ngram charts: declining use of altruistic words]

Read the whole thing here:

https://drive.google.com/file/d/0B3c4TxciNeJZWkViMjNvSWZEcHc/view?usp=sharing

Thursday 14 May 2015

Marginal tribes, disparate outcomes

There are two genetic/cultural groups who constitute minorities in Europe. For various reasons they were kept at the margins of society, made liable to restrictions in terms of the trades they could follow, and often hounded and maltreated. In the 20th Century both were slaughtered by German National Socialists.

One of those tribes is well known and highly accomplished, the other less well studied, and with far fewer accomplishments. The contrast is striking. However, explaining that difference must wait for another day. At this point we need to get a better understanding of the abilities of the less well studied tribe, particularly since studies on their abilities are often not published in English, the de facto language of Google science.

I have covered the topic before,

http://drjamesthompson.blogspot.co.uk/2014/11/gypsy-intelligence.html

but the London Conference paper brings things more up to date, and carries out a formal meta-analysis.


 

Dalibor Jurasek has found 38 usable datasets from 7 countries, resulting in a total N = 4468. He concludes that the best estimate of Roma intelligence is IQ 74. This is extremely low. There does not seem to be any verbal bias in the tests; there is weak or no Flynn effect; Spearman’s rule seems to hold.
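For readers curious about the arithmetic, the simplest way to pool study means of this kind is a sample-size-weighted average, sketched below in Python; the study labels, means and Ns are invented placeholders, not Jurasek’s actual 38 datasets.

# Sample-size-weighted pooled mean IQ across datasets (the simplest
# meta-analytic combination). All values below are invented placeholders.

studies = [
    # (label, mean IQ, N)
    ("dataset_A", 72.0, 300),
    ("dataset_B", 78.0, 150),
    ("dataset_C", 70.5, 500),
]

total_n = sum(n for _, _, n in studies)
pooled = sum(iq * n for _, iq, n in studies) / total_n
print(f"total N = {total_n}, weighted mean IQ = {pooled:.1f}")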

Below you will find his Powerpoint presentation and, in a move which I hope many speakers will follow, his speaker notes, which give further particulars.

https://drive.google.com/file/d/0B3c4TxciNeJZRDhXZ3k3TzFoczA/view?usp=sharing

Now we can start working on why the Roma and the Ashkenazi differ in their accomplishments.

Wednesday 13 May 2015

Hairs on your chest: Androgens and intellect

 

 

[Image: a muscle-bound hairy warrior, as used in Ed Dutton's presentation]

Continuing the series on selected papers from the 2015 London Conference on Intelligence, I am testing the power of an arresting image, as used by Ed Dutton in his presentation, “Population Differences in Androgen Levels: A Test of the Differential K Theory”.

I should point out that the fine figure of a man above was neither a speaker nor a guest at the conference, but hair is an indicator of hormones, so examine the hairiness of the back of your middle finger before reading any more.

All species face a dilemma: is it better to have very many offspring and hope that some survive, or to have a few and work hard to ensure that they survive? The first strategy involves lots of sex and not much parental involvement; the second less sex and much more parental investment. The reproductive (r) strategy leads to fast, “live for the moment” lives; the Konservative (K) strategy to slower, “live for tomorrow” lives.

Dutton argues that most of the data fit well with the r-K continuum, but points out anomalies regarding body hair, which against prediction is greater in European groups, possibly as an adaptation to mildly colder climates.

Read it all here:

https://drive.google.com/file/d/0B3c4TxciNeJZdmJ1MkwyNW1KNFk/view?usp=sharing

Monday 11 May 2015

Bringing Intelligence to Life (Today 5 pm)

 

 

OK, Dancing Mice won’t be playing at the Darwin Lecture Theatre tonight, but that is only because their lead singer, rockstar Ian Deary, will be giving the 2015 Jonckheere Lecture.

The 2015 A. R. Jonckheere Lecture, entitled 'Bringing Intelligence to Life', will be delivered by Professor Ian Deary (University of Edinburgh) on Monday 11th May at 5pm in the UCL Darwin Lecture Theatre, followed by a reception in the South Cloisters.


Professor Deary is one of the foremost international experts on cognitive ageing and is the Director of the University of Edinburgh's Centre for Cognitive Ageing and Cognitive Epidemiology. His books include 'Looking Down on Human Intelligence: From Psychometrics to the Brain' (2000), which won the British Psychological Society's Book Award in 2002, and (with Whalley and Starr) 'A Lifetime of Intelligence' (2009). The Lecture is hosted by the UCL Division of Psychology and Language Sciences.

The Lecture is free of charge and open to all staff, students and alumni.

Strictly speaking, this is a UCL event, but the thirst for knowledge is the beginning of wisdom, so in the unlikely event that anyone in the audience wonders what department you were in, say you were invited by James Thompson, Honorary Senior Lecturer in the Dept of Psychology and Language Sciences. They will probably look at you totally blankly, so just lower your voice and mutter “Psychological Comments”. That should suffice.

Sunday 10 May 2015

London Conference on Intelligence 2015 Keynote

Never let it be said that our conference lacked ambition. There were two keynote talks. Michael Woodley gave a detailed exposition of his recent paper “By their words ye shall know them” but I had already covered that a few days ago. Heiner Rindermann launched a Gesamtkunstwerk entitled “Evolution versus Culture in international intelligence differences”. This tour de force included a summary of the hereditarian hypotheses, where in Heiner’s view the evidence was only indirect,  and also a very good exposition of religion as a cultural force.

https://drive.google.com/file/d/0B3c4TxciNeJZaEVNMFQ5RWxSSlE/view?usp=sharing

Wednesday 6 May 2015

Intelligence conferences test intelligence

You will probably be aware that getting myself to a conference tests my intelligence to the limit. Calculating how to coordinate the flights with the hire car and the hotel booking creates great consternation. The reverse process is just as challenging, particularly estimating when I will be handing in the car (usually best done before the departing flight takes off). The complexities of passports, currencies, tickets all conspire to confuse me, more so when different time zones come into play.

In contrast, staying in one’s home city and organising a conference should be easy. Language, currency and time zone are all set to one’s advantage. However, new tasks arise for the organiser, such as coordinating late arrivals with speakers’ timetables, organising the programme so that it looks as if it has a structure, and finding suitable eating and drinking venues for hard-working, hard-thinking colleagues without bankrupting less well paid young researchers.

Hence, I have not been able to multi-task and attend to the blog whilst also making conference arrangements, so I will be quiet until the event starts, and then I will send you a few comments on each paper, together with the abstracts. Mind you, I am likely to get distracted by the papers themselves, and multi-tasking by blogging could be too much for me, so perhaps my silence will be even more prolonged.

Finally, given that the United Kingdom is having a general election tomorrow, I thought that I would use the opportunity to link that event to some aspect of intelligence. That has proved hard.