Intelligence has always been a major and controversial issue for psychologists. There are three major areas of debate: its definition, its measurement, and its heritability (Weinberg, 1989). The title of this essay asks specifically about the measurement of intelligence, but this requires an investigation into the definition of intelligence used, because the definition chosen massively influences how intelligence might be measured. It also raises the question of whether intelligence can be measured at all. A common criticism of IQ tests is that they only show how good you are at IQ tests and do not reflect 'true' intelligence. The solution, therefore, is to understand intelligence better before we try to measure it, although this is by no means an easy task. Cicero was the first to use the term 'intelligentia', in an attempt to provide a Latin equivalent for a Greek philosophical term (Burt, 1955, p. 159).
Today there are many different definitions of intelligence, which shows that it means different things to different people. Intelligence, therefore, is a term that is vague yet flexible and has many characteristics (Roth, 1990). Nowadays it is widely accepted that intelligence is a 'general cognitive ability' (i.e. capacity), but this is still far too vague a definition to be useful in measuring it. Binet and Simon (1905) raised the issue that intelligence's generality is itself a problem: "Almost all the phenomena that occupy psychology are phenomena of intelligence… Should we put all of psychology in the tests?" (Binet and Simon, 1905; quoted in Wolf, 1973, p. 178). There are three major approaches to intelligence: the psychometric approach, the information-processing approach, and the developmental approach.
The psychometric approach, as the name implies, focuses on the measurement of intelligence. Psychometrics takes a practical approach to intelligence, but the definition it uses, "that which is measured by IQ tests", is flawed: it does not avoid the problem of defining intelligence, it merely displaces that problem onto the structure and type of test used. The information-processing approach is more complex than the psychometric approach: it enquires into the nature of intelligence and how it works, rather than attempting to measure it, and in doing so it is a more advanced and mature approach. Hebb (1949) divided intelligence into two categories that have proved useful: Intelligence A and Intelligence B. "Intelligence A" is that part of intelligence which is inherited (i.e. genetically coded for), "the biological underlay of all cognitive activities" (Weinberg, 1989).
"Intelligence B" is that part of intelligence which is learned. The usefulness of this insight is limited, however, because the two categories are functionally and intrinsically linked, making it extremely hard (if not impossible) to study, or test, one in isolation from the other. Another important question concerning the structure of intelligence is whether it is based on a single factor (monarchic) or multiple factors (oligarchic). Such structural definitions of intelligence severely affect the structure and scoring of the tests. Binet and Simon saw intelligence as monarchic. The statistical technique of factor analysis was first applied by Charles Spearman (1904) in an attempt to settle the debate between factor theories.
Spearman concluded that intelligence does have a general underlying factor (which he termed 'g') and that, on top of g, there are capacities specific to a particular task: 's'. Cattell's model of intelligence divided Spearman's g into two: fluid intelligence, the biological capacity to solve novel problems creatively; and crystallized intelligence, the learned capacity to solve knowledge-based problems. The phenomenon of idiots savants says something about the complexity of the structure of intelligence. Cyril Burt, Philip Vernon, and others in the 1940s and 1950s carried out research using factor analysis and concluded that Spearman's two-factor theory was too simplistic. Vernon developed a hierarchical model in the 1950s, which broke g down into many subcategories. This injects further complexity into the design of an IQ test that sums up all these subcategories in one score: it becomes apparent that a 'common-sense' approach to choosing test questions is far too simplistic a method.
Francis Galton constructed the world's first intelligence test. Galton concluded that hereditary factors are "overwhelmingly important", which is unsurprising in view of his biased support for the eugenics movement. Galton tried to examine innate (genetically inherited) intelligence using a selection of sensory discrimination tests. He himself found that his different tests of sensory discrimination did not correlate with one another, nor with other measures of intelligence such as scholastic achievement, nullifying his hypothesis. Even today, research into a measurable physiological basis of intelligence provides only weak and 'shaky' evidence (Stott, 1983). Binet and Simon developed the first usable intelligence test in 1905. They used a 'common-sense' approach in deciding what type of questions to use and chose a wide variety of tasks normally associated with intelligence. They then used standardization samples to establish test norms within age groups, giving rise to the term "mental age".
In 1912, Stern was the first to attempt to construct an intelligence quotient: one that reflects a person's mental age in relation to their real age. He therefore derived the formula IQ = (mental age / chronological age) × 100. This formula is flawed, however, because mental age plateaus in adulthood while chronological age continues to increase, so the quotient falls with age, wrongly implying that an adult's intelligence is declining. The most widely used IQ tests today are derived from David Wechsler's WISC (Wechsler Intelligence Scale for Children) and WAIS (Wechsler Adult Intelligence Scale). Wechsler found that if an intelligence test is given to a large sample of people, the scores resemble the normal distribution curve. He therefore deduced that it would be sensible to represent IQ scores as a degree of deviation from the mean, i.e. in units of standard deviation. A person's IQ is thus worked out by comparing their test results to the mean of their age group. Note that the earlier use of mental age relative to chronological age conflicts with the notion of intellectual potential and rejects the concept of the 'early developer' or 'late developer'.
The WISC manual lays down strict criteria for what counts as a correct answer to the test questions. This introduces an undesirable judgmental element for the examiner, who must decide whether a child's answer fits the criteria given. The strong white, Anglo-Saxon, middle-class bias of the WISC test questions is easily observed. The WISC contains questions such as "In what way are whisky and sherry alike?" and "What should your friend do if she loses one of your toys?", questions which are most definitely culture-specific. "Intelligence testing aims to obtain a quantitative measurement that expresses an individual's standing relative to others" (Ryan, 1972), but it seems that to some extent this comparison cannot be cross-cultural.
Attempts have been made to create culture-non-specific tests, yet it must be accepted that even the lowest common denominator, the test itself, is somewhat culture-specific. In some cultures the formal testing of mental abilities (verbal, visual, or written) is so uncommon that individuals from those cultures would likely find an IQ test a confusing, if not pointless, activity. It has also been found that IQ test bias can be as localized as a rural–urban boundary (Shimberg, 1929). For example, a question like "What is butter made from?" would be rural-biased, whereas "What are the colors of the American flag?" would be urban-biased. There are IQ tests that are more culture-fair than Wechsler's, for example Raven's Progressive Matrices, a visual, non-verbal test designed to measure abstract reasoning ability, and Cattell's 'Culture Fair' intelligence test. Although these are better, it is widely accepted that they are by no means entirely culture-fair.
So far this essay has critically reviewed the theory of intelligence testing, but not the actual methods of 'intelligence' measurement. Intelligence tests, like all mental tests, must be reliable, valid, and unbiased. The reliability of a test is "the consistency and stability with which it measures" (Roth, 1990). Certain techniques can be used to assess this reliability: "split-half" reliability, "parallel-form" reliability, and "test-retest" reliability. A test is valid if it measures "what it purports to measure" (Roth, 1990). With IQ tests, predictive validity is assessed by correlating IQ scores with levels of scholastic achievement, e.g. GCSE or A-level results, known as "criterion validation". There is also "congruent validation", another form of predictive validation, which involves correlating the test score with other test scores; this is useful for seeing whether a new test measures the same thing as existing tests.
As well as predictive validity there are "construct validity" and "face validity". A test has construct validity if its findings fit into the relevant theoretical construct; it has face validity if, on inspection, it merely appears to measure what it claims to measure. Pyle and Nuttall (1985) raise the important point that "to validate tests as measures of construct intelligence, we need a theory of intelligence that will predict how measures of intelligence relate to other kinds of construct (such as motivation)". The psychometric approach provides no such construct of intelligence. Where one is used (e.g. Vernon's hierarchical model), it is based on findings from factor analysis, so it relates only to relationships between tests of intellectual attributes, and not to other measures (e.g. of motivation) or to other kinds of evidence about intellectual functioning (adapted from Pyle & Nuttall, 1985).
In other words, one of the problems of the psychometric approach is that not enough attention has been paid to the construct validation of the tests beyond factor analysis of test scores (Pyle & Nuttall, 1985). Psychometrics has revealed very little about the nature of intelligence; conversely, it has enabled the development of some dangerous assumptions, the most important of which are mentioned below. Psychometrics assumes that the tests measure the capacity of a person's intelligence, whereas they actually measure only test-score achievement, which is not necessarily the same thing. Correlations with scholastic achievement show a strong relationship: Jensen (1980, 1981) puts the correlation coefficient between IQ and scholastic performance at between 0.5 and 0.8, and that between IQ and occupational status at 0.5 to 0.7. (Note that a correlation of 0.5 to 0.8 means the two measures share roughly 25% to 64% of their variance; it is not a probability that the relationship is significant.) Such correlations may be impressive, yet they do not imply direct causality.
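A correlation coefficient is often misread as a probability; in fact, squaring it gives the proportion of variance the two measures actually share, which is a much weaker claim. A quick check using Jensen's reported figures:

```python
# r squared (the "coefficient of determination") gives the
# proportion of variance two measures share; r itself is not
# a probability of anything.
for r in (0.5, 0.7, 0.8):
    shared = r * r
    print(f"r = {r}: r^2 = {shared:.2f} -> {shared:.0%} shared variance")
```

So even at the top of Jensen's range (r = 0.8), roughly a third of the variance in scholastic performance is unrelated to IQ score, and none of this arithmetic says anything about which variable, if either, causes the other.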
If low IQ scores are interpreted as measuring actual intellectual "capacity", the educational system is free to write off underachievers as not having the capacity to perform any better. Why should a low score on an IQ test be the cause of low scholastic achievement? Reading causality into a correlational pattern is a classic mistake in psychological research, and in this case it has had far-reaching implications in education and employee recruitment. Some researchers are extremely critical of intelligence tests, saying that all the tests really accomplish is to label youngsters, stigmatizing the ones who do not do well and creating in them an injurious self-fulfilling prophecy. There may be some truth in this view. It is also true that intelligence tests have in the past been used for purposes other than those intended, for example to lend scientific weight to xenophobic views.
A vicious-circle effect can be postulated: if at an early age teachers develop low, even negative, expectations of a child, then, compounded with inefficient 'streaming' (e.g. placing the child in a low-level class and labeling them as difficult), this could create negative self-fulfilling prophecies. Could this kind of cause–effect chain contribute to the correlations found between a child's IQ and their future achievements? After such an intense analysis of IQ testing, it may seem that the tests do more harm than good. Stott (1978) makes interesting comments on this subject: "If a low IQ tells us anything, it is that in certain fields of mental function the child is performing poorly. Once we have learned to resist the temptation to attribute this to 'low intelligence', the low IQ, or any other indication of poor performance, can become the starting point for an inquiry into the reasons therefor… Once freed from the concept of intelligence as an all-too-easy explanation, we can look beyond poor mental performance to discover how it has come about." (Stott, 1978, p. 19)
Even today we have no satisfactory definition of, or testing methods for, intelligence. It is now widely accepted that "intelligence" is an out-of-date term that is too vague to be of any technical use. We should not embrace the psychometric view that intelligence "is an explanatory construct of causality" (Roth, 1990), i.e. the view that a low level of 'intelligence' causes low IQ test results and low scholastic achievement. It is important to remember that intelligence is very much affected by learning (within the biological limits of the individual), and therefore to see intelligence as the effect of learning. This view would also help the plight of low-scoring children, because it allows us to "look beyond the poor mental performance and discover how it has come about" (Stott, 1978) and, so far as possible, to remedy the poor performance using teaching techniques such as 'task analysis'. It must be highlighted that the problem is mainly the way IQ scores are used, or 'wielded', in society (education, employment, etc.). The test results themselves, as explained above, do have useful applications (Cronbach, 1984).
However, the psychometric approach has two main weaknesses (Pyle & Nuttall, 1985). Firstly, there is a lack of construct validation of the tests other than factor analysis. Secondly, psychometrics widely ignores the processes underlying intelligent behavior. The information-processing approach therefore has a clear advantage over the psychometric approach, which fails even to address the 'how' question: how does intelligence work? This is where the information-processing approach comes into play. It would be very sensible to use an information-processing approach to decompose the term 'intelligence', by finding out the active processes involved in carrying out intelligent behavior and discarding the psychometric illusion of IQ objectivity (Siegler & Richards, 1982, p. 920; Ryan, 1972).
- Cronbach, L.J. (1984) "Essentials of Psychological Testing", Harper & Row.
- Eysenck, H.J. (1953) "Uses and Abuses of Psychology", Pelican Books.
- Nuttall, D. "Unit 11: The Nature of Intelligence", OU Press.
- Roth, I. (1990) "Introduction to Psychology", OU Press.
- Pyle, W. & Nuttall, D.L. (1985).
- Ryan, J. (1972) "IQ: the illusion of objectivity", in Richardson, K. (ed.) "Race, Culture and Intelligence".
- Stott, D.H. (1978) "Helping Children with Learning Difficulties", London: Ward Lock.
- Lecturer Name (6/1/97) ITP lecture 17, term 2: "Assessing People – What is Intelligence (1) and (2)" (no publisher).