Standardized Testing: IQ Tests (Intelligence Quotient)

An intelligence quotient, or IQ, is a score derived from one of several different standardized tests that attempt to measure intelligence. The term "IQ," from the German Intelligenz-Quotient, was coined by the German psychologist William Stern in 1912 as a proposed method of scoring children's intelligence tests such as those developed by Alfred Binet and Théodore Simon in the early 20th century.[1] Although the term "IQ" is still in common use, the scoring of modern IQ tests such as the Wechsler Adult Intelligence Scale is now based on a projection of the subject's measured rank onto the Gaussian bell curve with a center value (average IQ) of 100 and a standard deviation of 15, although different tests may use different standard deviations.
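
As an illustration of this rank-based ("deviation IQ") scoring, the following sketch maps a percentile rank onto a scale with mean 100 and standard deviation 15; it is a minimal example of the general idea, not the norming procedure of any specific published test:

from scipy.stats import norm

def iq_from_percentile(percentile, mean=100.0, sd=15.0):
    # Convert a percentile rank (0-100) within the norming sample to a deviation IQ.
    # Illustrative only; actual tests derive their norms from standardization samples.
    z = norm.ppf(percentile / 100.0)  # standard-normal quantile for that rank
    return mean + sd * z

print(round(iq_from_percentile(50)))    # 100: the median of the norming sample
print(round(iq_from_percentile(97.7)))  # about 130: roughly two standard deviations above the mean

For example, a test-taker who outscores about 98 percent of the norming sample receives an IQ of roughly 130, two standard deviations above the mean of 100.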

IQ scores have been shown to be associated with factors such as morbidity and mortality, parental social status, and, to a substantial degree, parental IQ. Although the heritability of IQ has been investigated for nearly a century, controversy remains over how much of it is heritable, and the mechanisms of inheritance are still a matter of some debate.

IQ scores are used in many contexts: as predictors of educational achievement or special educational needs, by social scientists who study the distribution of IQ scores in populations and the relationships between IQ scores and other variables, and as predictors of job performance and income.

Average IQ scores in many populations have been rising at a rate of roughly three points per decade since the early 20th century, with most of the increase in the lower half of the IQ range, a phenomenon called the Flynn effect. It is disputed whether these changes in scores reflect real gains in intellectual ability or merely methodological problems with past or present testing.
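
As a rough back-of-the-envelope illustration of the magnitude involved (using only the three-points-per-decade figure above, not data from any particular study):

POINTS_PER_DECADE = 3.0  # approximate Flynn-effect rate cited above
SD = 15.0                # standard deviation of the modern IQ scale

def cumulative_gain(years, rate=POINTS_PER_DECADE):
    # Total IQ-point rise implied by a constant rate sustained over a given number of years.
    return rate * years / 10.0

gain = cumulative_gain(80)  # e.g. an 80-year span
print(gain, gain / SD)      # 24.0 points, i.e. about 1.6 standard deviations

A constant three-point-per-decade rise sustained over 80 years would thus amount to roughly 24 IQ points, more than one and a half standard deviations on the modern scale.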