An intelligence quotient (IQ) is a total score derived from several standardized tests designed to assess human intelligence.
The abbreviation "IQ" was coined by the psychologist William Stern for the German term Intelligenzquotient, his term for a scoring method for intelligence tests at the University of Wrocław that he advocated in a 1912 book.
Historically, IQ was a score obtained by dividing a person's mental age score, obtained by administering an intelligence test, by the person's chronological age, both expressed in terms of years and months. The resulting fraction (the quotient) was multiplied by 100 to obtain the IQ score. In modern IQ tests, the median raw score of the norming sample is defined as IQ 100, and each standard deviation (SD) above or below the median is defined as 15 IQ points greater or less, although this was not always so historically. By this definition, approximately two-thirds of the population scores between IQ 85 and IQ 115. About 5 percent of the population scores above 125, and 5 percent below 75.
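The two definitions above, the historical ratio IQ and the modern deviation IQ, can be sketched in a few lines of Python using the standard library's normal distribution; the function names here are illustrative, not part of any testing standard:

```python
from statistics import NormalDist

def ratio_iq(mental_age_months: float, chronological_age_months: float) -> float:
    """Historical ratio IQ: mental age divided by chronological age, times 100."""
    return 100 * mental_age_months / chronological_age_months

def fraction_below(iq: float, mean: float = 100, sd: float = 15) -> float:
    """Deviation IQ: fraction of the norming population scoring below `iq`,
    assuming scores are normally distributed with the stated mean and SD."""
    return NormalDist(mean, sd).cdf(iq)

# A 10-year-old performing at the level of a typical 12-year-old
# has a ratio IQ of 120.
print(ratio_iq(12 * 12, 10 * 12))  # 120.0

# Roughly two-thirds of scores fall within one SD of the median (85-115).
print(round(fraction_below(115) - fraction_below(85), 3))  # 0.683

# About 5 percent of scores fall above 125.
print(round(1 - fraction_below(125), 3))  # 0.048
```

The last two figures match the article's "approximately two-thirds" and "about 5 percent" because those claims follow directly from the normal-distribution model with mean 100 and SD 15.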
IQ is the most thoroughly researched means of measuring intelligence, and by far the most widely used in practical settings. However, while IQ tests strive to measure some concepts of intelligence, they may fail to serve as accurate measures of broader definitions of intelligence. IQ tests examine some areas of intelligence while neglecting other areas, such as creativity and social intelligence.
Critics such as Keith Stanovich do not dispute the reliability of IQ test scores or their capacity to predict some kinds of achievement, but argue that basing a concept of intelligence on IQ test scores alone neglects other important aspects of mental ability.