
Down the Rabbit Hole
Paul Kiritsis, PsyD candidate, DPhil., MA (Psychology), MA (History)


Creativity: Definition, Research, and Measurement

Paul Kiritsis - Tuesday, December 06, 2016


Taken at face value, the word creativity is fairly vague and ambiguous, leaving the door wide open for idiosyncratic interpretation. Depending on one’s individual biases, it can be construed as imagination, novelty, originality, genius, talent, giftedness, the avant-garde, and other synonymous terms. Creativity derives from the verb “to create,” meaning to bring something into existence. But what exactly is it that is brought into existence? The word itself evokes images of world-fashioning, magical, or cosmogonic powers which disintegrate and reintegrate, separate and unite, dissolve and coagulate, fuse, and transmute the rudimentary elements of the world and experience into more complex qualities and harmonic unities. Many construe it as a higher-order cosmic process driving biological and cultural evolution, an integral characteristic and expression of “mind.” As aesthetically attractive as this impression seems at first blush, it comes hand in hand with an intellectual impasse: if creativity were intimately bound up with the higher nonphysical processes of mind, then by default it would stand beyond the “cognitive closed space” that scientific tools birthed by human reason can grasp and perceive. With the mechanisms and real causes of creativity beyond our epistemological reach, there would be no purpose to introspection and philosophical analysis, nor any sound reason to have recourse to experimentation and neuroimaging investigations in search of possible neural correlates. We’d have to resign ourselves to the bleak prospect that our knowledge of the phenomenon is indirect and mediate, acquired through description rather than acquaintance; we’d have to remain content with characterizing it in semantic terms without ever being able to experience it directly.

            Scholars who propound this attractive epistemological stalemate are vindicated by a consistent pattern concerning the illumination of historical and psychological insights during altered states of consciousness. It’s no secret that many scientific discoveries and ideas loaded with potential for historical novelty are made in the paradoxical world of dreams. Kekulé arrived at the ring structure of benzene after a vivid dream in which carbon atoms congregated in the form of an ouroboros, a snake biting its own tail (Mazzarello, 2003). The celebrated Indian mathematician Srinivasa Ramanujan claimed that the Hindu goddess Namagiri revealed mathematical formulas, equations, and conjectures to him in the dream state (Kelly et al., 2007). Similarly, for René Descartes a series of dreams served as inspiration for the development of the scientific method (Pagels, 2014). Because these paradigm-shattering and paradigm-making insights all emerged during states of consciousness which remain quite impervious to scientific investigation and human understanding, it stands to reason, the argument goes, that the origin and basis of the insights themselves are also unknowable and ineffable.

On the one hand, we remain cognizant of the role creativity plays in the cosmic cycles of birth, death, and regeneration; on the other, we cannot pinpoint the underlying mechanisms and causes of the phenomenon. As pessimistic as it sounds, this new mysterianism does not stymie our attempts to gather explicit propositional knowledge about its effects in three-dimensional space. We can study facets and dimensions of the generative force or power dubbed “creativity,” provided there is some agreement about what is to be studied. Sternberg (1999) asserts that there is a general consensus amongst the community of creativity researchers concerning the integral manifest characteristics of creativity. Most endorse the notion that it involves the conception, design, or construction of a concrete product or idea that is both novel and valuable; it must be new and unprecedented, and appraised as useful and appropriate according to an imposed set of external criteria. Hence a satisfactory definition in physical and descriptive terms would read something like: “The interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context” (Plucker et al., 2004, p. 90).

Not surprisingly, contentious and sometimes pretentious positions have been staked out within this flexible developmental schema on a host of other identifying details: whether creativity is a property of a tangible finished product (idea or object); a property of the inner cognitive process or processes which birth the product appraised as creative; or a property of an individual who expresses the associated traits or qualities of the phenomenon in question. Over the last fifty years of empirical research, different forms of this incommensurability have also cropped up in other schematic dichotomies: is creativity a predominantly personal or social phenomenon? Is it rare or universal? Is it domain-general or domain-specific? Is it quantitative or qualitative in nature?

Coming up with an integrative definition able to gratify every affiliate of the creativity research community while also harnessing conformity and solidarity amongst them in terms of a research agenda is quixotic, unfeasible, and far beyond the scope of the current investigation. Nonetheless, the prominence of schema problems does not necessarily invalidate prior research, nor does it render unworkable those research approaches intending to grapple with and examine a dimension of the construct. While differential definitions and methodological approaches are dire threats to internal validity, they can be circumvented if one clearly defines the term and details how it will be quantified–in other words, the connection between what will be studied and whether the methodological approach can study and measure it to a satisfactory level. In this spirit, a researcher can home in on, pursue, and study different aspects, levels, and domains of creativity: the emergence of novel ideas for a radical work of fiction in a novelist, the shift from one scientific paradigm to another (e.g., from atomic theory to quantum mechanics), the evolution of avant-garde cultural movements in the arts and literature (e.g., Surrealism, Abstract Expressionism), and the birth of new technology platforms like Facebook and Google which drive new services and applications and boost economic productivity and growth.

Despite popular impression, creativity measurement has a rich and colorful history and is quite progressive (Plucker & Runco, 1998). As early as the nineteenth century, Sir Francis Galton included creativity in the panoply of psychological phenomena which could be measured, motivating two subsequent decades of assiduous research into the nature of both creativity and imagination (Taylor & Barron, 1963). The development of rudimentary divergent thinking tests by Binet and Henri before the 1900s (Barron & Harrington, 1981) and of quasi-creative problem-solving tasks by Maier (1945) were likewise proto-scientific developments in creativity research. Much later, J. P. Guilford (1950) gave a landmark Presidential Address to the American Psychological Association (APA) lamenting the dearth of mainstream American psychological literature addressing this vital topic. Drawing attention to this “appalling” neglect was something of a call to arms, a challenge even; it had an instantaneous effect and sparked reengagement with the age-old topic, albeit within the framework of the psychometric tradition. The transposition of creativity measurement from biographical methods to the psychological laboratory had a lot to do with the methodological preferences of a coterie of researchers (Gardner, 1993); these individuals had been scrutinizing cognitive phenomena with psychometric tools, so using the same approach to study what they believed was an overlapping and related trait [creativity] would have seemed commonsensical, a logical next step.

The psychometric revolution paved the road for a first acquaintance between creativity and mainstream scientific psychology; it demystified the study of creativity, unmooring it from its abstruse conceptual grounding in exceptional genius and extending it to noneminent individuals from the general population (Sternberg, 1999). Guilford’s simple, objectively scorable assessment device made use of paper-and-pencil verbal and figural tasks to measure “divergent thinking” [a component of creative thinking] and general aptitude in puzzle-solving behaviors by asking open-ended questions like “How many different uses can you think of for a brick? A shoe?” Alternatively, a psychometrician could invite the examinee to generate innovative linguistic products like slogans (for a brand-new product, e.g., “lemon-ice”); to come up with unique names for coffee creamer; or to offer feasible suggestions on how certain products in the current market might be enhanced (e.g., how might a car be improved?).

The “creativity” scale juxtaposed responses scored for “fluency,” “flexibility,” “originality,” and “elaboration” in volunteers, typically students. Over the next few decades, psychometric creativity studies bifurcated into cognitive and characterological lines of exploration:

Creativity tests tend to be of two types–those that involve cognitive-affective skills such as the Torrance Tests of Creative Thinking… and those that attempt to tap a personality syndrome such as the Alpha Biographical Inventory… Some educators and psychologists have tried to make an issue of whether creativity is essentially a personality syndrome that includes openness to experience, adventuresomeness, and self-confidence and whether the cognitive processes of rational and logical thinking in creative thinking are precisely the same as those used by high-IQ children. (Torrance, 1979, p. 360)

Inevitably, the psychometric tradition expanded beyond the skeletal, two-dimensional vision afforded by cognition and personality to embrace more comprehensive systems theories which take things like the finished creative product, environmental features, and innovative ways of measuring personality into account (Plucker & Makel, 2010). Even though some detractors disparaged the approach as trivial and inadequate for investigating creative thinking (Sternberg, 1999), this did not prevent researchers within the tradition from extending their measurement frontiers to other domains. Today, the field of psychometric assessment for creativity is flourishing.

            The psychometric approach to quantifying the creative process has focused exclusively on “cognition that leads in various directions” (Runco, 1999, p. 577). Tests administered under this paradigm are open-ended assignments which require participants to offer multiple responses to verbal or figural prompts. The emphasis on ideational quantity makes the endeavor “divergent” in scope and diametrically opposed to the convergent forms of thinking measured by other standardized achievement and aptitude tests [IQ] in which there is only ever one correct answer. Guilford (1968) underscored this dichotomy when he wrote:

In convergent-thinking tests, the examinee must arrive at one right answer. The information given generally is sufficiently structured so that there is only one right answer… [A]n example with verbal material would be: “What is the opposite of hard?” In divergent thinking, the thinker must do much searching around, and often a number of answers will do or are wanted. If you ask the examinee to name all the things he can think of that are hard, also edible, also white, he has a whole class of things he might do. It is in the divergent-thinking category that we find the abilities that are most significant in creative thinking and invention. (p. 8, emphasis in original)

A consistent feature across all divergent thinking tests is the method of categorizing responses. All involve figural or verbal tasks, and the multiple responses given are scored for fluency, flexibility, originality, and elaboration of ideas. Ideational fluency is operationalized as the quantity of responses to a prompt; flexibility is operationalized as the number of differential categories that responses to prompts may be separated into; originality is operationalized as the uniqueness of ideas or responses; and finally, elaboration is operationalized as the embellishment or extension of ideas pertaining to each response category. Most of these assessments accentuate fluency (Plucker & Makel, 2010). Divergent thinking tests include Guilford’s Structure of the Intellect (SOI) divergent production tests (Guilford, 1967), Wallach and Kogan’s divergent thinking test (Wallach & Kogan, 1965), and the Torrance Tests of Creative Thinking (TTCT) (Torrance & Ball, 1984). Administration, scoring, and score reporting in the latter are standardized, making it especially attractive to the modern research tradition; for this reason, it has become the most popular divergent thinking test battery available today (Torrance & Ball, 1984).
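The operationalizations above can be made concrete with a toy scoring routine. The Python sketch below is purely illustrative, not the TTCT’s or any published battery’s actual scoring algorithm: the category map and normative frequencies are invented for the example, and elaboration is omitted because it requires a human rater’s judgment of detail.

```python
# Toy normative data: how often each idea appeared in a (hypothetical)
# reference sample of 100 examinees. Ideas given by fewer than 5 of
# them count as "original". All numbers are invented for illustration.
NORMS = {"build a wall": 62, "doorstop": 41, "paperweight": 33,
         "grind into pigment": 2, "heat and use as bed warmer": 1}

# Toy category map used for the flexibility score (also invented).
CATEGORIES = {"build a wall": "construction", "doorstop": "household",
              "paperweight": "household", "grind into pigment": "art",
              "heat and use as bed warmer": "household"}

def score_divergent(responses):
    """Score a list of responses to 'uses for a brick' on three dimensions."""
    fluency = len(responses)                                   # sheer quantity
    flexibility = len({CATEGORIES.get(r, "other") for r in responses})
    originality = sum(1 for r in responses if NORMS.get(r, 0) < 5)
    return {"fluency": fluency, "flexibility": flexibility,
            "originality": originality}

scores = score_divergent(["build a wall", "paperweight", "grind into pigment"])
print(scores)  # {'fluency': 3, 'flexibility': 3, 'originality': 1}
```

Note how fluency rewards quantity alone, which is exactly the overemphasis Plucker and Makel (2010) flag in existing batteries.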

            For all its popularity with creativity researchers, the voluminous psychometric tradition has not been impervious to standard postmodern critiques of quantitative research methods. Polemics incited by detractors have focused on methodological flaws, erroneous interpretations of inferential statistics, and the overemphasis on and overdependence upon divergent thinking measures in many psychometric studies (Sternberg, 1999). The dearth of construct, discriminant, and predictive validity noted by many critics also holds substantial merit. To give an example, critics have identified response set bias as the primary reason why consistent construct validity evidence has not been identified for the TTCT battery (Plucker & Makel, 2010). The scores obtained for originality and fluency may be influenced by test conditions like timed versus constraint-free administration, generic versus specific instructions, game-like versus test-like settings, and so forth (Sternberg, 1999). The criterion dilemma, in particular, has been especially problematic for researchers. Cattell and Butcher (1968) aptly summarized this concern: “obtaining a criterion score on ‘creativity’ to check the predictive power of our tests is going to present formidable conceptual and practical problems” (pp. 285-286).

            Other complaints have expounded upon the failure of divergent thinking batteries to capture the aesthetic value and phenomenal complexity of creative products, or the situational and person-environment factors influencing creative productivity. Weisberg (1993) has openly decried these paper-and-pencil tests as inadequate measures of creative thinking, stating that they “do not measure either creative thinking or the capacity to become creative” (p. 61). Wallach (1976), for his part, emphasized limitations concerning the discernment of individual differences and talent levels between participants, claiming that “subjects vary widely and systematically in their attainments–yet little if any of that systematic variation is captured by individual differences on ideational fluency tests” (p. 60). Additionally, Amabile (1983) has underscored the inability of such restrictive divergent thinking measures as fluency, flexibility, originality, and elaboration to capture the essence of the creative process.

Without a doubt, there is a lot more to creativity than identifying the statistically rarest task responses offered by a very small heterogeneous or homogeneous sample of the population within mostly rigid and intransigent laboratory settings. There is a blatant overemphasis on task-specific tests aimed at quantifying cognitive and intellectual processes within the individual, and a concomitant naivety regarding the dynamic person-product-environment interaction, the role of motivational and emotional factors, and alternative forms of creative processing and achievement (e.g., convergent thinking). Addressing these neglected areas will necessitate crossing the boundaries set by the traditional psychometric approach; methodologies will have to be revised or replaced in light of veracious criticisms, an undertaking which will demand that researchers think “creatively,” or outside the box.

            Amabile attempted to circumvent the criterion problem, that is, the conundrum surrounding the assessment of a construct whose operationalization remains equivocal, by shifting attention to the assessment of the creative products themselves. She declared that “a product or response is creative to the extent that appropriate observers independently agree it is creative” (Amabile, 1982, p. 1001). There is no need to measure creativity; it is recognized instantaneously, at first glance. Theoretically, the emphasis on “real-world creativity,” as she called it, and the assessment of creative products by a panel of expert judges address many of the aforementioned inadequacies of divergent thinking batteries; individual differences are removed from the equation, criterion problems are circumvented, and situational factors which influence creative processes may be examined.
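In spirit, the consensual technique reduces to averaging independent expert ratings and then verifying that the judges actually agree. The Python sketch below is a loose illustration of that logic, not Amabile’s published procedure; the product names, ratings, and 1–7 scale are invented, and inter-judge reliability is estimated with a simple Cronbach’s alpha treating judges as items and products as cases.

```python
from statistics import mean, pvariance

# Hypothetical data: three expert judges each rate three collages for
# creativity on a 1-7 scale. All names and numbers are invented.
ratings = {
    "collage_A": [6, 5, 6],
    "collage_B": [2, 3, 2],
    "collage_C": [4, 4, 5],
}

def consensual_scores(ratings):
    """CAT-style creativity score: the mean of independent judge ratings."""
    return {product: mean(rs) for product, rs in ratings.items()}

def cronbach_alpha(ratings):
    """Inter-judge reliability: judges as items, products as cases."""
    judges = list(zip(*ratings.values()))       # one tuple of scores per judge
    k = len(judges)
    item_vars = sum(pvariance(j) for j in judges)
    total_var = pvariance([sum(case) for case in ratings.values()])
    return k / (k - 1) * (1 - item_vars / total_var)

print(consensual_scores(ratings))            # per-product creativity scores
print(round(cronbach_alpha(ratings), 2))     # agreement among judges: 0.95
```

A score is only interpretable when the reliability coefficient is high; a low alpha would signal exactly the expert-novice and expert-author disagreements discussed below.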

However, assessment techniques for creative products are riddled with their own set of problems. During the formative stages of Amabile’s Consensual Assessment Technique (CAT), for instance, empirical evidence intimated that the level of formal training and experience required for judges to make optimum appraisals depended on the purpose of the assessment, the target domain, the skill level of the participants, and various other factors (Plucker & Renzulli, 1999). To further compound things, findings have also shown that expert and novice evaluations of creative products are infrequently equivalent, and that expert judges are liable to be more critical in their global ratings of product creativity than the authors of the works themselves (Plucker & Makel, 2010). With particular merits and limitations associated with every type of assessment, the best and most promising approach to any creativity research may, in fact, be a confluence of those methods best able to generate the kind of information being sought (Cooper, 1991; Wakefield, 1991).

In light of traditional but also emerging avenues of investigation in the field of creativity, the current study will employ a multimodal, multidimensional assessment system combining a quantitative measure, the TTCT, with a qualitative measure, the CAT. The mixed-methods approach may be the best way of harnessing a more holistic, integrative conception of creativity based on information from person, process, and environment; for evading the criterion problem faced by the voluminous psychometric tradition; and for rendering conspicuous pre-inventive structures like symbolism and analogical complexity, paragons of the creative imagination which divergent thinking batteries alone cannot detect (Kelly et al., 2007).

