Words in context:  ERPs and the lexical/postlexical distinction

 

Seana Coulson and Kara D. Federmeier

 

Department of Cognitive Science

University of California, San Diego


Correspondence to:

Seana Coulson

Cognitive Science 0515

9500 Gilman Drive

La Jolla, CA 92093-0515

email: coulson@cogsci.ucsd.edu

fax: (858) 534-1128

phone: (858) 822-4037


ABSTRACT

Classic models of language comprehension posit a distinction between lexical processing, thought to be early, fast-acting, and automatic, and post-lexical processing, thought to be slower, more controlled, and to require the output of the lexical processing system for its operation.  The lexical/post-lexical distinction is reviewed in light of data from event-related brain potentials (ERPs) collected from normal volunteers as they read or listened to words in context.  The lexical/post-lexical distinction is undermined by reports that variables assumed to influence lexical processing, including frequency, repetition, association, and orthographic legality, variables assumed to influence post-lexical processing, including predictability, and even variables assumed to influence discourse-level processing, such as congruity with global context, all similarly modulate a negative-going ERP component (N400) evident between 200 and 500 ms after word onset.  The similarity in the time course of lexical and post-lexical ERP effects argues against the temporal precedence of lexical processing.  Data demonstrating that lexical effects can be attenuated by contextual factors further argue against the automaticity of lexical processing.  The interaction of low-level and higher-level manipulations of context suggests that word processing occurs at multiple levels of analysis in parallel, such that lexical access and contextual integration are to some extent interdependent.


 

            Adult humans process language so frequently, so quickly, and so effortlessly that it is easy to forget how complicated the task really is.  To comprehend a spoken sentence, for example, the listener must first process a complicated sensory signal.  Auditory speech contains information in multiple frequency bands, distributed across multiple time scales.  Millisecond-level differences in timing must be appreciated to differentiate the consonants that, in turn, differentiate words like "pat" and "bat."  At the same time, the listener must be able to recognize tonal patterns unfolding over seconds or even minutes to know, for example, if she is being asked for information or ordered to do something.  Moreover, the processing of the sensory signal itself is only the beginning, as the listener must search for familiar patterns in the input, such as words and grammatical constructions, retrieve various sorts of information associated with each, and combine them appropriately. 

The crux of the problem, then, is that language comes in sound by sound or fixation by fixation and yet must be appreciated at word, sentence, and discourse levels.  Part of what makes language processing so difficult is that it requires so many levels of analysis and would seem to recruit so many different subprocesses.  In addressing the question of how the human brain might approach this task, many psycholinguists have sought refuge in the hierarchical structure of language and assumed that the brain exploits this structure level by level, using the outputs of low-level processes as inputs for the construction of meaning at the sentence and discourse level.  On this approach, the role of the word is taken to be somewhat analogous to the role of the cell in biology (e.g., Balota, 1994): although not the most primitive level of analysis, it is assumed to be central to the comprehension process.

Many classic models include the assumption that word-level (lexical) processing necessarily precedes processing at higher levels of analysis. Accordingly, psycholinguists traditionally divide word comprehension into two stages, lexical and post-lexical (e.g., Fodor, 1983; Forster, 1981; Seidenberg, Tanenhaus, Leiman, & Bienkowski, 1982; Swinney, 1991).  Lexical processing is a fast, automatic stage of processing that involves recognition, in which the phonemic or orthographic input is matched to templates in the mental lexicon, and access, in which syntactic and semantic information associated with the word is activated and thereby made available to later stages of processing.  In these later stages, known as post-lexical processing, slower, controlled processes operate to combine linguistic information with contextual and background knowledge in order to integrate the new word with the developing representation at the message level.

 

Lexical Access

 

In the psychological literature, the lexical access process has been studied by testing various factors that affect performance on tasks such as naming and lexical decision, which guarantee that the lexical entry has been accessed.  This research has identified several fundamental phenomena as characterizing the process.  Of these, the most fundamental is the frequency effect, the finding that less time is needed to respond to high frequency words than to low frequency words (Balota & Chumbley, 1984; Forster & Chambers, 1973; Taft, 1979; Whaley, 1978).  Frequency effects have been accounted for in two different, but related, ways.  On models in which lexical access is construed as the result of activation, frequency effects are thought to result because high frequency words have a higher resting level of activation than low-frequency words (e.g., Morton, 1969), or because connections between the input and the lexical level are weighted more heavily for high than low frequency words (e.g., Seidenberg & McClelland, 1989).  On models in which lexical access is construed, instead, as the result of a search process (e.g., Forster, 1979), it is assumed that words are stored and consequently searched in the order of their frequency.  In either case, frequency facilitates the identification and access processes so that lexical information is available more quickly for high frequency words than for low.
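The activation account of the frequency effect can be caricatured in a few lines of code. The following Python sketch is purely illustrative: the lexicon, frequency counts, and timing parameters are all invented rather than drawn from any corpus or fitted model. The only point is the qualitative relationship, in which higher resting activation (here tied to log frequency) yields faster recognition:

```python
import math

# Invented lexicon: frequency per million tokens (numbers are made up).
lexicon = {"the": 50000, "dog": 800, "sloop": 4}

def recognition_time(word, base=600.0, gain=40.0):
    """Hypothetical recognition time in ms: words with higher log frequency
    have higher resting activation, so they reach threshold sooner."""
    return base - gain * math.log10(lexicon[word] + 1)

rt_high = recognition_time("the")    # high-frequency word: fastest
rt_mid = recognition_time("dog")
rt_low = recognition_time("sloop")   # low-frequency word: slowest
```

A search model would produce the same ordinal pattern by a different route, with frequency determining search order rather than resting activation, which is precisely why behavioral latencies alone have difficulty distinguishing the two accounts.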

Another watershed finding is the semantic priming effect, or the finding that response latencies to a word are reduced when it is preceded by a semantically or associatively related word (Meyer & Schvaneveldt, 1971).  Though the details differ, a majority of the models of lexical access explain semantic priming as resulting from the spread of activation or excitation through the semantic network.  The reason people respond more quickly to "dog" when it follows "cat" than when it follows "cloud" is that the representations for the two related words are connected via an associative pathway.  Much work in the semantic priming literature has assessed the automaticity of this process, where automatic processes are defined as those which are fast-acting, and independent of subjects' conscious control (Posner & Snyder, 1975).

A third finding is that people are faster to respond to a word the second time it is presented than the first, a phenomenon known as repetition priming (Forbach, Stanners, & Hochhaus, 1974; Kirsner & Smith, 1974).  On activation models, repetition effects result because once a word's lexical entry has been accessed, its activation decays slowly toward its resting level.  Similarly, on search models, repetition effects result because the lexical entry remains "open" for a short while after the access process (Forster & Davis, 1984).  Forster and Davis (1984) have also proposed that the initial presentation of a word results in the formation of an episodic trace which facilitates the decision component of lexical decision, without necessarily speeding lexical access.

Taken together, these findings suggest that word information is organized associatively and that access is affected by the frequency and recency with which that lexical entry has been used in the past.  Further, researchers using the lexical decision task have identified two effects that suggest important constraints on the architecture of the word identification system.  The first is the lexical status effect, the finding that words elicit shorter response times than nonwords (Rubenstein, Garfield, & Millikan, 1970).  The second is the nonword legality effect, the finding that pseudowords with legal orthographic structure elicit slower response times than nonwords constructed from random letter strings (Stanners, Forbach, & Headley, 1971; Stanners & Forbach, 1973). These findings suggest that while the word identification system can reject orthographically illegal strings out of hand, pseudowords are treated differently and perhaps processed for some time as words before being rejected.

 

Methodological Difficulties

 

As it happens, interpretation of priming phenomena is not as straightforward as one would like, and a certain amount of controversy surrounds almost all of the priming effects described above.  In each case the controversy concerns whether the effect reflects facilitation in the lexical stage of processing or in the post-lexical stage.  This is particularly true of semantic priming, where disputes center on the automaticity of the processes underlying the effect.  Theoretically, these disputes concern the issue of whether the associative organization suggested by priming phenomena is a feature of the mental lexicon, or whether it reflects processes involved in integration at other levels.

Methodologically, these disputes have been rooted in claims about the influence of lexical versus post-lexical processes on the experimental tasks used to elicit semantic priming (reviewed in, e.g., Neely, 1991).  For example, reaction time paradigms are subject to criticism because they require participants to terminate linguistic processing and to initiate performance of the experimental task.  The imposition of a secondary task makes it much harder for the psycholinguist to know what portion of the variance in observed priming effects is attributable to lexical access, and what portion is caused by extra-lexical processing related to task performance.

Uncovering information about the specific nature and time course of lexical access is even more difficult because one must make inferences from measures taken at the end point of processing -- the vocal output or the button press -- rather than from more direct access to the intermediate stages of processing.  Indeed, modern neuroimaging techniques are exciting precisely because they promise to afford more direct access to these intermediate stages.  By measuring oxygenation and blood flow associated with neural activation, positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) can indicate which brain areas are active during particular experimental tasks.  Similarities and differences in activation patterns might then be used to help interpret discrepancies between different behavioral tasks thought to index lexical access.

However, for all their strengths, the temporal resolution of hemodynamic measures such as PET and fMRI is fundamentally limited by the character and time course of the signal they measure.  Even the most optimistic assessment of the temporal resolution of fMRI puts it between 0.5 and 1 second (Friston, Fletcher, Josephs, Holmes, Rugg, & Turner, 1998), suggesting this method is not adequate for probing the transient process of lexical access.  To directly answer questions about the when, how, and what of lexical access, one must instead turn to techniques that monitor the electrical activity of neurons in real time.  Although the currents associated with the firing of a single neuron are too small to be detected non-invasively, when a large pool of neurons oriented in roughly the same direction fire synchronously, their summed electrical activity is often detectable at the scalp (Kutas & Dale, 1997).  The electrical fields that result when such pools of neurons -- particularly pyramidal cells of the neocortex -- polarize or depolarize are measured in electroencephalography (EEG), by placing electrodes on the surface of the scalp.

Rather than analyzing EEG elicited in the course of cognitive processing, electrophysiologists interested in cognition have found it useful to examine event-related brain potentials (ERPs).  ERPs are patterned voltage changes in the on-going EEG that are time locked to the onset of a stimulus.  The ERP is obtained by recording subjects' EEG and then averaging across time-locked stimuli.  This allows the researcher to focus on aspects of the EEG that are temporally correlated with the processing of the class of eliciting events.  One particular advantage of ERPs is that they can serve as an on-line measure of subjects' processing that does not interfere with language processing itself.
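The averaging logic is simple enough to sketch in a few lines. The following Python fragment uses simulated data (an invented N400-like deflection plus random noise; all parameter values are illustrative, not drawn from any recording) to show how a small time-locked signal emerges from much larger background EEG:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 250                                # sampling rate in Hz (illustrative)
times = np.arange(-0.1, 0.6, 1 / fs)    # epoch from -100 to +600 ms
n_trials = 500

# A hypothetical N400-like deflection: a negativity peaking 400 ms post-onset.
signal = -4.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))

# Single trials are the time-locked signal buried in much larger background EEG.
trials = signal + rng.normal(scale=20.0, size=(n_trials, times.size))

# Averaging across time-locked trials: background activity that is not
# phase-locked to word onset cancels toward zero, while the ERP survives.
erp = trials.mean(axis=0)

peak_latency = times[np.argmin(erp)]
```

On any single trial the noise dwarfs the signal, but the average recovers a negativity peaking near 400 ms, which is the logic behind averaging over many presentations of words from the same stimulus class.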

Averaging the EEG time locked to the presentation of words typically results in a series of positive and negative peaks in the ERP known as components.  Because these peaks are visually salient and seem to be modulated in functionally interpretable ways, cognitive electrophysiologists have devoted a good deal of attention to their characterization.  Components are typically identified by their polarity, that is, whether they represent positive- or negative-going voltage, and their latency with respect to the eliciting event. For example, the P300 is a positive-going component that typically peaks 300 ms after the onset of an unusual task-relevant event. Besides polarity, latency, and functional characterization, an important aspect of the definition of an ERP component is its topography, or amplitude distribution over the scalp.  To be confident that one is looking at the "same" activity in two tasks, for example, it is important that the distribution of the activity elicited in a particular time-window be similar in both cases.  Though the spatial resolution of ERPs is limited, differences in the topography of two ERP effects almost certainly indicate differences in the underlying neural generators of those effects.
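In the same simulated spirit, a component can be characterized programmatically by its polarity and peak latency within a search window, and the topographies of two effects can be compared across electrodes. Everything below, the toy waveform and the five-electrode amplitude vectors alike, is invented for illustration:

```python
import numpy as np

fs = 250
times = np.arange(0, 0.8, 1 / fs)

# A toy averaged waveform: a P300-like positivity peaking near 300 ms followed
# by an N400-like negativity peaking near 450 ms. All amplitudes are invented.
wave = (6.0 * np.exp(-((times - 0.30) ** 2) / (2 * 0.03 ** 2))
        - 4.0 * np.exp(-((times - 0.45) ** 2) / (2 * 0.05 ** 2)))

def peak_in_window(t, v, lo, hi):
    """Polarity and latency of the largest absolute deflection in [lo, hi] s."""
    mask = (t >= lo) & (t <= hi)
    seg, seg_t = v[mask], t[mask]
    i = np.argmax(np.abs(seg))
    return ("positive" if seg[i] > 0 else "negative"), seg_t[i]

pol, lat = peak_in_window(times, wave, 0.35, 0.60)

# Topography: amplitude distributions over a hypothetical set of five
# electrodes; a high correlation is consistent with shared neural generators.
effect_a = np.array([1.0, 2.0, 3.5, 3.0, 2.0])
effect_b = np.array([0.9, 2.1, 3.4, 3.1, 1.9])
r = np.corrcoef(effect_a, effect_b)[0, 1]
```

Real component identification is considerably subtler (overlapping components, latency jitter, reference choice), but this captures the three defining attributes the text describes: polarity, latency, and scalp distribution.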

 

Lexical and Post-Lexical Priming Effects on the N400

 

Interestingly, cognitive electrophysiologists have identified a component in the brainwaves that is modulated by many of the same variables that affect tasks taken to index lexical access.  The N400 is a negative-going wave evident between 200 and 500 ms after the presentation of a word, so-called because it tends to peak around 400 ms after stimulus onset (Kutas & Hillyard, 1980).  Though this effect is observed all over the scalp, it is largest over centroparietal areas and is usually larger on the right side of the head than the left (Kutas, Van Petten, & Besson, 1988).  The N400 is elicited by words in all modalities, whether written, spoken, or signed (Holcomb & Neville, 1990; Kutas, 1987).  Moreover, the size, or amplitude, of the N400 is affected in a way that is analogous in many respects to response times in naming and lexical decision tasks.

For instance, in both word lists and in sentences, high frequency words elicit smaller N400s than low frequency words (Smith & Halgren, 1989).  The N400 also evidences semantic priming effects, in that the N400 to a word is smaller when it is preceded by a related word than when it is preceded by an unrelated word (Bentin, 1985; Holcomb, 1988).  Third, the N400 is sensitive to repetition -- smaller to subsequent occurrences of a word than to the first (Rugg, 1985; Van Petten, Kutas, Kluender, Mitchiner, & McIsaac, 1991).  Furthermore, while pseudowords elicit even larger N400s than do real words, orthographically illegal nonwords elicit no N400 at all (Kutas & Hillyard, 1980).

At least at first glance, then, the N400 seems like it might provide a window into lexical access -- into a period of processing when pseudowords are still being distinguished from real words and when variables like frequency, repetition, and semantic relatedness have a significant impact.  However, in addition to its sensitivity to variables thought to influence lexical access, the N400 is sensitive to factors that many would consider post-lexical.  For example, one of the best predictors of N400 amplitude for a word in a given sentence is that word's cloze probability (Kutas, Lindamood, & Hillyard, 1984).  N400 amplitudes are largest for unexpected items such as semantic anomalies, and smallest for contextually congruous words with high cloze probabilities.  In general, N400 amplitude varies inversely with the predictability of the target word in the preceding context.
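Cloze probability itself is a simple quantity: the proportion of people who complete a sentence frame with a given word in an offline norming task. A minimal sketch, with made-up norming counts (the frame echoes the classic "bread with butter" example, but these numbers are invented):

```python
from collections import Counter

# Hypothetical norming data for the frame "He spread the warm bread with ___":
# each entry is one participant's completion. Counts are invented.
completions = ["butter"] * 17 + ["jam"] * 2 + ["honey"] * 1

counts = Counter(completions)
n_respondents = sum(counts.values())

def cloze(word):
    """Cloze probability: the proportion of respondents producing `word`."""
    return counts[word] / n_respondents

high_cloze = cloze("butter")   # expected completion: small N400 predicted
zero_cloze = cloze("socks")    # anomalous completion: large N400 predicted
```

The empirical claim in the text is then that N400 amplitude varies inversely with this proportion: the higher the cloze probability of the presented word, the smaller its N400.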

To complicate matters further, N400 is also sensitive to discourse-level factors such as the activation of contextually relevant schemas.  For example, St. George, Mannes, & Hoffman (1994) recorded participants' ERPs as they read ambiguous paragraphs that either were or were not preceded by a disambiguating title.  Although the local contextual clues provided by the paragraphs were identical in the titled and untitled conditions, words in the untitled paragraphs elicited greater amplitude N400s.  Similarly, Van Berkum, Hagoort, & Brown (1999) found that words which elicit N400s of approximately equal amplitude in an isolated sentence do not elicit equivalent N400s when they occur in a context that makes one version more plausible than the other.  For instance, "quick" and "slow" elicit similar N400s in "Jane told her brother that he was exceptionally quick/slow."  However, "slow" elicits a much larger N400 when this same sentence is preceded by "By five in the morning, Jane's brother had already showered and had even gotten dressed."

This sensitivity of the N400 to higher-order aspects of language would seem to rule it out as a valid index of lexical access.  However, eye tracking and behavioral measures have also suggested that higher-order aspects of language can have strong effects on the processing of individual words.  For example, Duffy, Henderson, & Morris (1989) measured naming latencies for words in sentence contexts and found that both lexical and sentential factors affected priming. Moreover, sentence level congruity elicited priming even when none of the individual words in the sentence worked as associative primes.  Similarly, priming effects as indexed by phoneme monitoring and cross-modal naming latencies also suggest that when words are presented in contexts such as sentences and paragraphs, priming depends on congruity with the larger context, rather than local associations between words in the sentence (Foss & Speer, 1991; Hess, Foss, & Carroll, 1995).  ERP data reviewed above are thus congruent with these behavioral findings and, further, suggest that lexical, sentential, and discourse factors all affect the brain response to words in the same time window and in a qualitatively similar manner.

 

Context, Context, Context

 

When we look independently at the effects of various kinds of linguistic variables on the brain's activity, we find that they all manifest as a change in the amplitude of the scalp-recorded N400.  This is surprising because, based on the models we have assumed thus far, we expect differences in the time course, mechanism, and automaticity of the two stages of processing.  Lexical processing is thought to be early, fast-acting, and automatic. Post-lexical processing is thought to be slower, more controlled, and to require the output of the lexical processing system for its operation.  Consequently, lexical variables are predicted: (1) to have an earlier influence on word processing than post-lexical variables; (2) to influence word processing via a qualitatively different mechanism than post-lexical variables; and (3) to be more robust than post-lexical variables, as post-lexical processing mechanisms depend on the completion of lexical processing.

In contrast to these predictions, N400 priming effects seem to reflect constraints arising from several different levels: lexical variables such as orthographic regularity, frequency, repetition, and semantic relatedness, as well as post-lexical variables such as predictability in the sentence context, congruity with the overall discourse context, and activation of a contextually relevant schema.  The similarity in the timing and the scalp distribution of the various N400 priming effects does not bear out the prediction that lexical processing occurs earlier and employs a distinct processing mechanism.  However, the mere fact that all these different variables influence N400 amplitude does not necessarily imply that a single process subserves lexical, sentential and discourse level processing.  It could be the case, for instance, that lexical and post-lexical effects on the N400 are independently generated.  To assess this possibility, it is necessary to compare the effect of lexical and post-lexical variables in a single group of participants in cases where the task conditions are closely matched.

Accordingly, Kutas (1993) compared the effects of lexical and sentential context by comparing the N400 priming effect in sentences and word pairs.  During the first half of the experiment, participants' ERPs were recorded as they read sentences for comprehension.  Half of these sentences ended with a predictable word (e.g., "Before exercising Jake always stretches his muscles.")  The other half ended with a congruous but unexpected word, such as "Fred put the worm on the table."  In the second half of the experiment, participants' ERPs were recorded as they read word pairs derived from the sentences, such as "biceps - muscles" and "hook - table". Kutas (1993) reasoned that this design would enable comparison of ERPs to the same word in roughly similar lexical and sentential contexts.

Because qualitatively different cognitive and neural mechanisms should elicit qualitatively different ERP effects, one can use the ERP measure to compare the time course and nature of the mechanism that underlies word pair priming with those that underlie the integration of word and sentence meaning.  However, Kutas (1993) found that the ERPs elicited by sentence final words were qualitatively similar to those elicited by the second word in each word pair.  In fact, the priming effect (i.e., the reduction in the N400 amplitude to the more predictable sentence completion or the related word) was larger and began earlier in the sentence condition than in the word pairs.  Kutas (1993) argued that the observed similarities in the waveshape, scalp distribution, and experimental modulation of the N400 in lexical and sentential contexts strongly suggest that a single process underlies both sorts of priming effects.

In a similarly motivated experiment, Van Petten (1993) examined the relationship between lexical and post-lexical processing mechanisms by looking directly at the interaction of semantic priming and sentence context effects within the same material.  Van Petten manipulated sentential context by comparing congruous sentences such as "When the moon is full it is hard to see many stars of the Milky Way," with structurally similar incongruous sentences such as "When the moon is rusted it is available to buy many stars of the Santa Ana."  Lexical context was manipulated by including conditions with unrelated word pairs (such as "insurance" and "refused").  For example, "moon" and "stars" in the congruous and incongruous sentences above could be compared with "insurance" and "refused" in "When the insurance investigators found out that he'd been drinking they refused to pay the claim" and "When the insurance supplies explained that he'd been complaining they refused to speak the keys."  Like Kutas (1993), Van Petten (1993) found that ERP effects elicited by the manipulation of lexical and sentential context were strikingly similar in their polarity, waveshape, and distribution over the head.

The fact that lexical and post-lexical ERP effects display a similar time course argues against a serial model in which lexical processing precedes post-lexical processing.  Moreover, Van Petten and colleagues (1999) present data that suggest that semantic integration can begin before word recognition could possibly be complete. In an experiment designed to compare the relative timing of phonemic analysis and sentential context effects, Van Petten and colleagues (1999) collected ERPs to words in spoken sentences such as, "The highway was under construction, so they had to take a lengthy detour."  Besides congruous expected endings, Van Petten et al. included three types of unexpected endings: plain incongruous words (such as "table") with no phonological similarity to the expected completion, rhyme incongruous words (such as "contour") that shared the same final syllable as the expected ending, and cohort incongruous words (such as "detail") that shared the same initial syllable as the expected ending.

If semantic integration requires the prior output of completed word recognition processes, we would expect the N400 to be relatively insensitive to the phonemic similarity of the various incongruent endings with the expected completion.  However, if semantic integration can proceed even on the basis of partial information, we might expect the N400 effect to begin earlier in the plain incongruous words than in the cohort incongruous words that share an initial syllable with the expected completion.  Van Petten and colleagues (1999) found that the N400 effect for plain and rhyme incongruous words began approximately 200 ms after the onset of the word, while the N400 effect for cohort incongruous words like "detail" did not begin until 375 ms post-onset.

The later onset of the N400 effect in the words that initially sounded like the expected completion suggests that sentential integration can operate on the basis of partial information.  However, definitive evidence on this issue requires knowledge of the recognition point for each of the words, that is, the point in time at which participants were able to accurately recognize words based on partial phonemic input.  Because Van Petten and colleagues (1999) had previously established the recognition point for these words in a gating task, they were able to record ERPs time locked to the recognition point for each sentence-final word.  In this comparison, Van Petten and colleagues found that while ERPs to cohort incongruous words like "detail" diverged at their recognition point, ERPs to incongruous words like "table" and "contour" diverged well before.  This finding indicates that semantic integration can proceed with partial phonemic input, before word recognition is complete, and thus does not necessarily require a prior stage of lexical access.
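Comparisons like these rest on deciding when two condition waveforms first diverge. One crude criterion, sketched here on simulated waveforms whose onsets are set by hand to mimic the reported 200 ms versus 375 ms pattern (nothing here is real data), is the first time point at which the condition difference exceeds a threshold for some minimum run of consecutive samples:

```python
import numpy as np

fs = 250
times = np.arange(0, 0.8, 1 / fs)

def n400_like(onset, amp=-4.0):
    """Toy waveform whose negativity begins at `onset` seconds (half-sine shape)."""
    w = np.zeros_like(times)
    m = times >= onset
    w[m] = amp * np.sin(np.clip((times[m] - onset) / 0.3, 0.0, 1.0) * np.pi)
    return w

congruous = np.zeros_like(times)
plain_incong = n400_like(0.200)    # effect beginning ~200 ms (cf. "table")
cohort_incong = n400_like(0.375)   # effect delayed to ~375 ms (cf. "detail")

def effect_onset(cond, baseline, thresh=1.0, run=10):
    """First time point where |condition - baseline| exceeds `thresh`
    for `run` consecutive samples (a crude onset criterion)."""
    over = np.abs(cond - baseline) > thresh
    for i in range(len(over) - run):
        if over[i:i + run].all():
            return times[i]
    return None

o_plain = effect_onset(plain_incong, congruous)
o_cohort = effect_onset(cohort_incong, congruous)
```

Published onset analyses use more principled statistics (e.g., consecutive significant t-tests across subjects), but the logic of comparing divergence points across conditions is the same.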

Evidence reviewed above thus suggests that lexical and post-lexical variables operate similarly in a similar time-window.  Moreover, Van Petten et al.'s findings suggest that the "postlexical" process of integration can begin before lexical processing is fully complete.  However, what about the final prediction, that lexical effects are somehow more basic?  It might be the case, for example, that even if the stages are not as discrete or mechanistically distinct as we thought, lexical effects are more automatic, and less subject to outside influences.

 

When word and context collide

 

To test this latter hypothesis, Van Petten and Kutas (1990) compared the effects of word frequency and ordinal position in a sentence to see if the two factors were additive or interactive.  These researchers assumed that frequency effects index lexical-level processing and ordinal position effects index the increasing influence of context as the representation of sentence-level meaning is built up.  They reasoned further that if the lexical-level processes that underlie frequency effects are independent of the sentence-level processes that underlie ordinal position effects, the two variables should influence the response measure autonomously and have additive effects on N400 amplitude.  However, if the processes underlying frequency and ordinal position effects are not independent of one another, the two factors should interact -- and the character of their interaction might be instructive about the underlying mechanisms of priming.
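The additivity logic amounts to a simple interaction contrast on cell means. The 2 x 2 tables below contain invented numbers, chosen only to illustrate the two outcomes Van Petten and Kutas distinguished: a constant frequency effect across sentence position versus a frequency effect that vanishes late in the sentence:

```python
import numpy as np

# Invented mean N400 magnitudes (arbitrary units; larger = bigger N400) for a
# 2 x 2 design: rows = position (early, late), cols = frequency (low, high).
additive = np.array([[6.0, 4.0],     # early: frequency effect = 2
                     [4.0, 2.0]])    # late:  frequency effect = 2 (additive)

interactive = np.array([[6.0, 4.0],  # early: frequency effect = 2
                        [2.0, 2.0]]) # late:  frequency effect = 0 (interaction)

def interaction_contrast(m):
    """Difference of differences; zero means the two factors are additive."""
    return (m[0, 0] - m[0, 1]) - (m[1, 0] - m[1, 1])

c_add = interaction_contrast(additive)       # 0.0
c_int = interaction_contrast(interactive)    # 2.0
```

A nonzero contrast (evaluated against its error term in a real ANOVA) is the signature of interacting, rather than independent, processes.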

In fact, Van Petten & Kutas (1990) found main effects of frequency, position, and an interaction between the two.  Low frequency words elicited larger N400 components than did high frequency words. Moreover, when Van Petten & Kutas examined the sentence-intermediate words more closely, they found a downward linear trend in N400 amplitude with word position.  However, the frequency effect was most prominent in words at the beginning of sentences, somewhat attenuated in sentence-intermediate words, and entirely absent from sentence-final words.  Van Petten & Kutas argued that this interaction, especially the fact that sentence-final words exhibit no N400 frequency effect whatsoever, suggests that the buildup of contextual knowledge influences the initial activation and retrieval of lexical items, as well as integrative processes typically referred to as post-lexical.

Other recent findings also call into question the automaticity and primacy of so-called lexical level variables.  For example, Coulson and colleagues (2000) compared ERPs to lexically associated words (such as "life" and "death"), in contexts where they either made sense together, as in, "When someone has a heart attack a few minutes can make the difference between life and death," or did not, as in, "The gory details of what he had done convinced everyone that he deserved life in death."  Although the word pairs used in the study elicited N400 priming effects when presented in word lists, they did not do so in these sentence contexts.  The finding that sentential context can apparently eliminate lexical priming effects argues against the idea that this sort of priming reflects automatic lexical access.

Camblin, Federmeier, & Kutas (2000) compared N400 priming effects for lexical associates (and matched nonassociates) that occurred within the same clause, as in "The book described the kings and the queens but lacked details she needed," with the same word pairs separated by a clause boundary, as in "The book described the kings, but the queens were not mentioned at all."  Camblin et al. found that the same pairs of words separated by the same number of words generate ERP association effects with a different time course and distribution when separated by a clause boundary than when they occur within the same clause.  When the associated words occurred in the same clause, the second word elicited a canonical N400 priming effect.  However, when the words were separated by a clause boundary, the priming effect was later and more frontal in distribution.

Taken together, these findings suggest that the impact of lexical variables such as frequency and lexical association is greater when higher-level contextual information is impoverished.  Frequency affects N400 amplitude more in the initial words of a sentence than at the end.  When lexical association and contextual congruity both help the reader to construct the message-level representation, they each affect N400 amplitude (Van Petten, 1993). However, when lexical association and contextual congruity conflict in their utility for the construction of the message-level representation, contextual congruity overrides the effect of lexical association.  In sum, putatively lexical effects assumed to be automatic and independent seem to be modulated by higher-order aspects of context, either semantic (as in Coulson et al., 2000) or syntactic (as in Camblin, et al., 2000).

Moreover, these findings in the ERP literature are buttressed by similar findings from psycholinguists using more conventional methods. For example, Cutler, Norris, McQueen, and Butterfield (2000), using a manipulation similar to that employed by Coulson et al. (2000), found that cross-modal associative priming disappears in sentence contexts. Murray (2000) has presented eye tracking data collected for sentences where lexical associations are pitted against grammatical cues.  He reports clear effects of structural factors, but no lexical priming, echoing Camblin et al.'s finding that grammatical factors affect lexical priming.  Finally, Traxler and colleagues (submitted) have presented a series of eye tracking studies designed to uncover the nature of lexical priming effects.  They crossed a variety of lexical-level variables with plausibility, and found that, while plausibility always affected reading times, among the lexical variables only repetition affected early processing measures, such as first fixation and gaze duration.  Traxler and colleagues argue that their data undermine models that advocate an important role for intralexical spreading activation, and instead suggest that priming effects reflect facilitated integration into the discourse model.


Conclusions


We began with a reasonable assumption, seen often in the psycholinguistic literature, that language is constructed hierarchically, with the word as the basic unit of analysis.  On such a view, a very basic, perhaps automatic, lexical access process is needed to identify words and call up word-related information.  Only after lexical access has taken place can higher-order semantic and syntactic representations be built up, via a variety of post-lexical mechanisms linking linguistic and nonlinguistic information.

However, the data reviewed above call into question the assumption that there are mechanistically and temporally distinct stages for "lexical access" and "post-lexical," context-based, integrative effects.  The amplitude of the N400 component is modulated by variables known to influence word recognition, such as frequency, repetition, associative priming, lexicality, and orthographic legality.  N400 amplitude is also modulated by variables thought to influence post-lexical integration, such as predictability in the sentence, as well as by variables that influence still higher levels of comprehension, such as congruity with the larger discourse context.  Further, when lexical (e.g., frequency, association) and post-lexical (e.g., congruity) variables have been jointly manipulated, both affect the ERPs beginning around 200 ms after stimulus onset and seem to similarly modulate the neural generators whose activity is visible at the scalp as the N400 effect.

The fact that activity in the same brain areas is modulated by lexical and post-lexical variables is inconsistent with models that posit discrete mechanisms for lexical and post-lexical priming effects.  Further, the similarity in the time course of the effects elicited by manipulations of lexical and post-lexical variables indicates that the underlying process or processes operate at nearly the same time.  Moreover, the serial model, in which lexical processing necessarily precedes post-lexical contextual integration, is undermined by Van Petten et al.'s (1999) finding that the N400 effect can begin before word recognition is even complete.  Far from being impervious to outside influence, lexical effects can be modulated by contextual factors, and can even be eliminated.  Such findings argue against an underlying process that is automatic.

Moreover, while most disputes about whether a particular variable operates lexically or post-lexically are rooted in claims about the influence of these processes on the experimental task, the findings reviewed here concern electrophysiological measures collected without the imposition of potentially confounding task demands.  Consequently, N400 priming effects cannot be dismissed as task artifacts merely because they may be post-lexical.  While so-called post-lexical influences on, for instance, the lexical decision task may be presumed irrelevant to normal language comprehension, the effects of high- as well as low-level information on N400 amplitude are a natural consequence of the word processing mechanisms that give rise to it.  Although effects of frequency, repetition, and lexical associates have traditionally been considered relevant to the initial stages of word recognition, the data reviewed above suggest these factors are actually relevant to the integration of word meaning with contextual information (see also Traxler et al., submitted).

            In other words, the interaction of low-level variables such as word association and frequency with higher-level manipulations of context suggests that the brain processes subserving language comprehension do not make the same distinctions as the psycholinguist.  Rather, the brain seems to use all the information it can, as early as it can, to constrain the comprehension task, from the analysis of the perceptual signal through word recognition and the construction of pragmatic and discourse models.  This way of thinking about language processing mirrors in some ways a recent shift in theories of visual scene processing.  In that domain, it was once thought that understanding a scene was completely hierarchical, involving an initial stage in which the raw sensory input was analyzed so that objects could be identified, and a subsequent set of post-identification processes that computed the relationships between identified objects in order to construct the scene.  However, the growing consensus is that processing actually takes place at multiple levels of analysis in parallel, such that object identification and scene analysis are to some extent interdependent (see Henderson & Hollingworth, 1999, for a review).  On the modern view of high-level visual processing, object recognition and scene analysis, though separable, are interactive and mutually constraining.

            In the end, a scene is more than a collection of objects, just as a sentence is more than a collection of words.  While processing language surely involves retrieving information associated with incoming words, it also requires integrating lexical information with higher-order grammatical representations, with local contextual information, and with the background knowledge activated by discourse-level representations in order to fully understand what those words convey (Coulson, in press).  For example, to understand the significance of “water” in “Everyone had so much fun diving from the tree into the pool, we decided to put in a little water,” it is not sufficient to know just that “water” is a mass noun, and that it denotes a colorless, odorless liquid essential for plant and animal life.  To integrate “water” into the context and appreciate the joke, the reader must exploit grammatical knowledge (i.e., gapping) to infer that the water goes into the pool, contextual knowledge to know that the agents are diving from the tree into a pool that either does or does not have water in it, and background knowledge to know that pools without water are a lot less fun to dive into.  Given the complexity of contextual integration and the necessity of rapidly combining information from multiple levels of analysis, the brain’s approach (concurrent use of information about orthography, recency, frequency, lexical association, and congruity at the sentence and discourse levels) seems well-suited to the task of constructing the message.  In sum, emerging data suggest the brain does not process words and context, but words in context.


References


Balota, D. (1994). Visual word recognition.  In M. Gernsbacher (Ed.), Handbook of Psycholinguistics.  New York: Academic Press, Inc., pp. 303-358.

Balota, D.A. & Chumbley, J.I. (1984). Are lexical decisions a good measure of lexical access? The role of word-frequency in the neglected decision stage.  Journal of Experimental Psychology: Human Perception and Performance 10, 340-357.

Bentin, S., McCarthy, G., & Wood, C. C. (1985). Event-related potentials associated with semantic priming. Electroencephalography and Clinical Neurophysiology, 60, 343-355.

Camblin, C.C., Federmeier, K.D., & Kutas, M. (2000, March).  The effect of clause boundaries on the processing of related words: An electrophysiological analysis.  Poster presented at the 13th Annual CUNY Conference, San Diego, CA.

Coulson, S. (2000, in press). Semantic Leaps: Frame-shifting and Conceptual Blending in Meaning Construction.  Cambridge and New York: Cambridge University Press.

Coulson, S., Van Petten, C., Federmeier, K., Folstein, J., Weckerly, J., & Kutas, M. (2000, March).  Lexical and sentential context effects: An ERP study of the difference between life and death and life in prison.  Paper presented to the 13th Annual CUNY Conference, San Diego, CA.

Cutler, A., Norris, D., McQueen, J., & Butterfield, S. (2000, March). Cross-modal associative priming which disappears in sentence context.  Poster presented at the 13th Annual CUNY Conference, San Diego, CA.

Duffy, S. A., Henderson, J. M., & Morris, R. K. (1989). Semantic facilitation of lexical access during sentence processing. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15(5), 791-801.

Fodor, J. (1983). The Modularity of Mind.  Cambridge, MA: MIT Press.

Forbach, G., Stanners, R., & Hochhaus, L. (1974). Repetition and practice effects in a lexical decision task. Memory & Cognition 2, 337-339.

Forster, K.I. (1979). Levels of processing and the structure of the language processor.  In W.E. Cooper & E. Walker (Eds.), Sentence processing.  Hillsdale, NJ: Erlbaum Associates Limited, pp. 27-85.

Forster, K.I. (1981).  Priming and the effects of sentence and lexical contexts on naming time: Evidence for autonomous lexical processing.  Quarterly Journal of Experimental Psychology, 33a, 465-495.

Forster, K.I. & Chambers, S.M. (1973). Lexical access and naming time. Journal of Verbal Learning and Verbal Behavior 12, 627-635.

Forster, K.I. & Davis, C. (1984). Repetition priming and frequency attenuation in lexical access.  Journal of Experimental Psychology: Learning, Memory, and Cognition 10, 680-698.

Foss, D.J. & Speer, S.R.  (1991).  Global and local context effects in sentence processing.  In R.R. Hoffman & D.S. Palermo (Eds.), Cognition and Symbolic Processes: Applied and Ecological Perspectives, pp. 115-139.  Hillsdale, NJ: Erlbaum.

Henderson, J. M., & Hollingworth, A. (1999). High-level scene perception. Annual Review of Psychology, 50, 243-271.

Hess, D.J.,  Foss, D.J. & Carroll, P. (1995). Effects of local and global context on lexical processing during language comprehension.  Journal of Experimental Psychology: General 124, 62-82.

Holcomb, P. J. (1988). Automatic and attentional processing: An event-related brain potential analysis of semantic priming. Brain & Language, 35(1), 66-85.

Holcomb, P. J., & Neville, H. J. (1990). Auditory and visual semantic priming in lexical decision: A comparison using event-related brain potentials. Language & Cognitive Processes, 5(4), 281-312.

Kirsner, K. & Smith, M.C. (1974). Modality effects in word recognition. Memory & Cognition 2, 637-640.

Kutas, M. (1993). In the company of other words: Electrophysiological evidence for single-word and sentence context effects. Special Issue: Event-related brain potentials in the study of language. Language & Cognitive Processes, 8(4), 533-572.

Kutas, M., & Dale, A. (1997). Electrical and magnetic readings of mental functions. In M. D. Rugg (Ed.), Cognitive Neuroscience (pp. 197-242). Hove, East Sussex: Psychology Press.

Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207(4427), 203-205.

Kutas, M., Lindamood, T. E., & Hillyard, S. A. (1984). Word expectancy and event-related brain potentials during sentence processing. In S. Kornblum & J. Requin (Eds.), Preparatory States and Processes (pp. 217-237). Hillsdale, NJ: Lawrence Erlbaum Associates.

Kutas, M., Neville, H. J., & Holcomb, P. J. (1987). A preliminary comparison of the N400 response to semantic anomalies during reading, listening, and signing. Electroencephalography and Clinical Neurophysiology, Supplement 39, 325-330.

Kutas, M., Van Petten, C., & Besson, M. (1988). Event-related potential asymmetries during the reading of sentences. Electroencephalography & Clinical Neurophysiology, 69(3), 218-233.

Meyer, D. & Schvaneveldt, R. (1971).  Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations.  Journal of Experimental Psychology 90, 227-234.

Morton, J. (1969).  Interaction of information in word recognition.  Psychological Review 76, 165-178.

Murray, W.S. (2000, March). Meaning, structure, and the interpretive process.  Poster presented at the 13th Annual CUNY Conference, San Diego, CA.

Neely, J. H. (1991). Semantic priming effects in visual word recognition: A selective review of current findings and theories. In D. Besner & G. W. Humphreys (Eds.), Basic processes in reading: Visual word recognition. (pp. 264-336). Hillsdale, NJ: Lawrence Erlbaum Associates.

Posner, M.I. & Snyder, C.R. (1975). Attention and cognitive control. In R.L. Solso (Ed.), Information Processing and Cognition: The Loyola Symposium. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc, pp. 55-85.

Rubenstein, H., Garfield, L., & Millikan, J.A. (1970). Homographic entries in the internal lexicon. Journal of Verbal Learning and Verbal Behavior 18, 757-767.

Rugg, M. D. (1985). The effects of semantic priming and word repetition on event-related potentials. Psychophysiology, 22(6), 642-647.

Seidenberg, M.S. & McClelland, J.L. (1989). A distributed, developmental model of word recognition and naming. Psychological Review 96, 523-568.

Seidenberg, M.S., Tanenhaus, M.K., Leiman, J.M., & Bienkowski, M.  (1982). Automatic access of the meanings of ambiguous words in context: Some limitations of knowledge-based processing.  Cognitive Psychology 14, 489-537.

Smith, M.E. & Halgren, E. (1989). Dissociation of recognition memory components following temporal lobe lesions. Journal of Experimental Psychology: Learning, Memory, and Cognition 15, 50-60.

Stanners, R.F. & Forbach, G.B. (1973). Analysis of letter strings in word recognition.  Journal of Experimental Psychology 98, 31-35.

Stanners, R.F., Forbach, G.B., & Headley, D.B. (1971). Decision and search processes in word-nonword classification. Journal of Experimental Psychology 90, 45-50.

Swinney, D. (1991).  The resolution of indeterminacy during language comprehension: Perspectives on modularity in lexical, structural and pragmatic processing.  In G.B. Simpson (Ed.), Understanding Word and Sentence.  Advances in Psychology, Vol 77.  New York: North-Holland.  Pp. 367-385.

Taft, M. (1979). Recognition of affixed words and the word frequency effect. Memory & Cognition 7, 263-272.

Traxler, M.J., Foss, D.J., Seely, R.E., Kaup, B., & Morris, R.K. (2000, submitted). Priming in sentence processing: Intralexical spreading activation, schemas, and situation models.

van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (1999). Semantic integration in sentences and discourse: Evidence from the N400. Journal of Cognitive Neuroscience, 11(6), 657-671.

Van Petten, C. (1993). A comparison of lexical and sentence-level context effects in event-related potentials. Special Issue: Event-related brain potentials in the study of language. Language & Cognitive Processes, 8(4), 485-531.

Van Petten, C., Coulson, S., Rubin, S., Plante, E., & Parks, M. (1999). Time course of word identification and semantic integration in spoken language. Journal of Experimental Psychology: Learning, Memory, & Cognition, 25(2), 394-417.

Van Petten, C., & Kutas, M. (1990). Interactions between sentence context and word frequency in event-related brain potentials. Memory & Cognition, 18(4), 380-393.

Van Petten, C., Kutas, M., Kluender, R., Mitchiner, M., & McIsaac, H. (1991). Fractionating the word repetition effect with event-related potentials. Journal of Cognitive Neuroscience, 3(2), 131-150.

Whaley, C.P. (1978). Word-nonword classification time. Journal of Verbal Learning and Verbal Behavior 17, 143-154.