At first blush this seems to be a surprising claim. If the primate has acquired the rich conceptual system of humans, then presumably its preexisting inference system should allow it to use these newly acquired systems to construct more sophisticated theories that it can then use to, say, better navigate a complex terrain or make better and more complex inferences about its world. But this is not the case. What prevents this fictitious primate from making use of the new systems and concepts it has acquired is the fact that our inference system operates on propositions, not on concepts.
The primate can of course communicate its preexisting concepts to its fellows, and it can make inferences typical of primates, but since it does not have the ability to create recursive and hierarchically structured expressions it cannot construct or comprehend propositions necessary for higher-order inference.
In other words, this imagined primate has concepts and knowledge of first-order logic, which it can use and comprehend, but that is not enough to produce and comprehend propositions nor to make second-order and higher-order inferences. In order to be able to do the latter, the primate in the thought experiment must — but does not — possess recursion. A fortiori, this primate cannot comprehend the entailment relations between propositions — it cannot think those sorts of thoughts. Now compare this fictitious primate to real world humans: we can think those sorts of thoughts.
This is because the underlying mechanisms of language in humans work by providing us with higher-order logic (cf. Crain on the relation between natural language and classical logic); that is, by providing us with a computational system that creates recursive and hierarchically structured expressions that display productivity and systematicity and that we use, amongst other uses, to talk and think about the world.
In what follows, then, I want to pursue the stronger claim in regard to language being an instrument of thought, and the evidence that may be adduced in its favour. The type of evidence and sorts of arguments can be divided into two kinds: the first is the argument from linguistics, according to which the externalisation of language — in, say, verbal communication — is a peripheral phenomenon because the phonological features of expressions in linguistic computations are secondary and perhaps irrelevant to the conceptual-intentional features of the expressions.
The second is the design-features argument, according to which the design features of language, especially when seen from the perspective of their internal structure, suggest that language developed and functions for purposes that are not primarily those of communication. A strong argument in favour of language being primarily an instrument of thought has to do with the phonological properties of lexical items. Briefly, the idea is that the internal computational processes of the language faculty (syntax in a broad sense) generate linguistic objects that are employed by the conceptual-intentional systems (systems of thought) and the sensorimotor systems to yield language production and comprehension.
Notice that on this view the language faculty is embedded within, but separate from, the performance systems. Phon contains information in a form interpretable by the sensorimotor systems, including linear precedence, stress, temporal order, prosodic and syllable structure, and other articulatory features. Sem contains information interpretable by the systems of thought, including event and quantification structure, and certain arrays of semantic features.
The expression Exp is generated by the operation Merge, which takes objects already constructed and constructs from them a new object. If two objects are merged, and principles of efficient computation hold, then neither will be changed — this is indeed the result of the recursive operation that generates Exp. Such expressions are not the same as linguistic utterances but rather provide the information required for the sensorimotor systems and the systems of thought to function, largely in language-independent ways.
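The behaviour of Merge described above — binary combination of already-constructed objects, without linear order and without altering the inputs — can be sketched in a few lines of Python. This is an illustrative toy only, not a formal minimalist implementation; the lexical items and labels are my own examples.

```python
# Toy sketch of Merge as order-free, non-destructive set formation.
# frozenset gives us exactly the needed properties: unordered,
# immutable, and nestable (so recursion yields hierarchy).

def merge(x, y):
    """Combine two already-constructed syntactic objects into a new
    object, leaving both inputs unchanged (no tampering)."""
    return frozenset({x, y})

# Recursive application builds hierarchy without any linear order:
vp = merge("read", merge("the", "book"))      # [read [the book]]
cp = merge("will", merge("John", vp))         # [will [John [read [the book]]]]

# The result is unordered: merging in either order gives the same object.
assert merge("the", "book") == merge("book", "the")
# The inputs are immutable, so merging them changes neither:
assert vp == frozenset({"read", frozenset({"the", "book"})})
```

The point of the sketch is only that hierarchy and recursion come for free from repeated binary combination, while linear order is simply absent from the generated object.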
In other words, the sensorimotor systems and the systems of thought operate independently of but at times in close interaction with the faculty of language. A mapping to two interfaces is necessary because the systems have different and often conflicting requirements. That is, the systems of thought require a particular sort of hierarchical structure in order to, for example, calculate relations such as scope; the sensorimotor systems, on the other hand, often require the elimination of this hierarchy because, for example, pronunciation must take place serially. The instructions at the Sem interface that are interpreted by the performance systems are used in acts of talking and thinking about the world — in, say, reasoning or organising action.
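The conflict between the two interfaces can be made concrete with a small sketch: flattening a hierarchical structure into a serial string, as pronunciation requires, destroys exactly the information that scope calculation needs. The structures and the example phrase are my own illustration, with hierarchy represented as nested tuples for readability.

```python
def linearize(node):
    """Flatten a hierarchical structure into the serial string the
    sensorimotor systems require, discarding the hierarchy."""
    if isinstance(node, tuple):
        return " ".join(linearize(child) for child in node)
    return node

# Two distinct hierarchies: [old [men and women]] vs [[old men] and women]
high = ("old", ("men", ("and", "women")))   # 'old' modifies both nouns
low = (("old", "men"), ("and", "women"))    # 'old' modifies only 'men'

# Serialisation collapses the distinction:
assert linearize(high) == linearize(low) == "old men and women"
```

The two structures differ in what 'old' takes scope over, but their serialised outputs are identical: the hierarchy the systems of thought need is precisely what serial pronunciation eliminates.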
On this view, then, linguistic expressions provide a perspective in the form of a conceptual structure on the world, for it is only via language that certain perspectives are available to us and to our thought processes. This is the sense in which I take language to be an instrument of thought. Language does not structure human thought in a Whorfian way, nor does it merely express pre-formed thoughts; rather, language with its expressions arranged hierarchically and recursively provides us with a unique way of thinking and talking about the world.
Lexical items, then, and all expressions generated from them, are linguistic objects with a double interface property: they have phonological and semantic features through which the linguistic computations can interact with other cognitive systems — indeed, the only principles allowed under the minimalist program are those that can function at the interfaces. Thus, if one were to imagine an order of operations, the process would be as follows: first a lexical item is created with syntactic, phonological, and semantic features. Then, in the process known as Spell Out, the phonological features are sent to the sensorimotor interface, leaving the syntactic and semantic features together to be sent to the conceptual-intentional interface (cf. Burton-Roberts). This is strong evidence in favour of the thesis that language is an instrument of thought, for the central computations in which lexical meanings are produced are carried out independently of any consideration as to how or whether they are to be communicated. Thus, the externalisation of language is a peripheral phenomenon in the sense that the phonological features of expressions in linguistic computations are peripheral to the syntactic and semantic features of these expressions.
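The order of operations just described — a feature bundle split at Spell Out, with Phon routed one way and Syn plus Sem travelling together the other — can be pictured with a toy sketch. The feature names and the example item are my own illustration, not a claim about any formal feature inventory.

```python
# Toy illustration of the double interface property: a lexical item
# bundles phonological, syntactic, and semantic features, and Spell Out
# routes them to the two interfaces.

book = {
    "phon": {"segments": "bʊk", "stress": 1},   # interpretable by sensorimotor systems
    "syn":  {"category": "N"},
    "sem":  {"concept": "BOOK"},
}

def spell_out(item):
    """Send phonological features to the sensorimotor interface; the
    syntactic and semantic features travel on together to the
    conceptual-intentional interface."""
    to_sensorimotor = item["phon"]
    to_conceptual_intentional = {"syn": item["syn"], "sem": item["sem"]}
    return to_sensorimotor, to_conceptual_intentional

phon_side, sem_side = spell_out(book)
assert "concept" not in phon_side            # meaning plays no role in pronunciation
assert sem_side["sem"]["concept"] == "BOOK"  # thought keeps syntax and meaning together
```

The sketch makes the asymmetry visible: nothing on the conceptual-intentional side depends on the phonological features, which is the sense in which externalisation is peripheral.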
In addition to the above, we have independent evidence from comparative, neuropathological, developmental, and neuroscientific research that supports the existence of an asymmetry between the interfaces in favour of the semantic side, pushing externalisation to the periphery. The work of Laura-Ann Petitto, for example, has shown that speech per se is not critical to the human language acquisition process. That is, the acquisition of language occurs in the same way in all healthy children, irrespective of the modality in which the child is exposed to language (speech in hearing children, sign in deaf children, and even the tactile modality).
This suggests that the brain is hardwired to tune in to the structure and meaning of what is expressed, but that the modality through which this is transmitted is irrelevant (Petitto). In other words, the syntax and semantics of language are processed in the same brain site regardless of the modality in which they are expressed and perceived. Such evidence gives weight to the biolinguistic argument that syntax and semantics are computed together without recourse to the way in which (if at all) the product of this computation (say, lexical meanings) is to be externalised.
There is further evidence of this sort: it appears that the neural specialisation for processing language structure is not modifiable, whereas the neural pathways for externalising language are highly modifiable (Petitto et al.). This again suggests that the language areas of the brain are optimised for processing linguistic structures and meaning, and that their externalisation is not only secondary but also of no fixed type — any modality will do, as long as the brain can interpret the required linguistic patterns in the input.
Recent work by Ding et al. lends further support to this picture: cortical circuits track abstract linguistic structures that are internally constructed and that are based on syntax.
Further evidence of the modality independence of language, indeed the condition under which it is most acute, comes from cases where there is practically no externalisation (perhaps only the ability to say a few phonemes) but where the receptive language ability is completely intact. This form of developmental speech dyspraxia suggests that the ability to comprehend language and make normal grammaticality judgments does not depend on normal language production (Stromswold). The work of Caplan et al. points in the same direction.
That is, the linguistic competence at the syntactic and semantic levels remains intact, but these patients have difficulty in linking this competence with the performance systems — they have difficulty in externalising the internally constructed expressions. The above is direct evidence in support of the claim that there exists a separation in the underlying mechanisms of language between, on the one hand, the processing of structure and meaning and, on the other hand, their externalisation. That is, not only is the processing of non-language information dissociated from the processing of information used in language, but the processing of the language information itself is also separated into Phon and Sem, just as biolinguistics predicts.
Note that this asymmetry regards the underlying mechanisms of language and thus does not apply in the same way to natural languages. So whilst it makes sense to separate Phon from Sem when one studies the underlying mechanisms of language, specific natural languages are a different matter. That is, a natural language encapsulates the use of the Phon and Sem interfaces — in conjunction with other modules — in the act of communication via sound or sign, and so the Phon interface is inseparable from what a natural language is and the way it is used.
In contrast to this, the claim that language is an instrument of thought regards the part of the underlying mechanisms of natural languages that creates the hierarchical and recursive expressions that provide humans with a unique way of thinking about the world. This part on its own is of course not yet a particular natural language, for it is not yet in a form in which it can be externalised.
In order to become a natural language it needs to be paired with the Phon interface and then, together with other systems, be used in the act of communication. Returning to the double interface object, one might wonder why the asymmetry between the interfaces is in favour of the semantic side, pushing externalisation to the periphery. I think the answer comes in the form of the design-features argument. Those who do not share the general framework of biolinguistics will perhaps be unconvinced by the argument from linguistics above. The design-features argument, on the other hand, has much wider scope and is not entirely dependent upon a particular school of linguistic thought.
By design features I mean the kind of features one discovers upon investigating language as a system in its own right. Such features include, amongst many others, displacement, linear order, agreement, and anaphora. One may then investigate the communicative and computational efficiency of these features as they relate to language as a whole system, and ask whether these features are better optimised for communication or for computation. Of course, many comparisons of this sort can be made, and some particular selection that depicts a conflict between communicative efficiency and computational efficiency might seem tendentious, but I think that the conflicts of the sort highlighted below, in which computational efficiency wins out, represent one of several chinks in the armour of the orthodoxy that assumes that the function of language is communication.
Let us now consider the case of the explanation of the linear order of expressions. The linear order imposed on verbal expressions is not a language-specific constraint: it is not a consequence of the structure of the language faculty. Rather, it is a necessary consequence of the structure of the sensorimotor systems and the obvious fact that expressions cannot be produced or comprehended in parallel.
Assuming this is the case, then, what is the effect of such constraints on, say, the computations involved in parsing sound inputs into linguistic representations? If language is optimised for communication and if sound is our main source of externalisation, then one would predict that many of the features of language would respect linear order and favour operations that support it even if they conflict with computational efficiency.
Closer investigation, however, suggests that this is not the case.
Consider, for example, how co-reference is interpreted in sentences such as 'In her study, Jane is mostly productive', where 'her' and 'Jane' are interpreted as being co-referential. It was initially thought (Langacker; Jackendoff; Lasnik) that in order to explain the difference between, say, (1) and (2) below, a linear relationship of precede-and-command was needed, according to which the pronoun cannot both precede and command its antecedent. The explanation used to be that in (1) the pronoun precedes and commands the full noun phrase and therefore the co-referential interpretation is blocked.
In (2), conversely, it was claimed that the pronoun precedes but does not command the full noun phrase and therefore a co-referential interpretation is permitted. However, as Reinhart shows, the domains over which the precede-and-command operations are defined are quite arbitrary; the parts of the expressions that are preceded or commanded by other parts often do not correspond to independently characterisable syntactic units.
On independent grounds, then, it would be surprising if such an arbitrary linear relationship turned out to be the operative explanation of co-reference. This is clear in (3) and (4) below, which cannot be explained by precede-and-command operations (cf. Reinhart, 36ff.). In (3a) the pronoun cannot refer to Mary, whereas in (3b) the co-referential interpretation is permitted.
However, when we consider (4), the version of the sentences in (3) before preposing, the co-referential interpretation is blocked in both (4a) and (4b). Thus, no ordering explanation such as precede-and-command can account for the difference between (3a) and (3b). Or compare (5a) and (5b), both of which are allowed by the relation of precede-and-command but only one of which has an acceptable co-referential reading.
As Reinhart shows with a range of other examples, there is good reason to think that, instead of a linear order operation, the explanation of co-reference has to do with the structural properties of the expressions. According to the structure-dependent analysis, co-referential interpretations are only permitted when anaphors are bound by another nominal.
This binding is a structure-sensitive and asymmetric relation according to which a subject can bind an object, but an object cannot bind a subject. In regard to the above examples, there is an asymmetry between the co-reference options of subjects and those of objects (or non-subjects), for in cases with preposed constituents forward pronominalisation is impossible where the pronoun is the subject — as in (3a) and (5a) — but possible where the pronoun is not the subject — as in (3b) and (5b).
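The structure-sensitivity and asymmetry of binding can be made concrete with a toy c-command check. This is a deliberately simplified sketch, not Reinhart's formal definition: the tree shape, the example words, and the assumption that each word occurs once are all my own illustration.

```python
# Toy sketch of c-command: x c-commands y iff a sister of x contains y.
# Trees are nested tuples; in [S Jane [VP praised her]], the subject
# c-commands the object but not conversely.

def subtrees(node):
    """Yield a node and all of its constituents."""
    yield node
    if isinstance(node, tuple):
        for child in node:
            yield from subtrees(child)

def c_commands(x, y, tree):
    """True iff x c-commands y in tree.
    (Simplification: assumes each word occurs only once.)"""
    if not isinstance(tree, tuple):
        return False
    if x in tree:  # x is an immediate daughter at this level
        return any(sister != x and y in list(subtrees(sister))
                   for sister in tree)
    return any(c_commands(x, y, child) for child in tree)

clause = ("Jane", ("praised", "her"))          # [S Jane [VP praised her]]
assert c_commands("Jane", "her", clause)       # subject can bind the object
assert not c_commands("her", "Jane", clause)   # object cannot bind the subject
```

The asymmetry falls out of the hierarchy alone: nothing in the check refers to which word precedes which, which is exactly the contrast with the discarded precede-and-command account.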