Blog

How the limits of the mind shape human language

(Reblogged from The Conversation)


When we speak, our sentences emerge as a flowing stream of sound. Unless we are really annoyed, We. Don’t. Speak. One. Word. At. A. Time. But this property of speech is not how language itself is organised. Sentences consist of words: discrete units of meaning and linguistic form that we can combine in myriad ways to make sentences. This disconnect between speech and language raises a problem. How do children, at an incredibly young age, learn the discrete units of their languages from the messy sound waves they hear?

Over the past few decades, psycholinguists have shown that children are “intuitive statisticians”, able to detect patterns of frequency in sound. The sequence of sounds rktr is much rarer than intr. This means it is more likely that intr could occur inside a word (interesting, for example), while rktr is likely to span two words (dark tree). The patterns that children can be shown to subconsciously detect might help them figure out where one word begins and another ends.
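
To make the statistical idea concrete, here is a minimal sketch in Python (my own illustration, not the researchers’ actual method or stimuli): it counts how often one sound follows another in a toy stretch of unsegmented “speech” and estimates transitional probabilities, with low-probability transitions being plausible word boundaries. The corpus and the letter-based “sounds” are invented for the example.

    from collections import Counter

    # Toy unsegmented "speech"; the corpus and letter-based "sounds" are invented
    # purely for illustration.
    corpus = "adarktreeadarkcaveadarkroominterestingintenseinto"

    pair_counts = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    first_counts = Counter(corpus[:-1])

    def transition_prob(a, b):
        """Estimate P(next sound is b | current sound is a) from the corpus."""
        return pair_counts[a + b] / first_counts[a]

    # A transition that stays inside words ('i' -> 'n', as in "interesting") comes out
    # much more probable than one spanning a likely boundary ('k' -> 't', as in "dark tree").
    print(f"P(n|i) = {transition_prob('i', 'n'):.2f}")   # high: word-internal
    print(f"P(t|k) = {transition_prob('k', 't'):.2f}")   # low: likely word boundary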

One of the intriguing findings of this work is that other species are also able to track how frequent certain sound combinations are, just like human children can. In fact, it turns out that we’re actually worse at picking out certain patterns of sound than other animals are.

Linguistic rats

One of the major arguments in my new book, Language Unlimited, is the almost paradoxical idea that our linguistic powers can come from the limits of the human mind, and that these limits shape the structure of the thousands of languages we see around the world. 

One striking argument for this comes from work carried out by researchers led by Juan Toro in Barcelona over the past decade. Toro’s team investigated whether children learned linguistic patterns involving consonants better than those involving vowels, and vice versa.

Vowels and consonants. Monkey Business Images/Shutterstock

They showed that children quite easily learned a pattern of nonsense words that all followed the same basic shape: you have some consonant, then a particular vowel (say a), followed by another consonant, that same vowel, yet one more consonant, and finally a different vowel (say e). Words that follow this pattern would be dabale, litino and nuduto, while those that break it are dutone, bitado and tulabe. Toro’s team tested 11-month-old babies, and found that the kids learned the pattern pretty well.

But when the pattern involved changes to consonants as opposed to vowels, the children just didn’t learn it. When they were presented with words like dadeno, bobine and lulibo, which have the same first and second consonant but a different third one, the children didn’t see this as a rule. Human children found it far easier to detect a general pattern involving vowels than one involving consonants.
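
As a concrete (and entirely unofficial) rendering of the two rule types, here is a short Python sketch of my own; the vowel set, helper names and word lists simply restate the examples above and have nothing to do with the lab’s actual stimuli or analysis.

    # Check six-letter CVCVCV nonsense words against the two rule types described above.
    VOWELS = set("aeiou")

    def split_cv(word):
        """Split a CVCVCV word into its consonants and vowels."""
        consonants = [ch for ch in word if ch not in VOWELS]
        vowels = [ch for ch in word if ch in VOWELS]
        return consonants, vowels

    def follows_vowel_rule(word):
        """First two vowels identical, third different (the rule the babies learned)."""
        _, v = split_cv(word)
        return v[0] == v[1] != v[2]

    def follows_consonant_rule(word):
        """First two consonants identical, third different (the rule the babies did not learn)."""
        c, _ = split_cv(word)
        return c[0] == c[1] != c[2]

    for w in ["dabale", "litino", "nuduto", "dutone", "bitado", "tulabe"]:
        print(w, follows_vowel_rule(w))      # True for the first three only

    for w in ["dadeno", "bobine", "lulibo"]:
        print(w, follows_consonant_rule(w))  # True for all three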



The team also tested rats. The brains of rats are known to detect and process differences between vowels and consonants. The twist is that the rat brains were too good: the rats learned both the vowel rule and the consonant rule easily.

Children, unlike rats, seem to be biased towards noticing certain patterns involving vowels and against ones involving consonants. Rats, in contrast, look for patterns in the data of any sort. They aren’t limited in the patterns they detect, and so they generalise rules about syllables that are invisible to human babies.

Rat language. Maslov Dmitry/Shutterstock

This bias in how our minds are set up has, it seems, influenced the structure of actual languages.

Impossible languages

We can see this by looking at the Semitic languages, a family that includes Hebrew, Arabic, Amharic and Tigrinya. These languages have a special way of organising their words, built around a system where each word can be defined (more or less) by its consonants, but the vowels change to tell you something about the grammar.

For example, the Modern Hebrew word for “to guard” is really just the three consonant sounds sh-m-r. To say, “I guarded”, you put the vowels a-a in the middle of the consonants, and add a special suffix, giving shamarti. To say “I will guard”, you put in completely different vowels, in this case e-o and you signify that it’s “I” doing the guarding with a prefixed glottal stop giving `eshmor. The three consonants sh-m-r are stable, but the vowels change to make past or future tense. 
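
To see the template idea in miniature, here is a small sketch (my own, using the transliterations above rather than real Hebrew orthography): the consonantal root stays fixed while a vowel-and-affix template varies to mark the grammar.

    def build(root, template):
        """Fill a template with the root consonants C1, C2, C3."""
        c1, c2, c3 = root
        return template.format(C1=c1, C2=c2, C3=c3)

    root = ("sh", "m", "r")                         # the root "guard"
    print(build(root, "{C1}a{C2}a{C3}ti"))          # shamarti, "I guarded"
    print(build(root, "`e{C1}{C2}o{C3}"))           # `eshmor, "I will guard"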

We can also see this a bit in a language like English. The present tense of the verb “to ring” is just ring. The past is, however, rang, and you use yet a different form in The bell has now been rung. Same consonants (r-ng), but different vowels.



Our particularly human bias to store patterns of consonants as words may underpin this kind of grammatical system. We can learn grammatical rules that involve changing the vowels easily, so we find languages where this happens quite commonly. Some languages, like the Semitic ones, make enormous use of this. But imagine a language that is the reverse of Semitic: the words are fundamentally patterns of vowels, and the grammar is done by changing the consonants around the vowels. Linguists have never found a language that works like this. 

We could invent a language that worked like this, but, if Toro’s results hold up, it would be impossible for a child to learn naturally. Consonants anchor words, not vowels. This suggests that our particularly human brains are biased towards certain kinds of linguistic patterns, but not towards other equally possible ones, and that this has had a profound effect on the languages we see across the world.

Charles Darwin once said that human linguistic abilities are different from those of other species because of the higher development of our “mental powers”. The evidence today suggests it is actually because we have different kinds of mental powers. We don’t just have more oomph than other species, we have different oomph.

Where Do the Rules of Grammar Come From?

Reblogged from Psychology Today

 David Adger
Those three big oranges

When we speak our native language we unconsciously follow certain rules. These rules are different in different languages. For example, if I want to talk about a particular collection of oranges, in English I say 

Those three big oranges

In Thai, however, I’d say 

Sôm jàj sǎam-lûuk nán

This is, literally, Oranges big three those, the reverse of the English order. Both English and Thai are pretty strict about this, and if you mess up the order, speakers will say that you’re not speaking the language fluently.

The rule of course doesn’t mention particular words, like big or nán; it applies to whole classes of words. These classes are what linguists call grammatical categories: things like Noun, Verb, Adjective and less familiar ones like Numeral (for words like three or five) and Demonstrative (the term linguists use for words like this and those). Adjectives, Numerals and Demonstratives give extra information about the noun, and linguists call these words modifiers. The rules of grammar tell us the order of these whole classes of words.

Where do these rules come from? The common-sense answer is that we learn them: as English speakers grow up, they hear the people around them saying many, many sentences, and from these they generalize the English order; Thai speakers, hearing Thai sentences, generalize the opposite order (and English-Thai bilinguals get both). That’s why, if you present an English or Thai speaker with the wrong order, they’ll immediately detect something is wrong.

But this common-sense answer raises an interesting puzzle. The rules in English, from this kind of perspective, look roughly as follows:

The normal order of words that modify nouns is:

  • Demonstratives precede Numerals, Adjectives and Nouns
  • Numerals precede Adjectives and Nouns
  • Adjectives precede Nouns

There are 24 possible ways of ordering the four grammatical categories. If the order in different languages is simply a result of speakers learning their languages, then we’d expect to see all 24 possibilities available in languages around the world. Everything else being equal, we’d also expect the various orders to be pretty much equally common.
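
The figure of 24 is just 4 factorial, the number of permutations of the four categories; a quick sketch (mine, for the arithmetic only):

    from itertools import permutations

    categories = ["Demonstrative", "Numeral", "Adjective", "Noun"]
    orders = list(permutations(categories))

    print(len(orders))   # 24 possible orderings
    print(orders[0])     # ('Demonstrative', 'Numeral', 'Adjective', 'Noun'), the English order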

However, in the 1960s, the comparative linguist Joseph Greenberg looked at a bunch of unrelated languages and found that this simply wasn’t true. Some orders were much more common than others, and some orders simply didn’t seem to be present in languages at all. More recent work by Matthew Dryer looking at a much larger collection of languages has confirmed Greenberg’s original finding. It turns out that the English and Thai orders are by far the most common, while other orders—such as that of Kîîtharaka, a Bantu language spoken in Kenya, in which you say something like Oranges those three big—are much rarer. Dryer, like Greenberg, found that some orders just didn’t exist at all.

This is a surprise if speakers learn the order of words from what they hear (which they surely do), and each order is just as learnable as the others. It suggests either that some historical accident has led to certain orders being more frequent in the world’s languages, or, alternatively, that the human mind somehow prefers some orders to others, so these are the ones that, over time, end up being more common.

To test this, my colleagues Jennifer Culbertson, Alexander Martin, Klaus Abels, and I are running a series of experiments. In these, we teach people an invented language, with specially designed words that are easily learnable, whether the native language of our speakers is English, Thai or Kîîtharaka. We call this language Nápíjò and we tell the people who are doing the experiments that it’s a language spoken by about 10,000 people in rural South-East Asia. This is to make Nápíjò as real as possible to people.

Though we keep the words in Nápíjò the same for all our speakers, we mess around with the order of words. When we test English speakers, we teach them a version of Nápíjò with the Adjectives, Numerals and Demonstratives after the Noun (unlike in English); we also teach Nápíjò to Thai speakers, but we use a version with all these classes of words before the noun (unlike in Thai).

The idea behind the experiments is this: We teach speakers some Nápíjò, enough so they know the order of the noun and the modifiers, but not enough so they know how the modifiers are ordered with respect to each other. Then we have them guess the order of the modifiers. Their guesses tell us about what kind of grammar rule they are learning, and the pattern of the guesses should tell us something about whether that grammar rule is coming from their native language, or from something deeper in their minds.

In the experiments, we show speakers pictures of various situations (for example, a girl pointing to a collection of feathers) and we give our speakers the Nápíjò phrase that corresponds to these. The Nápíjò phrases that the participants hear for the diagrams below would correspond to red feather on the left and to that feather on the right. We give the speakers enough examples so that they learn the pattern.

Alexander Martin, Klaus Abels, David Adger and Jennifer Culbertson

Red Feather and That Feather

Once the participants in the experiment have learned the pattern, we then test to see what they would do when you have two modifiers: for example, an Adjective and a Numeral, or a Demonstrative and an Adjective. We have, of course, somewhat evilly, not given our speakers any clue as to what they should do. They have to guess!

Now, if the speakers are simply using the order of words from their native language, then they should be more likely to guess that the order of words in Nápíjò would follow it. That’s not, however, what English or Thai speakers do.

For a picture that shows a girl pointing at “those red feathers”, English speakers don’t guess that the Nápíjò order is Feathers those red, which would keep the English order of modifiers but is very rare cross-linguistically when the noun comes first. Instead, they are far more likely to go for Feathers red those, which is very common cross-linguistically (it is the Thai order), but is definitely not the English order of the modifiers.

Thai speakers, similarly, don’t keep the Thai order of Adjective and Demonstrative. Instead, they are more likely to guess the English order, which, when the noun comes last (as it does for them), is very common across languages.

This suggests that the patterns of language types discovered by Greenberg, and put on an even firmer footing by Dryer, may not be the result of chance or of history. Rather the human mind may have an inherent bias towards certain word orders over others.

Our experiments are not quite finished, however. Both English and Thai speakers use patterns in their native language that are very common in languages generally. But what would happen if we tested speakers of one of the very rare patterns? Would even they still guess that Nápíjò uses an order which is more frequent in languages of the world?

David Adger

The U.K. team with local team member Patrick Kanampiu

To find out, we’ve just arrived in Kenya, where we will do a final set of experiments. We’ll test speakers of Kîîtharaka (where the order is roughly Oranges those three big). We’ll test them on a version of Nápíjò where the noun comes last, unlike in their own language. If they behave just like Thai speakers, who we also taught a version of Nápíjò where the noun comes last, we will have really strong evidence that the linguistic nature of the human mind shapes the rules we learn. Who knows though, science is, after all, a risky business!    

(Many thanks to my co-researchers, Jenny Culbertson (Principal Investigator), Alexander Martin and Klaus Abels for comments, and to the UK Economic and Social Research Council for funding this research)

What Invented Languages Can Tell Us About Human Language

(Reblogged from Psychology Today)

Hash yer dothrae chek asshekh?

This is how you ask someone how they are in Dothraki, one of the languages David Peterson invented for the phenomenally successful Game of Thrones series. It is an idiom in Peterson’s constructed language, meaning, roughly “Do you ride well today?” It captures the importance of horse-riding to the imaginary warriors of the land of Essos in the series.

Invented, or constructed, languages are definitely coming into their own. When I was a young nerd, about the age of eleven or twelve, I used to make up languages. Not very good ones, as I knew almost nothing then about how languages work. I just invented words that I thought looked cool on the page (lots of xs and qs!), and used these in place of English words.

I wasn’t the only person that did this. J.R.R. Tolkien wrote an essay with the alluring title of A Secret Vice, all about his love of creating languages: their words, their sounds, their grammar and their history. And the internet has created whole communities of ‘conlangers’, sharing their love of invented languages, as Arika Okrent documents in her book In the Land of Invented Languages.

For the teenage me, what was fascinating was how creating a language opened up worlds of the imagination and how it allowed me to create my own worlds. I guess it’s not surprising that I eventually ended up doing a PhD in Linguistics. Those early experiments with inventing languages made me want to understand how real languages work. So I stopped creating my own languages and, over the last three decades, researched how  Gaelic, Kiowa, Hawaiian, Kiitharaka, and many other languages work.  

A few years back, however, I was asked by a TV producer to create some languages for a TV series, Beowulf, and that reinvigorated my interest in something I hadn’t done since my early 20s. It also made me realise that thinking about how an invented language could work actually helps us to tackle some quite deep questions in both linguistics and in the psychology of language.

To see this, let me invent a small language for you now. 

This is how you say “Here’s a cat.” in this language:

              Huna lo.

and this is how you say “The cat is big.”

              Huna shin mek.

This is how you say “Here’s a kitten.”

              Tehili lo.

Ok. Your turn. How do you say “The kitten is big.”?

Easy enough, right? It’s:

              Tehili shin mek.

You’ve spotted a pattern, and generalized that pattern to a new meaning.

David Adger
The cat’s tail is big

Ok, now we can get a little bit more complicated. To say that the cat’s kitten is big, you say

              Tehili ga huna shin mek.

That’s it. Now let’s see how well you’ve learned the language. If I tell you that the word for “tail” is loik, how do you think you’d say: “The cat’s tail is big.”?

Well, if “The cat’s kitten is big” is  Tehili ga huna shin mek, you might guess,

              Loik ga huna shin mek

Well done (or Mizi mashi as they say in this language!). You’ve learned the words. You’ve also learned some of the grammar of the language: where to put the words. We’re going to push that a little further, and I’ll show you how inventing a language like this can cast interesting facts about human languages into new light.

The fragments of the constructed language you’ve learned so far have come from seeing the patterns between sound (well, actually written words) and meaning. You learned that cat is huna and kitten is tehili by seeing them side by side in sentences meaning “Here’s a cat.” and “Here’s a kitten.”. You learned that the possessive meaning between cat and kitten (or cat and tail) is signified by putting the word for what is possessed first, followed by the word ga, then the word for the possessor.  

This is a little like how linguists begin to find out how a language that is new to them works. I’ve learned how many languages work in this way: by consulting with native speakers, finding out the basic words, seeing how the speaker expresses whole sentences, and figuring out what the patterns are that connect the words and the meanings. This technique allows you to discover how a language functions: what its sounds and words are, and how the words come together to make up the meanings of sentences. 

Now, how do you think you’d say: “The cat’s kitten’s tail is big.”?

You’d probably guess that it would be:

              Loik ga tehili ga huna shin mek.

The reasoning works a little like this: if the cat’s kitten is tehili ga huna, and the kitten is the possessor of the tail, then the cat’s kitten’s tail should be loik ga tehili ga huna. Similarly, if the word for “tip” is mahia, then you should be able to say

              Mahia ga loik ga tehili ga huna shin mek.

That’s a pretty reasonable assumption. In fact, it’s how many languages work. But say I tell you that there’s a rule in my invented language: there’s a maximum of two gas allowed. So you can say “The cat’s kitten’s tail is big.”, but you can’t say “The cat’s kitten’s tail’s tip is big.” My language imposes a numerical limit. Two is ok, but three is just not allowed. 
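
To make the contrast concrete, here is a tiny Python sketch of my own (not part of the invented language itself): a possessive rule with an optional numerical cap. The words are the ones introduced above; the function and the cap are illustrative inventions.

    def possessive(nouns, max_ga=None):
        """Build a possessive chain: possessed item first, 'ga' before each possessor."""
        n_ga = len(nouns) - 1
        if max_ga is not None and n_ga > max_ga:
            return f"*ungrammatical in the invented language: {n_ga} ga, but at most {max_ga} allowed"
        return " ga ".join(nouns)

    chain = ["mahia", "loik", "tehili", "huna"]   # tip, tail, kitten, cat

    print(possessive(chain[1:]))            # loik ga tehili ga huna: two ga, allowed
    print(possessive(chain, max_ga=2))      # blocked by the counting rule
    print(possessive(chain))                # no cap: how real languages behave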

Would you be surprised to know that we don’t know of a single real language in the whole world that works like this? Languages just don’t use specific numbers in their grammatical rules.

I’ve used this invented language to show you what real languages don’t  ever do.  I’ll come back in the next blog to how we can use invented languages to understand what real languages can’t do.

We can see the same property of language in other areas too. Think of the child’s nursery rhyme about Jack’s house. It starts off with This is the house that Jack built. In this sentence we’re talking about a house, and we’re saying something about it: Jack built it. The dramatic tension then builds up, and we meet … the malt!

              This is the malt that lay in the house that Jack built.

Now we’re talking about malt (which is what you get when you soak grain, let it germinate, then quickly dry it with hot air). We’re saying something about the malt: it lay in the house that Jack built. We’ve said something about the house (Jack built it) and something about the malt (it lay in the house). English allows us to combine all this into one sentence. If English were like my invented language, we’d stop. There would be a restriction that you can’t do this more than twice, so the poor rat, who comes next in the story, would go hungry.

              This is the rat that ate the malt that lay in the house that Jack built.

But English doesn’t work like my invented language. In English, we can keep on doing this same grammatical trick, eventually ending up with the whole story, using one sentence.

              This is the farmer sowing his corn, 

              That kept the cock that crow’d in the morn, 

              That waked the priest all shaven and shorn,

              That married the man all tatter’d and torn, 

              That kissed the maiden all forlorn, 

              That milk’d the cow with the crumpled horn,

              That tossed the dog, 

              That worried the cat, 

              That killed the rat, 

              That ate the malt 

              That lay in the house that Jack built.

This is actually a very strange and very persistent property of all human languages we know of: the rules of language can’t count to specific numbers. Once you do something once, you either stop, or you can go on without limit.

What makes this particularly intriguing is that there are other psychological abilities that are restricted to particular numbers. For example, humans (and other animals) can immediately determine the exact number of small amounts of things, up to 4 in fact. If you see a picture with either two or three dots, randomly distributed, you know immediately the exact number of dots without counting. In contrast, if you see a picture with five or six dots randomly distributed, you actually have to count them to know the exact number.

The ability to immediately know exact amounts of small numbers of things is called subitizing. Psychologists have shown that we do it in seeing, hearing and feeling things. We can immediately perceive the number if it’s under 4, but not if it’s over. In fact, even people who have certain kinds of brain damage that makes counting impossible for them still have the ability to subitize: they know immediately how many objects they are perceiving, as long as it’s fewer than 4.

But languages don’t do this. Some languages do restrict a rule so it can only apply once, but if it can apply more than once, it can apply an unlimited number of times.

This property makes language quite distinct from many other areas of our mental lives. It also raises an interesting question about how our minds generalize experience when it comes to language.

A child acquiring language will rarely hear more than two possessors, as I document in my forthcoming book Language Unlimited, following work by Avery Andrews. Why then do children not simply construct a rule based on what they experience? Why don’t at least some of them decide that the language they are learning limits the number of possessors to two, or three, like my invented language does?

Children’s ability to subitize should provide them with a psychological ability to use as a limit. They hear a maximum of three possessors, so why don’t they decide that their language only allows three possessors? But children don’t do this, and no language we know of has such a limit.

Though our languages are unlimited, our minds, somewhat paradoxically, are tightly constrained in how they generalize from our experiences as we learn language as infants. This suggests that the human mind is structured in advance of experience, and that it generalizes from different experiences differently, an idea which goes against a prevailing view in neuroscience that we apply the same kinds of generalizing capacity to all of our experiences.

Inventing Languages – how to teach linguistics to school students

Last week was a busy week at Queen Mary Linguistics. Coppe van Urk and I ran a week-long summer school aimed at Year 10 students from schools in East and South London on Constructing a Language. We were brilliantly assisted by two student ambassadors (Dina and Sharika) who, although their degrees are in literature rather than linguistics, are clearly linguists at heart! We spent about 20 hours with the students, and Sharika and Dina gave them a break from us and took them for lunch. The idea behind the summer school, which was funded as part of Queen Mary’s Widening Participation scheme, was to introduce some linguistics into the experience of school students.

In the summer school, we talked about sounds (phonetics), syllable structures (phonology), how words change for grammatical number and tense (morphology), and word order, agreement and case (syntax). We did this mainly through showing the students examples of invented languages (Tolkien’s Sindarin, Peterson‘s Dothraki, Okrand’s Klingon, my own Warig, Nolan‘s Parseltongue, and various others). Coppe and I had to do some quick fieldwork on these languages (using the internet as our consultant!) to get examples of the kinds of sounds and structures we were after. The very first day saw the students creating a cacophony of uvular stops, gargling on velars, and hissing out pharyngeal fricatives. One spooky, and somewhat spine-chilling, moment was the entire class, in chorus, eerily whispering Harry Potter’s Parseltongue injunction to the snake attacking Seumas:

saihaʕassi ħeθ haʃeaʕassa ʃiʔ

leave.2sg.erg him go.2sg.abs away

“Leave him! Go away!”

During the ensuing five days, the students invented their own sound systems and syllable structures, their own morphological and syntactic rules. As well as giving them examples from Constructed Languages, we also snuck in examples of natural languages which did weird things (paucals, remote pasts, rare word orders, highly complex (polysynthetic)  word structures). Francis Nolan, Professor of Phonetics at Cambridge, and inventor of Parseltongue, gave us a special guest lecture on his experiences of creating the language for the Harry Potter films, and how he snuck a lot of interesting linguistics into it (we got to see Praat diagrams of a snake language!). In addition to all this, Daniel Harbour, another colleague at Queen Mary, did a special session on how writing systems develop, and the students came up with their own systems of writing for their languages.

The work that the students did was amazing. We had languages with only VC(C) syllable structures, including phonological rules to delete initial vowels under certain circumstances; writing systems designed to match the technology and history of the speakers (including ox-plough (boustrophedon) systems that zigzagged back and forth across the page); languages where word order varied depending on the gender of the speaker; partial infixed reduplication for paucal with full reduplication for plural; writing systems adapted to be maximally efficient in how to represent reduplication (the students loved reduplication!); circumfixal tense marking with incorporated directionals; independent tense markers appearing initially in verb-initial orders, and a whole ton of other, linguistically extremely cool, features. The most impressive aspect of this, for me at least, was just how creative and engaged the students were in taking quite abstract concepts and using them to invent their language.

For me, and for Coppe, the week was exhausting, but hugely worthwhile. I was really inspired to see what the students could do, and it made me realise, more clearly than ever, that linguistics, often thought of as remote, abstract, and forbidding, can be a subject that school students can engage with. For your delectation, here are the posters that the students made for their languages.


Syntax: still autonomous after all these years!

Another day, another paper. This time a rumination on Chomsky’s Syntactic Structures arguments about the autonomy of syntax. I think, despite Fritz Newmeyer’s excellent attempts to clear this issue up over many years, it’s still reflexively misunderstood by many people outside of generative grammar. Chomsky’s claim that syntax is autonomous is really just a claim that there is syntax. Not that there is no semantics intimately connected to that syntax. Not that syntactic structures aren’t susceptible to frequency or processing effects in use. Just that syntax exists.

Current alternatives to the generative approach to dealing with language still, as far as I can tell, attempt to argue that syntactic phenomena can be reduced to some kind of stochastic effect, or to some kind of extra-linguistic cognitive semantic structures, or to both. This paper attempts to look at the kinds of arguments that Chomsky gave back in the 1950s and to examine whether the last 60 years have given us any evidence that the far more powerful stochastic and/or cognitive semantic systems now available can do the job, and eliminate syntax. I guess most people that know me will be unsurprised by my conclusion: even the jazziest up-to-the-minute neural net processors that Google uses still don’t come close to doing what a 3 year old child does, and even appealing to rich cognitive structures of the sort that there is good evidence for from cognitive psychology misses a trick when trying to explain even the simplest syntactic facts. I look at recent work by Tal Linzen and colleagues that shows that neural net learners may mimic some aspects of syntactic hierarchy, but fail to capture the syntactic dependencies that are sensitive to such structure. I then reprise and extend an argument that Peter Svenonius and I gave a few years back about bound variable pronouns.

One area where I do signal a disagreement with the Chomsky of 60 years ago is in the semantics of grammatical categories. Chomsky argued that these lack semantics, but, since my PhD thesis back in the early 1990s, I’ve been arguing that grammatical categories have interpretations. Here I try to show that the order of Merge of these categories is a side effect not of their interpretations, but of whether the kind of computational task they are put to is more easily handled with one order or the other.

The idea goes like this (excerpted from section 4 of the paper).

“Take an example like the following:

(20) a. Those three green balls

b. *Those green three balls

As is well known, the order of the demonstrative, numeral and descriptive adjective in a noun phrase follow quite specific typological patterns arguing for a hierarchy where the adjective occurs closest to the noun, the numeral occurs further away and the demonstrative is most distant (Greenberg 1963, Cinque 2005). Why should this be? It seems implausible for this phenomenon to appeal to a mereological semantic structure. I’d like to propose a different way of thinking about this that relies on the way that a purely autonomous syntax interfaces with the systems of thought. Imagine we have a bowl which has red and green ping pong balls in it. Assume a task (a non-linguistic task) which is to identify a particular group of three green balls. Two computations will allow success in this task:

(21) a. select all the green balls

b. take all subsets of three of the output of (a)

c. identify one such subset.

(22) a. take all subsets of three balls

b. for each subset, select only those that have green balls in them

c. identify one such subset

Both of these computations achieve the desired result. However, there is clearly a difference in the complexity of each. The second computation requires holding in memory a multidimensional array of all the subsets of three balls, and then computing which of these subsets involve only green balls.

The first simply separates out all the green balls, and then takes a much smaller partitioning of these into subsets involving three. So applying the semantic function of colour before that of counting is a less resource intensive computation. Of course, this kind of computation is not specific to colour—the same argument can be made for many of the kinds of properties of items that are encoded by intersective and subsective adjectives.

If such an approach can be generalized, then there is no need to fix the order of adjectival vs. numeral modifiers in the noun phrase as part of an autonomous system. It is the interface between a computational system that delivers a hierarchy, and the use to which that system is put in an independent computational task of identifying referents, plus a principle that favours systems that minimize computation, that leads to the final organization. The syntax reifies the simpler computation via a hierarchy of categories.

This means that one need not stipulate the order in UG, nor, in fact, derive the order from the input. The content and hierarchical sequence of the elements in the syntax is delivered by the interface between two distinct systems. This can take place over developmental timescales, and is, of course, likely to be reinforced by the linguistic input, though not determined by it.

Orders that are not isomorphic to the easiest computations are allowed by UG, but are pruned away during development because the system ossifies the simpler computation. Such an explanation relies on a generative system that provides the structure which the semantic systems fill with content.

The full ordering of the content of elements in a syntactic hierarchy presumably involves a multiplicity of sub ordering effects, some due to differences in what variable is being elaborated as in Ramchand and Svenonius’s proposal, others, if my sketch of an approach to the noun phrase is correct, due to an overall minimizing of the computation of the use of the structure in referring, describing, presenting etc. In this approach, the job of the core syntactic principles is to create structures which have an unbounded hierarchical depth and which are composed of discrete elements combined in particular ways. But the job of populating these structures with content is delegated to how they interface with other systems.”
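
Here is a rough rendering of computations (21) and (22) in Python (my own sketch, under the simplifying assumption that “identify a subset” just means enumerate the candidates): it shows how much smaller the search space is when colour is applied before counting.

    from itertools import combinations

    # Ten balls in the bowl: five green, five red. The labels are invented for the sketch.
    balls = [(f"ball{i}", "green" if i < 5 else "red") for i in range(10)]

    # (21): select the green balls first, then take subsets of three of those.
    green = [b for b in balls if b[1] == "green"]
    subsets_21 = list(combinations(green, 3))

    # (22): take all subsets of three balls, then keep only the all-green ones.
    all_triples = list(combinations(balls, 3))
    subsets_22 = [s for s in all_triples if all(b[1] == "green" for b in s)]

    print(len(subsets_21), "candidate subsets built the first way")    # 10
    print(len(all_triples), "subsets held in memory the second way")   # 120
    assert set(subsets_21) == set(subsets_22)   # same final answer, very different cost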

The rest of the paper goes on to argue that even though the content of the categories that syntax works with may very well come from language-external systems, how they are coopted by the linguistic system, and which content is so coopted, still means that there is strong autonomy of syntax.

The paper, which is to appear in a volume marking the 60th anniversary of the publication of Syntactic Structures, is on Lingbuzz here.

A Menagerie of Merges

I’ve been railing on for a while about this issue, but have just finished a brief paper which I’ve Lingbuzzed, so thought it deserved a blogette. My fundamental concern is about the relationship between restrictiveness and simplicity in syntactic theory. An easy means of restricting the yield of a generative system is to place extra conditions on its operation with the result that the system as a whole becomes more complex. Simplifying a system typically involves reducing or removing these extra conditions, potentially leading to a loss of restrictiveness.

Chomsky’s introduction of the operation Merge, and the unification of displacement and structure building operations that it accomplishes, was a marked step forward in terms of simplifying the structure building component of generative grammar. But the simplicity of the standard inductive definition of syntactic objects that incorporates Merge has opened up a vast range of novel derivational types. Recent years have seen, for example, derivations that involve rollup head movement, head-movement to specifier followed by morphological merger (Matushansky), rollup phrasal movement (Koopman, Sportiche, Cinque, Svenonius and many others), undermerge (Pesetsky, Yuan), countercyclic tucking-in movements (Richards), countercyclic late Merge (Takahashi, Hulsey, and the MIT crowd in general), and, the topic of this brief paper, sidewards movement, or, equivalently, Parallel Merge (Nunes, Hornstein, Citko, Johnson).

An alternative to adding conditions to a generative system as a means of restricting its outputs is to build the architecture of the system in such a way that it allows only a restricted range of derivational types, that is, to aim for an architecture that embodies the constraints rather than representing them explicitly (cf. Pylyshyn’s Razor). This opens up the possibility of both restricting a system and simplifying it. In my Syntax of Substance book for example, I argued for a system that does not project functional categories as heads, following Brody’s Telescoped Trees idea. This immediately removes derivational types involving certain kinds of head movement from the computational system. Apparent head movement effects have to be, rather, a kind of direct morphologization of syntactic units in certain configurations. No heads means no rollup head movement, no head to specifier movement followed by morphological merger, no `undermerge’ and no parallel merge derivations for head movement (a la Bobaljik and Brown). That same system (Adger 2013) also rules out roll-up phrasal movements via an interaction between the structure building and labelling components of the grammar (essentially, roll-up configurations lead to structures with two complements). It follows that the kinds of roll-up remnant derivations argued for by Kayne and Cinque are ungenerable and the empirical effects they handle must be dealt with otherwise. In all of these cases the concern was to reduce the range of derivational types by constructing a system whose architecture simply does not allow them. Adger 2013 makes the argument that the system presented there is at least no more complex than standard Bare Phrase Structure architectures.

In the draft paper I just posted, I’ve tried to tackle the issue of Sidewards Movement/Parallel Merge derivations, by attributing a memory architecture to Merge. The basic idea, which I presented in my Baggett lectures last year, is to split the workspace into two, mimicking a kind of cache/register structure that we see in the architecture of many computers. One workspace contains the resources for the derivation (I call it the Resource Space) and the other is a smaller (indeed binary) space that is where Merge applies, which I call the Operating Space. So a syntactic derivation essentially involves reading and writing things to and from the Operating Space, where the actual combination takes place.

This architecture makes Parallel Merge derivations impossible, as there is just not enough space/memory in the Operating Space to have the three elements that are needed for such a derivation. This is really just a way of formally making good on Chomsky’s observation that Parallel Merge/Sideways Movement derivations are in some sense ternary.
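
As a very schematic sketch (mine, based only on the description above, not the paper’s formal definitions), here is the two-workspace idea in Python; the class, the method names and the string-valued “syntactic objects” are invented for illustration. Because the Operating Space holds at most two items, a Parallel Merge step, which would need three, cannot even be stated.

    class Derivation:
        def __init__(self, resources):
            self.resource_space = list(resources)   # material available to the derivation
            self.operating_space = []               # at most two slots: where Merge applies

        def read(self, item):
            """Move an item from the Resource Space into the Operating Space."""
            if len(self.operating_space) == 2:
                raise RuntimeError("Operating Space is binary: no room for a third element")
            self.resource_space.remove(item)
            self.operating_space.append(item)

        def merge(self):
            """Combine the two items in the Operating Space and write the result back."""
            a, b = self.operating_space             # requires exactly two items
            self.operating_space = []
            merged = (a, b)
            self.resource_space.append(merged)
            return merged

    d = Derivation(["the", "cat", "slept"])
    d.read("the"); d.read("cat")
    dp = d.merge()                                  # ('the', 'cat')
    d.read(dp); d.read("slept")
    print(d.merge())                                # (('the', 'cat'), 'slept')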

In the paper I define the formal system that has this result, and argue that it makes sense of the fact that the two gaps in a parasitic gap construction do not behave identically in their interpretation, extending some old observations of Alan Munn’s. But the main point is really to try to reduce the range of derivational types, and hence increase the restrictiveness of the system, without explicitly constraining the computational operations themselves. The extra complexity, such as it is, is actually a means of simplifying or economising memory in the computational system.

The paper is here.


How on earth do they do it? An extra-terrestrial view of Language

With all the interesting discussions about alien languages emerging from the excellent film Arrival, I thought I’d reblog  this piece I wrote for E-Magazine in 2004 (can you reblog something which has never been blogged?). It was aimed at getting A-level students of English Language to think a little differently about syntax and phonology.


There you are, in the library, studying for your English Language A Level. You’ve done all the interesting bits: language and gender, sociolinguistics, discourse, and now it’s come to phonology and syntax. Why do I have to study this, you think, as your eyes begin to droop, and your head begins to nod and you begin to dream of a galaxy far, far away …

FROM: The Director, Gargoplex Institute, Alpha Centauri
TO: Chief Exo-Scientist Jenl Itstre
VIA: Direct Telepathocrystal Network
RE: New Mission

New discovery made on planet 3, system 3.387. Species (designation Yu-Man), highly successful within biosphere, developed intelligence (level 2), culture (level 2.1) and technology (level 2.2), yet apparently no telepathic ability. Raises severe theoretical problems for doctrine 6.8 of Gargoplexic Code: Intelligence Culture Technology (ICT) count above level 2 requires communication; sophisticated communication is possible only via telepathy; therefore level 2 intelligence requires telepathy.

Council is worried about ethical questions relating to experimentation on non-telepathic beings, and about apparent support the existence of Yu-Mans gives for Anti-Vivisectionist Movement. Please investigate immediately.

FROM: C.E.S Jenl Itstre
TO: Director, GI, Alpha Centauri
VIA: DTN
RE: re: New Mission – Report 1
Have entered orbit. Confirm intelligence, culture and technology levels. Species seems to have highly developed communication abilities. No telepathy observed. Hypothesis: communication system is simple symbolic, associating one external sign with one thought (cf. Report ‘Octihydras of Glarg: 7,203 tentacular positions for 7,203 distinct messages’. Octihydras classified below level 2 on general ICT count.). Perhaps Yu-Mans simply have many external signs? Will determine nature of external physical signs relevant to species. No tentacles observed.

FROM: C.E.S Jenl Itstre
TO: Director, GI Alpha Centauri
VIA: DTN 6.8.9
RE: re: New Mission – Report 2
Exciting new discovery. Like many other species on planet 3, species Yu-Man’s physical manifestation of thought is neither visual nor olfactory, as is usual for lower species across the galaxy (cf. Report ‘Lower Xenomorph Communication’, subsection 3.3.45). Unbelievably, it involves instead the manipulation of orifices for oxygen intake and food intake to create pressure waves in the air. Such systems have been hypothesised before, but it has always been assumed that creatures would find it too difficult to extract the relevant air pressure modulations from general noise and that such physical signs would thereby be impossible to use as the basis of communication systems (see Itstre and Itstre ‘On the impossibility of sound based communication’. Report GI).

RE: re: New Mission – Report 3
Troubling discovery. As is well known, simple symbolic systems are capable of only having a finite number of possible messages (Istre and Grofr ‘Finiteness and Communication’, GIM Monographs, AC1). Species Yu-Man have no such limitation – they can communicate an apparently infinite number of sophisticated thoughts between each other without telepathy, apparently just by physically changing the air pressure around them. First hypothesis must be rejected. This gets more perplexing by the hour.

RE: re: New Mission – Report 4
Psychological profile completed: Yu-Mans have complex thoughts in usual multi-dimensional, non-linear form. Communication profile: all members of this species (and none others in the biosphere as far as can be determined) have this curious ability to communicate an infinite number of thoughts without telepathy. Suggestion: Yu-Mans somehow map from their thought structures into physical structures directly so that air pressure modulations directly mirror the structure of thought. No, this can’t be correct. Air pressure structures cannot bear such a load of information. Am not thinking clearly. Obviously turning native.

 

RE: re: New Mission – Report 5
New evidence which is most perplexing: I have established that Yu-Mans can extract patterns from different air structures as hypothesized in previous communications. Some of these patterns clearly relate to thoughts, in a simple symbolic way. For example, a special pattern of sound waves (somewhat abstracted) links the symbol ‘planet-3’ to the right thought and so on. But there is also something we have never seen before. Connecting the symbols in different ways allows Yu-Mans to create complex patterns of symbols that mirror the structure of the relevant thought. It turns out that for some groups of Yu-Mans the order in which the symbols come matters. So even though the pattern ‘the sun orbits planet-3’ has exactly the same symbols as ‘planet-3 orbits the sun’, the fact the parts come in a different order means that it relates to a different thought. Even more peculiar, not all combinations of symbols are possible. For the same group ‘orbits planet-3 the sun’ has no corresponding thought. How on Alpha-Centauri do they manage?

RE: re: New Mission – Report 6
Have new inspiration: Yu-Mans have some sort of an internal symbol manipulation device, part of their psychological makeup, just as telepathic abilities are part of our psychology. Perhaps the usual telepathic development mutated in this species. (I hope it wasn’t as a result of the Interstellar Radiation Dump we set up 20,000 years back in nearby Cygnus 3!) My idea is that this internal symbol manipulation device can be used to build complicated structures, which mirror the structure of thought well enough to be used for communication purposes. Bizarre, I know. Need advice.

FROM: Director, GI Alpha Centauri
TO: CES Jenl Itstre
RE: previous report
Intriguing: you are suggesting that Yu-Mans can manipulate symbols in their minds to mirror the structure of thoughts and then they turn these same symbols into movements of their breathing and food intake orifices which eventually create puffs of air in the surrounding atmosphere! And you are suggesting further that other Yu-Mans can sense these different air-puffs, turn them back into mental symbols and then use that to work out what the original thought was. Amazing abilities, but I can see how telepathic communication could be mimicked in this way. How do you propose to conceptualise these processes?

FROM: CES Jenl Itstre
RE: Hypothesis 2
Following Concept Processing Protocols I propose the following: Yu-Mans’ ability to manipulate symbols to approximate the structure of their thoughts will be syntax, and their capacity to turn these symbols into physical instructions for their mouths and lungs will be phonology.

FROM: Director, GI Alpha Centauri
TO: CES Jenl Istre
Make the study of syntax and phonology your prime concern. It is clearly the key to the communication that makes these Yu-Mans so successful in their non-telepathic environment. It’s also perhaps a good route to understanding the structure of Yu-Mans’ thought processes and the ways in which they turn these thoughts into actions. Repeat: make the study of syntax and phonology your prime …

… ow! As your head hits the table, you wake up. Hmmm, so syntax is about mirroring the structure of thought itself, and phonology involves the amazing ability to glean meanings from how the air moves around you. No wonder they make us study this stuff.