Author Archives: davidadger

What Invented Languages Can Tell Us About Human Language

(Reblogged from Psychology Today)

Hash yer dothrae chek asshekh?

This is how you ask someone how they are in Dothraki, one of the languages David Peterson invented for the phenomenally successful Game of Thrones series. It is an idiom in Peterson’s constructed language, meaning, roughly “Do you ride well today?” It captures the importance of horse-riding to the imaginary warriors of the land of Essos in the series.

Invented, or constructed, languages are definitely coming into their own. When I was a young nerd, about the age of eleven or twelve, I used to make up languages. Not very good ones, as I knew almost nothing then about how languages work. I just invented words that I thought looked cool on the page (lots of xs and qs!), and used these in place of English words.

I wasn’t the only person who did this. J.R.R. Tolkien wrote an essay with the alluring title of A Secret Vice, all about his love of creating languages: their words, their sounds, their grammar and their history. The internet, meanwhile, has created whole communities of ‘conlangers’, sharing their love of invented languages, as Arika Okrent documents in her book In the Land of Invented Languages.

For the teenage me, what was fascinating was how creating a language opened up worlds of the imagination, letting me build my own worlds. I guess it’s not surprising that I eventually ended up doing a PhD in Linguistics. Those early experiments with inventing languages made me want to understand how real languages work. So I stopped creating my own languages and, over the last three decades, researched how Gaelic, Kiowa, Hawaiian, Kiitharaka, and many other languages work.

A few years back, however, I was asked by a TV producer to create some languages for a TV series, Beowulf, and that reinvigorated my interest in something I hadn’t done since my early 20s. It also made me realise that thinking about how an invented language could work actually helps us to tackle some quite deep questions in both linguistics and in the psychology of language.

To see this, let me invent a small language for you now. 

This is how you say “Here’s a cat.” in this language:

              Huna lo.

and this is how you say “The cat is big.”

              Huna shin mek.

This is how you say “Here’s a kitten.”

              Tehili lo.

Ok. Your turn. How do you say “The kitten is big.”?

Easy enough, right? It’s:

              Tehili shin mek.

You’ve spotted a pattern, and generalized that pattern to a new meaning.

[Image (David Adger): The cat’s tail is big]

Ok, now we can get a little bit more complicated. To say that the cat’s kitten is big, you say

              Tehili ga huna shin mek.

That’s it. Now let’s see how well you’ve learned the language. If I tell you that the word for “tail” is loik, how do you think you’d say: “The cat’s tail is big.”?

Well, if “The cat’s kitten is big” is Tehili ga huna shin mek, you might guess:

              Loik ga huna shin mek

Well done (or Mizi mashi as they say in this language!). You’ve learned the words. You’ve also learned some of the grammar of the language: where to put the words. We’re going to push that a little further, and I’ll show you how inventing a language like this can cast interesting facts about human languages into new light.

The fragments of the constructed language you’ve learned so far have come from seeing the patterns between sound (well, actually written words) and meaning. You learned that cat is huna and kitten is tehili by seeing them side by side in sentences meaning “Here’s a cat.” and “Here’s a kitten.”. You learned that the possessive meaning between cat and kitten (or cat and tail) is signified by putting the word for what is possessed first, followed by the word ga, then the word for the possessor.  

This is a little like how linguists begin to find out how a language that is new to them works. I’ve learned how many languages work in this way: by consulting with native speakers, finding out the basic words, seeing how the speaker expresses whole sentences, and figuring out what the patterns are that connect the words and the meanings. This technique allows you to discover how a language functions: what its sounds and words are, and how the words come together to make up the meanings of sentences. 

Now, how do you think you’d say: “The cat’s kitten’s tail is big.”?

You’d probably guess that it would be:

              Loik ga tehili ga huna shin mek.

The reasoning works a little like this: if the cat’s kitten is tehili ga huna, and the kitten is the possessor of the tail, then the cat’s kitten’s tail should be loik ga tehili ga huna. Similarly, if the word for “tip” is mahia, then you should be able to say

              Mahia ga loik ga tehili ga huna shin mek.

That’s a pretty reasonable assumption. In fact, it’s how many languages work. But say I tell you that there’s a rule in my invented language: there’s a maximum of two gas allowed. So you can say “The cat’s kitten’s tail is big.”, but you can’t say “The cat’s kitten’s tail’s tip is big.” My language imposes a numerical limit. Two is ok, but three is just not allowed. 

Would you be surprised to know that we don’t know of a single real language in the whole world that works like this? Languages just don’t use specific numbers in their grammatical rules.
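To make the point concrete, here is a minimal sketch of the possessive rule as a toy generator, using the vocabulary introduced above. The `max_ga` parameter is my own device for contrasting the invented language’s counted rule with the unbounded rule that real languages use; nothing here comes from an actual grammar.

```python
# Toy generator for the invented language's possessive pattern:
# possessed + "ga" + possessor, applied recursively.

def possessive(words, max_ga=None):
    """Build a possessive chain from a list of nouns, outermost possessed first.
    max_ga=None mimics real languages (no numeric limit on the rule);
    an integer mimics the invented language's counted restriction."""
    n_ga = len(words) - 1
    if max_ga is not None and n_ga > max_ga:
        raise ValueError(f"rule allows at most {max_ga} 'ga', needed {n_ga}")
    return " ga ".join(words)

# "The cat's kitten's tail" = tail ga kitten ga cat:
print(possessive(["loik", "tehili", "huna"]))   # loik ga tehili ga huna
# With no limit, the chain extends freely ("the cat's kitten's tail's tip"):
print(possessive(["mahia", "loik", "tehili", "huna"]))
# The invented language's two-ga rule rejects that longer chain:
# possessive(["mahia", "loik", "tehili", "huna"], max_ga=2)  raises ValueError
```

The unbounded version is trivial to write; the counted version needs an extra piece of machinery (the counter), which is exactly the kind of machinery no attested language seems to use.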

I’ve used this invented language to show you what real languages don’t ever do. I’ll come back in the next blog to how we can use invented languages to understand what real languages can’t do.

We can see the same property of language in other areas too. Think of the child’s nursery rhyme about Jack’s house. It starts off with This is the house that Jack built. In this sentence we’re talking about a house, and we’re saying something about it: Jack built it. The dramatic tension then builds up, and we meet … the malt!

              This is the malt that lay in the house that Jack built.

Now we’re talking about malt (which is what you get when you soak grain, let it germinate, then quickly dry it with hot air). We’re saying something about the malt: it lay in the house that Jack built. We’ve said something about the house (Jack built it) and something about the malt (it lay in the house). English allows us to combine all this into one sentence. If English were like my invented language, we’d stop. There would be a restriction that you can’t do this more than twice, so the poor rat, who comes next in the story, would go hungry.

              This is the rat that ate the malt that lay in the house that Jack built.

But English doesn’t work like my invented language. In English, we can keep on doing this same grammatical trick, eventually ending up with the whole story, using one sentence.

              This is the farmer sowing his corn, 

              That kept the cock that crow’d in the morn, 

              That waked the priest all shaven and shorn,

              That married the man all tatter’d and torn, 

              That kissed the maiden all forlorn, 

              That milk’d the cow with the crumpled horn,

              That tossed the dog, 

              That worried the cat, 

              That killed the rat, 

              That ate the malt 

              That lay in the house that Jack built.

This is actually a very strange, and very persistent property of all human languages we know of: the rules of language can’t count to specific numbers. Once you do something once, you either stop, or you can go on without limit.
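The rhyme’s trick can be sketched as a small recursive procedure, just as an illustration of the “once, or without limit” point; the function and its clause list are my own framing, not anything from the rhyme’s grammar.

```python
def jack_rhyme(clauses):
    """Nest relative clauses around "the house that Jack built".
    Each pair is (verb phrase, noun), innermost first. The loop applies the
    same grammatical operation each time -- note there is no counter anywhere
    limiting how many times it can apply."""
    sentence = "the house that Jack built"
    for verb_phrase, noun in clauses:
        sentence = f"the {noun} that {verb_phrase} {sentence}"
    return "This is " + sentence + "."

print(jack_rhyme([]))  # This is the house that Jack built.
print(jack_rhyme([("lay in", "malt")]))
print(jack_rhyme([("lay in", "malt"), ("ate", "rat"), ("killed", "cat")]))
# ...and so on, up through the farmer sowing his corn, without limit.
```

A rule in the style of my invented language would need an explicit check like `if len(clauses) > 2: reject`; no human language appears to have one.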

What makes this particularly intriguing is that there are other psychological abilities that are restricted to particular numbers. For example, humans (and other animals) can immediately determine the exact number of small amounts of things, up to 4 in fact. If you see a picture with either two or three dots, randomly distributed, you know immediately the exact number of dots without counting. In contrast, if you see a picture with five or six dots randomly distributed, you actually have to count them to know the exact number.

The ability to immediately know the exact number of a small collection of things is called subitizing. Psychologists have shown that we do it in seeing, hearing and feeling things. We can immediately perceive the number if it’s under 4, but not if it’s over. In fact, even people who have certain kinds of brain damage that makes counting impossible for them still have the ability to subitize: they know immediately how many objects they are perceiving, as long as it’s fewer than 4.

But languages don’t do this. Some languages do restrict a rule so it can only apply once, but if it can apply more than once, it can apply an unlimited number of times.

This property makes language quite distinct from many other areas of our mental lives. It also raises an interesting question about how our minds generalize experience when it comes to language.

A child acquiring language will rarely hear more than two possessors, as I document in my forthcoming book Language Unlimited, following work by Avery Andrews. Why then do children not simply construct a rule based on what they experience? Why don’t at least some of them decide that the language they are learning limits the number of possessors to two, or three, like my invented language does?

Children’s ability to subitize should provide them with a psychological ability to use as a limit. They hear a maximum of three possessors, so why don’t they decide their language only allows three possessors? But children don’t do this, and no language we know of has such a limit.

Though our languages are unlimited, our minds, somewhat paradoxically, are tightly constrained in how they generalize from our experiences as we learn language as infants. This suggests that the human mind is structured in advance of experience, and that it generalizes from different experiences differently, an idea which goes against a prevailing view in neuroscience: that we apply the same kind of generalizing capacity to all of our experiences.

Inventing Languages – how to teach linguistics to school students

Last week was a busy week at Queen Mary Linguistics. Coppe van Urk and I ran a week-long summer school aimed at Year 10 students from schools in East and South London on Constructing a Language. We were brilliantly assisted by two student ambassadors (Dina and Sharika) who, although their degrees are in literature rather than linguistics, are clearly linguists at heart! We spent about 20 hours with the students, and Sharika and Dina gave them a break from us and took them for lunch. The idea behind the summer school, which was funded as part of Queen Mary’s Widening Participation scheme, was to introduce some linguistics into the experience of school students.

In the summer school, we talked about sounds (phonetics), syllable structures (phonology), how words change for grammatical number and tense (morphology), and word order, agreement and case (syntax). We did this mainly through showing the students examples of invented languages (Tolkien’s Sindarin, Peterson’s Dothraki, Okrand’s Klingon, my own Warig, Nolan’s Parseltongue, and various others). Coppe and I had to do some quick fieldwork on these languages (using the internet as our consultant!) to get examples of the kinds of sounds and structures we were after. The very first day saw the students creating a cacophony of uvular stops, gargling on velars, and hissing out pharyngeal fricatives. One spooky, and somewhat spine-chilling, moment was the entire class, in chorus, eerily whispering Harry Potter’s Parseltongue injunction to the snake attacking Seamus:

saihaʕassi ħeθ haʃeaʕassa ʃiʔ

leave.2sg.erg him go.2sg.abs away

“Leave him! Go away!”

During the ensuing five days, the students invented their own sound systems and syllable structures, their own morphological and syntactic rules. As well as giving them examples from Constructed Languages, we also snuck in examples of natural languages which did weird things (paucals, remote pasts, rare word orders, highly complex (polysynthetic)  word structures). Francis Nolan, Professor of Phonetics at Cambridge, and inventor of Parseltongue, gave us a special guest lecture on his experiences of creating the language for the Harry Potter films, and how he snuck a lot of interesting linguistics into it (we got to see Praat diagrams of a snake language!). In addition to all this, Daniel Harbour, another colleague at Queen Mary, did a special session on how writing systems develop, and the students came up with their own systems of writing for their languages.

The work that the students did was amazing. We had languages with only VC(C) syllable structures, including phonological rules to delete initial vowels under certain circumstances; writing systems designed to match the technology and history of the speakers (including ox-plough (boustrophedon) systems that zigzagged back and forth across the page); languages where word order varied depending on the gender of the speaker; partial infixed reduplication for paucal with full reduplication for plural; writing systems adapted to be maximally efficient in how to represent reduplication (the students loved reduplication!); circumfixal tense marking with incorporated directionals; independent tense markers appearing initially in verb-initial orders, and a whole ton of other, linguistically extremely cool, features. The most impressive aspect of this, for me at least, was just how creative and engaged the students were in taking quite abstract concepts and using them to invent their language.

For me, and for Coppe, the week was exhausting, but hugely worthwhile. I was really inspired to see what the students could do, and it made me realise more clearly than ever that linguistics, often thought of as remote, abstract, and forbidding, can be a subject that school students can engage with. For your delectation, here are the posters that the students made for their languages.




Syntax: still autonomous after all these years!

Another day, another paper. This time a rumination on Chomsky’s Syntactic Structures arguments about the autonomy of syntax. I think, despite Fritz Newmeyer’s excellent attempts to clear this issue up over many years, it’s still reflexively misunderstood by many people outside of generative grammar. Chomsky’s claim that syntax is autonomous is really just a claim that there is syntax. Not that there’s not semantics intimately connected to that syntax. Not that syntactic structures aren’t susceptible to frequency or processing effects in use. Just that syntax exists.

Current alternatives to the generative approach to dealing with language still, as far as I can tell, attempt to argue that syntactic phenomena can be reduced to some kind of stochastic effect, or to some kind of extra-linguistic cognitive semantic structures, or to both. This paper attempts to look at the kinds of arguments that Chomsky gave back in the 1950s and to examine whether the last 60 years have given us any evidence that the far more powerful stochastic and/or cognitive semantic systems now available can do the job, and eliminate syntax. I guess most people who know me will be unsurprised by my conclusion: even the jazziest up-to-the-minute neural net processors that Google uses still don’t come close to doing what a 3-year-old child does, and even appealing to rich cognitive structures of the sort that there is good evidence for from cognitive psychology misses a trick when trying to explain even the simplest syntactic facts. I look at recent work by Tal Linzen and colleagues that shows that neural net learners may mimic some aspects of syntactic hierarchy, but fail to capture the syntactic dependencies that are sensitive to such structure. I then reprise and extend an argument that Peter Svenonius and I gave a few years back about bound variable pronouns.

One area where I do signal a disagreement with the Chomsky of 60 years ago is in the semantics of grammatical categories. Chomsky argued that these lack semantics, but, since my PhD thesis back in the early 1990s, I’ve been arguing that grammatical categories have interpretations. Here I try to show that the order of Merge of these categories is a side effect not of their interpretations, but of whether the kind of computational task they are put to is more easily handled with one order or the other.

The idea goes like this (excerpted from section 4 of the paper).

“Take an example like the following:

(20) a. Those three green balls

b. *Those green three balls

As is well known, the order of the demonstrative, numeral and descriptive adjective in a noun phrase follow quite specific typological patterns arguing for a hierarchy where the adjective occurs closest to the noun, the numeral occurs further away and the demonstrative is most distant (Greenberg 1963, Cinque 2005). Why should this be? It seems implausible for this phenomenon to appeal to a mereological semantic structure. I’d like to propose a different way of thinking about this that relies on the way that a purely autonomous syntax interfaces with the systems of thought. Imagine we have a bowl which has red and green ping pong balls in it. Assume a task (a non-linguistic task) which is to identify a particular group of three green balls. Two computations will allow success in this task:

(21) a. select all the green balls

b. take all subsets of three of the output of (a)

c. identify one such subset.

(22) a. take all subsets of three balls

b. for each subset, select only those that have green balls in them

c. identify one such subset

Both of these computations achieve the desired result. However, there is clearly a difference in the complexity of each. The second computation requires holding in memory a multidimensional array of all the subsets of three balls, and then computing which of these subsets involve only green balls.

The first simply separates out all the green balls, and then takes a much smaller partitioning of these into subsets involving three. So applying the semantic function of colour before that of counting is a less resource intensive computation. Of course, this kind of computation is not specific to colour—the same argument can be made for many of the kinds of properties of items that are encoded by intersective and subsective adjectives.

If such an approach can be generalized, then there is no need to fix the order of adjectival vs. numeral modifiers in the noun phrase as part of an autonomous system. It is the interface between a computational system that delivers a hierarchy, and the use to which that system is put in an independent computational task of identifying referents, plus a principle that favours systems that minimize computation, that leads to the final organization. The syntax reifies the simpler computation via a hierarchy of categories.

This means that one need not stipulate the order in UG, nor, in fact, derive the order from the input. The content and hierarchical sequence of the elements in the syntax is delivered by the interface between two distinct systems. This can take place over developmental timescales, and is, of course, likely to be reinforced by the linguistic input, though not determined by it.

Orders that are not isomorphic to the easiest computations are allowed by UG, but are pruned away during development because the system ossifies the simpler computation. Such an explanation relies on a generative system that provides the structure which the semantic systems fill with content.

The full ordering of the content of elements in a syntactic hierarchy presumably involves a multiplicity of sub ordering effects, some due to differences in what variable is being elaborated as in Ramchand and Svenonius’s proposal, others, if my sketch of an approach to the noun phrase is correct, due to an overall minimizing of the computation of the use of the structure in referring, describing, presenting etc. In this approach, the job of the core syntactic principles is to create structures which have an unbounded hierarchical depth and which are composed of discrete elements combined in particular ways. But the job of populating these structures with content is delegated to how they interface with other systems.”
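The difference in cost between computations (21) and (22) in the excerpt can be made concrete with a quick sketch. The contents of the bowl here (six green and six red balls) are illustrative numbers of my own choosing, not from the paper.

```python
from itertools import combinations

# Hypothetical bowl: 6 green and 6 red ping pong balls.
balls = ["green"] * 6 + ["red"] * 6

# Computation (21): select all the green balls first,
# then take subsets of three of that smaller set.
greens = [i for i, b in enumerate(balls) if b == "green"]
subsets_21 = list(combinations(greens, 3))

# Computation (22): take all subsets of three balls first,
# then keep only the all-green ones.
subsets_22_candidates = list(combinations(range(len(balls)), 3))
subsets_22 = [s for s in subsets_22_candidates
              if all(balls[i] == "green" for i in s)]

# Both routes end with the same 20 candidate subsets, but (22) had to
# hold 220 subsets in memory before filtering, versus 20 for (21).
print(len(subsets_21))             # 20
print(len(subsets_22_candidates))  # 220
print(len(subsets_22))             # 20
```

Applying the colour filter before the counting step prunes the search space early, which is the sense in which ordering colour below the numeral is the less resource-intensive computation.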

The rest of the paper goes on to argue that even though the content of the categories that syntax works with may very well come from language-external systems, how they are coopted by the linguistic system, and which content is so coopted, still means that there is strong autonomy of syntax.

The paper, which is to appear in a volume marking the 60th anniversary of the publication of Syntactic Structures, is on Lingbuzz here.

A Menagerie of Merges

I’ve been railing on for a while about this issue, but have just finished a brief paper which I’ve Lingbuzzed, so thought it deserved a blogette. My fundamental concern is about the relationship between restrictiveness and simplicity in syntactic theory. An easy means of restricting the yield of a generative system is to place extra conditions on its operation with the result that the system as a whole becomes more complex. Simplifying a system typically involves reducing or removing these extra conditions, potentially leading to a loss of restrictiveness.

Chomsky’s introduction of the operation Merge, and the unification of displacement and structure building operations that it accomplishes, was a marked step forward in terms of simplifying the structure building component of generative grammar. But the simplicity of the standard inductive definition of syntactic objects that incorporates Merge has opened up a vast range of novel derivational types. Recent years have seen, for example, derivations that involve rollup head movement, head-movement to specifier followed by morphological merger (Matushansky), rollup phrasal movement (Koopman, Sportiche, Cinque, Svenonius and many others), undermerge (Pesetsky, Yuan), countercyclic tucking-in movements (Richards), countercyclic late Merge (Takahashi, Hulsey, and the MIT crowd in general), and, the topic of this brief paper, sidewards movement, or, equivalently, Parallel Merge (Nunes, Hornstein, Citko, Johnson).

An alternative to adding conditions to a generative system as a means of restricting its outputs is to build the architecture of the system in such a way that it allows only a restricted range of derivational types, that is, to aim for an architecture that embodies the constraints rather than representing them explicitly (cf. Pylyshyn’s Razor). This opens up the possibility of both restricting a system and simplifying it. In my Syntax of Substance book for example, I argued for a system that does not project functional categories as heads, following Brody’s Telescoped Trees idea. This immediately removes derivational types involving certain kinds of head movement from the computational system. Apparent head movement effects have to be, rather, a kind of direct morphologization of syntactic units in certain configurations. No heads means no rollup head movement, no head to specifier movement followed by morphological merger, no `undermerge’ and no parallel merge derivations for head movement (a la Bobaljik and Brown). That same system (Adger 2013) also rules out roll-up phrasal movements via an interaction between the structure building and labelling components of the grammar (essentially, roll-up configurations lead to structures with two complements). It follows that the kinds of roll-up remnant derivations argued for by Kayne and Cinque are ungenerable and the empirical effects they handle must be dealt with otherwise. In all of these cases the concern was to reduce the range of derivational types by constructing a system whose architecture simply does not allow them. Adger 2013 makes the argument that the system presented there is at least no more complex than standard Bare Phrase Structure architectures.

In the draft paper I just posted, I’ve tried to tackle the issue of Sidewards Movement/Parallel Merge derivations, by attributing a memory architecture to Merge. The basic idea, which I presented in my Baggett lectures last year, is to split the workspace into two, mimicking a kind of cache/register structure that we see in the architecture of many computers. One workspace contains the resources for the derivation (I call it the Resource Space) and the other is a smaller (indeed binary) space that is where Merge applies, which I call the Operating Space. So a syntactic derivation essentially involves reading and writing things to and from the Operating Space, where the actual combination takes place.

This architecture makes Parallel Merge derivations impossible, as there is just not enough space/memory in the Operating Space to have the three elements that are needed for such a derivation. This is really just a way of formally making good on Chomsky’s observation that Parallel Merge/Sideways Movement derivations are in some sense ternary.
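A minimal sketch of the idea, with the caveat that the class and method names below are my own labels for the paper’s concepts, and the real proposal is a formal system, not an implementation. The point it illustrates is just the capacity argument: a binary Operating Space has no room for the third element a Parallel Merge derivation needs.

```python
class OperatingSpace:
    """A binary workspace where Merge applies. Syntactic objects are read in
    from a larger Resource Space (not modelled here); the capacity of two is
    what rules out Parallel Merge / sideways movement."""
    CAPACITY = 2

    def __init__(self):
        self.items = []

    def read_in(self, item):
        # Reading in a third independent object overflows the space.
        if len(self.items) >= self.CAPACITY:
            raise MemoryError("Operating Space holds at most two objects")
        self.items.append(item)

    def merge(self):
        # Merge combines exactly the two resident objects into one unit,
        # written back as a single object, freeing a slot for the next step.
        if len(self.items) != 2:
            raise ValueError("Merge needs exactly two objects")
        a, b = self.items
        self.items = [(a, b)]
        return self.items[0]

ops = OperatingSpace()
ops.read_in("eat")
ops.read_in("apples")
print(ops.merge())  # ('eat', 'apples')
# A Parallel Merge step would need three independent elements in place at
# once; a third read_in before merging raises MemoryError instead.
```

Because each Merge writes its output back as a single object, ordinary binary derivations can continue indefinitely, while the ternary configuration is simply unstateable.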

In the paper I define the formal system that has this result, and argue that it makes sense of the fact that the two gaps in a parasitic gap construction do not behave interpretively identically, extending some old observations of Alan Munn’s. But the main point is really to try to reduce the range of derivational types, and hence increase the restrictiveness of the system, without explicitly constraining the computational operations themselves. The extra complexity, such as it is, is actually a means of simplifying or economising memory in the computational system.

The paper is here.



How on earth do they do it? An extra-terrestrial view of Language

With all the interesting discussions about alien languages emerging from the excellent film Arrival, I thought I’d reblog this piece I wrote for E-Magazine in 2004 (can you reblog something which has never been blogged?). It was aimed at getting A-level students of English Language to think a little differently about syntax and phonology.

There you are, in the library, studying for your English Language A Level. You’ve done all the interesting bits: language and gender, sociolinguistics, discourse, and now it’s come to phonology and syntax. Why do I have to study this, you think, as your eyes begin to droop, and your head begins to nod and you begin to dream of a galaxy far, far away …

FROM: The Director, Gargoplex Institute, Alpha Centauri
TO: Chief Exo-Scientist Jenl Itstre
VIA: Direct Telepathocrystal Network
RE: New Mission

New discovery made on planet 3, system 3.387. Species (designation Yu-Man), highly successful within biosphere, developed intelligence (level 2), culture (level 2.1) and technology (level 2.2), yet apparently no telepathic ability. Raises severe theoretical problems for doctrine 6.8 of Gargoplexic Code: Intelligence Culture Technology (ICT) count above level 2 requires communication; sophisticated communication is possible only via telepathy; therefore level 2 intelligence requires telepathy.

Council is worried about ethical questions relating to experimentation on non-telepathic beings, and about the apparent support the existence of Yu-Mans gives to the Anti-Vivisectionist Movement. Please investigate immediately.

FROM: C.E.S Jenl Itstre
TO: Director, GI, Alpha Centauri
RE: re: New Mission – Report 1
Have entered orbit. Confirm intelligence, culture and technology levels. Species seems to have highly developed communication abilities. No telepathy observed. Hypothesis: communication system is simple symbolic, associating one external sign with one thought (cf. Report ‘Octihydras of Glarg: 7,203 tentacular positions for 7,203 distinct messages’. Octihydras classified below level 2 on general ICT count.). Perhaps Yu-Mans simply have many external signs? Will determine nature of external physical signs relevant to species. No tentacles observed.

FROM: C.E.S Jenl Itstre
TO: Director, GI Alpha Centauri
VIA: DTN 6.8.9
RE: re: New Mission – Report 2
Exciting new discovery. Like many other species on planet 3, species Yu-Man’s physical manifestation of thought is neither visual nor olfactory, as is usual for lower species across the galaxy (cf. Report ‘Lower Xenomorph Communication’, subsection 3.3.45). Unbelievably, it involves instead the manipulation of orifices for oxygen intake and food intake to create pressure waves in the air. Such systems have been hypothesised before, but it has always been assumed that creatures would find it too difficult to extract the relevant air pressure modulations from general noise and that such physical signs would thereby be impossible to use as the basis of communication systems (see Itstre and Itstre ‘On the impossibility of sound based communication’. Report GI).

RE: re: New Mission – Report 3
Troubling discovery. As is well known, simple symbolic systems are capable of only having a finite number of possible messages (Istre and Grofr ‘Finiteness and Communication’, GIM Monographs, AC1). Species Yu-Man have no such limitation – they can communicate an apparently infinite number of sophisticated thoughts between each other without telepathy, apparently just by physically changing the air pressure around them. First hypothesis must be rejected. This gets more perplexing by the hour.

RE: re: New Mission – Report 4
Psychological profile completed: Yu-Mans have complex thoughts in usual multi-dimensional, non-linear form. Communication profile: all members of this species (and none others in the biosphere as far as can be determined) have this curious ability to communicate an infinite number of thoughts without telepathy. Suggestion: Yu-Mans somehow map from their thought structures into physical structures directly so that air pressure modulations directly mirror the structure of thought. No, this can’t be correct. Air pressure structures cannot bear such a load of information. Am not thinking clearly. Obviously turning native.


RE: re: New Mission – Report 5
New evidence which is most perplexing: I have established that Yu-Mans can extract patterns from different air structures as hypothesized in previous communications. Some of these patterns clearly relate to thoughts, in a simple symbolic way. For example, a special pattern of sound waves (somewhat abstracted) links the symbol ‘planet-3’ to the right thought and so on. But there is also something we have never seen before. Connecting the symbols in different ways allows Yu-Mans to create complex patterns of symbols that mirror the structure of the relevant thought. It turns out that for some groups of Yu-Mans the order in which the symbols come matters. So even though the pattern ‘the sun orbits planet-3’ has exactly the same symbols as ‘planet-3 orbits the sun’, the fact the parts come in a different order means that it relates to a different thought. Even more peculiar, not all combinations of symbols are possible. For the same group ‘orbits planet-3 the sun’ has no corresponding thought. How on Alpha-Centauri do they manage?

RE: re: New Mission – Report 6
Have new inspiration: Yu-Mans have some sort of an internal symbol manipulation device, part of their psychological makeup, just as telepathic abilities are part of our psychology. Perhaps the usual telepathic development mutated in this species. (I hope it wasn’t as a result of the Interstellar Radiation Dump we set up 20,000 years back in nearby Cygnus 3!) My idea is that this internal symbol manipulation device can be used to build complicated structures, which mirror the structure of thought well enough to be used for communication purposes. Bizarre, I know. Need advice.

FROM: Director, GI Alpha Centauri
TO: CES Jenl Itstre
RE: previous report
Intriguing: you are suggesting that Yu-Mans can manipulate symbols in their minds to mirror the structure of thoughts and then they turn these same symbols into movements of their breathing and food intake orifices which eventually create puffs of air in the surrounding atmosphere! And you are suggesting further that other Yu-Mans can sense these different air-puffs, turn them back into mental symbols and then use that to work out what the original thought was. Amazing abilities, but I can see how telepathic communication could be mimicked in this way. How do you propose to conceptualise these processes?

FROM: CES Jenl Itstre
RE: Hypothesis 2
Following Concept Processing Protocols I propose the following: Yu-Mans’ ability to manipulate symbols to approximate the structure of their thoughts will be syntax, and their capacity to turn these symbols into physical instructions for their mouths and lungs will be phonology.

FROM: Director, GI Alpha Centauri
TO: CES Jenl Itstre
Make the study of syntax and phonology your prime concern. It is clearly the key to the communication that makes these Yu-Mans so successful in their non-telepathic environment. It's also perhaps a good route to understanding the structure of Yu-Mans' thought processes and the ways in which they turn these thoughts into actions. Repeat: make the study of syntax and phonology your prime …

… ow! As your head hits the table, you wake up. Hmmm, so syntax is about mirroring the structure of thought itself, and phonology involves the amazing ability to glean meanings from how the air moves around you. No wonder they make us study this stuff.
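The alien's discovery — that 'the sun orbits planet-3' and 'planet-3 orbits the sun' use exactly the same symbols yet pair with different thoughts, while 'orbits planet-3 the sun' pairs with no thought at all — can be sketched as a toy grammar. Everything here (the symbol inventory, the 'thought' representation) is invented purely for illustration; real syntax is vastly richer than this:

```python
# A toy subject-verb-object grammar for the alien's three-symbol patterns.
# The symbols and the 'thought' tuples are invented for this sketch.

NOUNS = {"planet-3", "the-sun"}
VERBS = {"orbits"}

def thought_for(pattern):
    """Map a sequence of symbols to a simple 'thought',
    or None if the ordering is not licensed by the grammar."""
    if len(pattern) == 3:
        subj, verb, obj = pattern
        if subj in NOUNS and verb in VERBS and obj in NOUNS:
            # Order determines who does what to whom.
            return (verb, ("agent", subj), ("patient", obj))
    return None

print(thought_for(["the-sun", "orbits", "planet-3"]))   # one thought
print(thought_for(["planet-3", "orbits", "the-sun"]))   # same symbols, different thought
print(thought_for(["orbits", "planet-3", "the-sun"]))   # no corresponding thought: None
```

The same symbols in a different order yield a different thought, and some orders yield none — which is exactly the puzzle the report describes.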

How alien can language be?

Update: a longer and more linguistically focussed review of Arrival has appeared in Inference.
Last night I went to an advance screening of Denis Villeneuve's new film Arrival, which is showing as part of the London Film Festival (a perk of being on the committee of the Linguistics Association of Great Britain that was a little unexpected!). Linguistics is central to the film, and it's very well done. Based on a Ted Chiang short story, the film tells of the arrival of enigmatic alien ships on Earth, and the involvement of Louise Banks, a professor of linguistics, in figuring out the aliens' language. It's an intelligent, beautifully designed, and thought-provoking film. And the linguistics in it is a real step above what linguists have come to expect of cinematic portrayals of our discipline (thanks in no small part to Jessica Coon acting as a consultant).

The film turns on the visual language of the heptapods, the name given to the aliens because of their seven tentacular feet. In Chiang's short story, the spoken language looks pretty familiar to Dr Banks; nouns have special markers, similar to the grammatical cases of Latin or German, that signify meaning; there are words, and they seem to come in particular orders depending on what their function is in the grammar of the sentence. But it is the visual language that is at the heart of the story. This language, as presented in the film, is just beautiful; the aliens squirt some kind of squid-like ink into the air which resolves holistically into a presentation of the thought they want to express. It looks like a circular whorl drawn with complex curlicues twisting off the main circumference. The form of the language is not linear in any sense. The whorls emerge simultaneously as wholes. The orientation, shape, modulation, and direction of the tendrils that build the whorls serve to convey the meaningful connections of the parts to the whole. Multiple sentences can be combined into more and more complex forms that, in the film, require GPS-style computer analysis. The atemporality and multidimensionality of the heptapods' written language is a core part of the plot.

So, could a human language work like this, or is that just too alien?
There are two big competing ideas in linguistics about what a language is. One is that it's the outcome of an evolutionary process of expansion and refinement of a basic system of communication. It has evolved culturally, buffeted by the pressures of its use as a communicative and social artefact. Language is a cultural object. The other idea is that a language is what happens when the human mind is faced with certain experiences that it can understand as linguistic. The human mind can't help but build a general system, the language, that allows it to connect speech or sign (or, in Dr Banks' case, alien ink shapes hovering in thin air) to meaning. Language is a cognitive object.
The linguistics of Arrival sits a little uneasily between these poles. For the story, the heptapod language has to be profoundly different from human language. There’s a Sapir-Whorf idea underlying the plot. The language creates, in fact forces, a new way of understanding reality, an understanding that is, until the events of the film, truly alien. But, at the same time, Louise Banks’ linguistic methods have to work, or there is no story, no way of connecting with the heptapods. There has to be a means to segment the whorls, to codify the visual syntax, to connect the forms to the meanings. In fact, to do the kind of fieldwork that linguists do, when they are faced with a new language.
If language were a cultural object, and the heptapods' culture was as alien as one would expect from their technology, appearance and biology, there's no reason to expect Dr Banks could segment, classify and analyse the system using the techniques of linguistics. But she did. If language were a cognitive object, then, assuming the heptapods' mental set-up to be profoundly different from ours, the task of coming up with a theory that could guide the fieldwork in profitable ways would be insurmountable. But Dr Banks generates new whorls to communicate with the aliens, using a kind of iPad-type device to select and combine the convoluted shapes. This means that their language has a syntax, a way of systematically connecting the shapes to the meaning of the whole. But if syntax is an attribute of the human mind, why would we expect the heptapods to have it?
In fact, human language, just like the heptapod language, is multidimensional. Syntax is in a different dimension from the words we speak one after another. It is a mostly invisible scaffolding that holds words together, even though they are pronounced far apart, and keeps them structurally apart, even though they are pronounced together. In a sentence like Louise knew which heptapod the captain of the soldiers likes best, the phrase the soldiers sits right next to likes, yet soldiers is plural while likes is a singular verb (because captain, the head of the subject, is singular). Things pronounced together are grammatically distinct. And which heptapod is of course the object of likes: the captain likes some heptapod. Things which are far apart are, in terms of meaning and structure, closely connected. There's an invisible set of connections, in a different dimension from the linear way the words are pronounced one after the other. Human syntax is just as multidimensional as the heptapods' visual language. The heptapods, and their language, far from being too alien, are, in fact, very human. Why?
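The agreement fact just described can be made concrete in a small sketch. The phrase representation below is invented for illustration: the point is only that the verb's number is checked against the head of the subject phrase, found by walking the hierarchical structure, not against whichever noun happens to sit next to the verb in the linear string:

```python
# A sketch of why 'the captain of the soldiers likes' is grammatical:
# agreement tracks the *head* of the subject phrase, not the nearest noun.
# The dictionary representation of the phrase is invented for this sketch.

def head_number(phrase):
    """Return the grammatical number of a noun phrase's head,
    ignoring nouns buried inside modifiers like 'of the soldiers'."""
    return phrase["head"]["number"]

def agrees(subject, verb_number):
    """Check subject-verb agreement against the structural head."""
    return head_number(subject) == verb_number

# 'the captain of the soldiers': head is singular 'captain',
# with a plural noun inside the 'of'-modifier.
subject = {
    "head": {"word": "captain", "number": "sg"},
    "modifier": {"head": {"word": "soldiers", "number": "pl"}},
}

print(agrees(subject, "sg"))  # True: '... likes' (singular) fits
print(agrees(subject, "pl"))  # False: '... like' fails, despite 'soldiers' sitting next to the verb
```

The check consults the hierarchical structure rather than linear adjacency — the invisible extra dimension described above.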
An answer is available to this question. Perhaps syntax, the system that infinitely extends the abstract connection between form and meaning beyond the simplest cases, is the only solution, cultural or biological, that is consistent with rational thought, and hence, eventually, advanced technology. Across the galaxy, different species can only connect because, independently, we have all evolved syntax.
So, NASA, you know where your next big funding push has to be!

Syntax is not a Custom

I had a brief Twitter exchange with @david_colquhoun the other day. Prof Colquhoun tweeted a response to a UCL press release about how learning something about grammar could be good for school children (a point made by Bas Aarts). Colquhoun's view was that teaching children things about formal grammar was 'daft', and I'm sure he's not alone in this view. When I suggested that learning about a fundamental attribute of human beings was a good thing for children, Colquhoun responded that 'syntax isn't an attribute. It's a custom and it changes'.

I thought this exchange was interesting, though I did find it irksome. A well-respected scientist (a Fellow of the Royal Society), who comes from outside of linguistics, thinks that syntax is trivial enough that it's legitimate to make categorical (and quite incorrect) pronouncements about it. Would he make such pronouncements about, say, palaeolimnology, or astrophysics? Why are we syntacticians doing such a bad job that academics from other fields think they know enough about syntax to say that it's a 'custom'? I don't know, but I want to give a few arguments here that syntax does not derive from culture and is not a custom. The syntax of languages changes, as Colquhoun noted, of course, but it does so in ways that we have been describing quantitatively for years, and that we have some theoretical grasp of. None of that work involves thinking of it as a custom.

In fact, one of the important findings of syntactic research over the last 50 years (if not longer) is that structure doesn't reduce to custom or culture. The idea that the structure of a language is intimately connected to its culture is, I think, quite a common view amongst people who don't study languages. Though it's a stronger position than 'syntax is custom', it is, therefore, worthwhile to address first.

This view abjectly fails. Take how languages ask questions about things. In English, if you bought something, and your partner sees the shopping bag, they can ask What did you buy? There's something funny going on with the syntax here. The part of the sentence that you are asking a question about appears, not where it would go if you weren't asking a question, but right at the front: I bought a book, but Which book did you buy? Linguists call this kind of syntax question-movement (or, more commonly, wh-movement, though that's not such a good term). There are question-movement languages. English is one, so is Inuktitut, so is Mohawk. But there aren't question-movement cultures. There's nothing about the cultures of the Anglo-Saxons and the Mohawks that leads to them having the syntax of question-movement in their languages.

There are also languages that are wh-in-situ. When speakers of these languages ask a question like what did you buy? they say it as you bought what?, leaving the question word in the same place (‘in situ’) as it would be in a declarative statement like you bought a book. This is a property shared by Turkish, Chinese, Malayalam, and many other languages. But again these are not wh-in-situ cultures. There are other languages that adopt a mixed strategy. Indeed, languages with very similar cultures can vary quite wildly in what they do with questions, with some dialects of Italian moving question words and others leaving them in situ.

There is a weaker version of the idea that syntax is custom, which is, I think, what Colquhoun meant. If someone comes to visit you, you tell them 'the custom in London is to stand on the right-hand side of a moving staircase', so they avoid being mown down by irate Londoners. You can see people obeying customs like this, and once you know the custom, you can tell others what it is. But the syntactic rules of a language are not like this. Take movement of question words in English. Now that I've told you what it is, perhaps you could think of it like a custom: you know it when you see it, and you can tell me about it. But that won't work. Though you can say David likes the man you gave the book to, you can't ask a question about the book following this 'custom'. You can't say in English Which book did David like the man you gave to? Why not? There are hidden patterns, discovered by syntactic research over the years, that capture when this syntactic pattern works, and when it doesn't.

Unlike standing on the right, English speakers are neither aware of the syntactic patterns they use, nor can they say what they are. In fact, it required a lot of research to find out how this bit of syntax works, and we now have a good understanding of the abstract generalisations that predict whether a speaker of English will react to a sentence positively or not. That speaker has no idea what these patterns in their use of language are, but syntactic theory can predict them. That's not how customs work. Syntax is not a custom: it's a complex, highly abstract system of rules generating patterns in the sentences we use. Different languages have different syntax because, as children acquire a language, there is to-and-fro between the set-up of their brains (how human brains process the linguistic data they are exposed to and what they do with it) and the linguistic acts of the people around them. Syntax is constrained by the ways our mind works, and it's within these limits that historical change takes place.
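One such hidden pattern can be sketched in miniature. Linguists have found that a fronted question word cannot be linked to a gap inside a relative clause (one of the so-called 'island' constraints); that is the generalisation behind the contrast between Which book did you buy? and the impossible Which book did David like the man you gave to? The bracketed structures and the checker below are invented for this illustration, not a real grammar:

```python
# A toy illustration of one hidden pattern of question-movement:
# the gap left by a fronted question word cannot sit inside a
# relative clause (an 'island'). The structures are invented for this sketch.

def extraction_ok(structure):
    """Return False if a gap sits inside a relative clause, True otherwise."""
    def gaps_licit(node, inside_relative):
        if node == "GAP":
            return not inside_relative  # a gap is fine only outside islands
        if isinstance(node, dict):
            inside = inside_relative or node.get("type") == "relative-clause"
            return all(gaps_licit(child, inside) for child in node.get("parts", []))
        return True  # ordinary words are always fine
    return gaps_licit(structure, False)

# 'Which book did you buy __?' -- gap in a simple clause: fine.
simple = {"type": "clause", "parts": ["you", "buy", "GAP"]}

# 'Which book did David like [the man you gave __ to]?' -- gap inside
# a relative clause: predicted ungrammatical.
island = {"type": "clause",
          "parts": ["David", "like",
                    {"type": "relative-clause",
                     "parts": ["you", "gave", "GAP", "to"]}]}

print(extraction_ok(simple))  # True
print(extraction_ok(island))  # False
```

No English speaker consults a rule like this consciously, which is precisely the point: the generalisation predicts speakers' reactions without their being able to state it.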

Almost all of syntax is like this. There are tiny scraps of it that resemble customs (don't end a sentence with a preposition, etc.), and those tend to be what people know about. But these are close to scientifically trivial, though they act as cultural shibboleths. The rest of syntax goes unnoticed; because we find it so effortless, we are unaware of the rich, abstract, and complex flow of syntax in our language.

The study of syntax is a straightforward scientific enterprise. There are many complex facts and phenomena that you need to do a lot of descriptive research to find. There are many possible hypotheses about what is going on, most of them falsified by new data from observation or experiment. And there are fairly good explanations of many of these in terms of basic theoretical primitives and formalised theories of how they relate and combine. The theories are, without doubt, in a primitive state (maybe around about the level of chemical theory pre-Dalton). Indeed, we are very possibly not even thinking about this stuff in the right way. Nevertheless, research in syntax has been extremely intellectually fertile for years now, revealing many new discoveries about how the grammars of many different languages work, and uncovering broad laws and principles that govern them. There are areas of great controversy, but the basic phenomena, generalisations, and concepts are well researched.

I’ll finish on what the initial tweet was about. Is it a good idea to teach grammar to kids? My own view is that it’s a good idea to teach syntax to kids, looking at many languages, and showing them what some of the basic ideas are. Linguistics, and syntax particularly, is an excellent way to teach the basics of the scientific method. Children can go very quickly from observation to hypothesis to experiment to (dis)confirmation (though less easily to theory, it’s true). All using sentences of their own languages. From this they can learn precision in thinking, the rudiments of science, and, it must be said, some facts about the grammar of the language(s) they speak. None of these is daft.