David Adger

Queen Mary University of London

David Adger is a Professor of Linguistics at Queen Mary University of London. He is primarily interested in the human capacity for syntax, the cognitive system that underlies the patterns found in the grammar of human languages. The core questions are: What creates the patterns? How do they relate to meaning on the one hand, and sound on the other? What governs the range of variation in the patterns? Answering these questions allows us to tackle the issue of what the nature of the syntactic system is.

Interview

OASIS: Thanks for sitting down with us. What's your favorite ontological entity and why?

DA: I had a big dinnertime argument about this with Peter Svenonius and Artemis Alexiadou and a bunch of other people after a conference up in Tromsø once. And I just thought I was saying normal things about what syntactic objects are, and it turns out that they both disagreed with what I was saying. They were insistent that syntactic objects are essentially symbols, computed over by a computational device, while I thought of them as objects, built by a device. A symbol needs to represent something else, right? And I couldn't get out of them what they thought it represented.

I know there are philosophers out there, for example, the philosopher Georges Rey, who thinks that linguistic entities are definitely symbols and represent things. But I don’t think those linguistic entities are symbolic or representational. 

OASIS: So we're talking about syntactic features. 

DA: Syntactic objects, really. Not the symbols in the theory that represent those objects, but the objects themselves. Those things, some people think of them as symbols, which I don't understand. Philosophers like Rey think they represent some kind of abstract object. I don't think linguists believe this when they really think about it. I don't think Chomsky believes it, for example.

I don't think the notion of representation is what's going on with the structure for John walks.

That doesn't represent anything. Sure, it has interpretations, but it's not by virtue of it being a symbol. It's just a structure with no inherent representational nature to it. It only gets its connection to other things via some system that says, ah, I'm going to make you mean this.

OASIS: Yeah, I mean, compositional semantics, right?

DA: Yeah, semantics, or phonology or sign phonology or whatever - just a system that can interpret the syntactic object. But as far as the actual things that the theory is about, they are, I think, non-symbolic. And I think that means that the theory of them therefore has to be a constructive theory, that tells you how to construct structures.

OASIS: If I were to use the word instructions, are they instructions? I mean, are features instructions?

DA: Well, there are only instructions inasmuch as there is a device which can interpret them as instructions. They're just a kind of thing, I think. So ontologically, they're just the kind of thing that can be manipulated by internal systems to create larger things, and they can be interpreted by systems external to that internal system, but still mind-internal, which might interpret them as instructions.

But imagine there was no such system. Imagine that there was no phonology and imagine there was no semantics. I think you could still have a syntax of bits that would just put the bits together. I always just understood that this is what we were doing. We're just building structures that are accessed by systems of form and meaning.

OASIS: What hangs on this question? 

DA: There was an interview with Chomsky where he had said something like "well, you know, we just use sets to build syntactic structures with, because we understand sets, even if they're abstract entities and they're probably not in the human mind and don't really make sense". And I shared the intuition that it seems unlikely we have set-like objects in our heads. So what do we have in nature? We know that there are fundamental bits that can be built into structures. These structures don't have the properties of a set; they're more like a molecule or something like that, with relationships between the elements in the molecule. So can we use this to think of something that's more realistic as a mental object?

And so that's what took me on to thinking of this notion of parthood as a fundamental relationship between elements, so that when you build something complex, you build a part-whole construction rather than an abstract set-theoretic construction.

If you look at the history of logic, early on people were really unhappy with sets as a way of building maths, because of the paradoxes. There are solutions to those, but one way people went in the early days (Leśniewski, and Goodman and Leonard) was by building mereological logics, based on the part-whole relationship rather than the set-membership relationship. And of course mereological ideas have been really big in semantics and very useful for us. The reason no one ever thought of it for syntax is because it gives you an undifferentiated structure. But say we differentiate the structure in a minimal way, what can we get out of that?

And so the system that I built up [in my upcoming book, MIT Press] is one that does that, and it gives you essentially an extended projection complement relation as one way of being a part, and what you might think of as a specifier (though it's not quite a specifier) as another way of being a part, so you have differentiated part-whole structures. That seemed to me to be intuitively less weird than sets for syntactic objects. It makes them look awfully like trees. You can build them in a similar way to how Chomsky's Merge works: you take two things and get a part-whole relationship between them rather than a new object. And then you can get Internal Merge exactly the same way that Chomsky did, applying that operation to two things, one of which is part of another.

So yeah, fundamentally I think that part of the motivation for me for my new approach was sort of ontological. It was like, sets are not great for syntax.
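[To make the contrast with sets a little more concrete, here is a toy Python sketch of what a differentiated part-whole structure might look like as a data structure. It is an illustration only, not the formalism of the book: the relation labels "comp" and "spec", and the crude derivation of John walks, are assumptions made purely for the example.]

```python
# A toy sketch, NOT the book's actual formalism: syntactic objects as
# differentiated part-whole structures rather than sets. The relation names
# "comp" (complement/extended-projection-style parthood) and "spec"
# (specifier-style parthood) are labels assumed purely for illustration.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SynObj:
    label: str
    # each part is recorded together with the *way* in which it is a part
    parts: List[Tuple[str, "SynObj"]] = field(default_factory=list)

    def is_part_of(self, whole: "SynObj") -> bool:
        # parthood, taken transitively
        return any(p is self or self.is_part_of(p) for _, p in whole.parts)


def merge(host: SynObj, dependent: SynObj, how: str) -> SynObj:
    # External-Merge analogue: rather than forming a new set {host, dependent},
    # make `dependent` a part of `host` in a particular way ("comp" or "spec").
    host.parts.append((how, dependent))
    return host


def internal_merge(whole: SynObj, part: SynObj, how: str) -> SynObj:
    # Internal-Merge analogue: re-relate something that is already a part of
    # the whole, making it a part again under a (possibly different) relation.
    assert part.is_part_of(whole), "internal merge needs an existing part"
    whole.parts.append((how, part))
    return whole


# "John walks", very crudely: the verbal element takes the subject as a part,
# a tense-like element takes the verbal structure as a part, and then the
# subject is internally merged at the top.
john, walk, tense = SynObj("John"), SynObj("walk"), SynObj("T")
vp = merge(walk, john, "spec")
tp = merge(tense, vp, "comp")
internal_merge(tp, john, "spec")
print(john.is_part_of(tp))  # True: John is a part of the whole, in two ways
```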

OASIS: So you've also answered my next question, "What is everyone wrong about?"

DA: Once when I was chatting with Chris Collins about the notion of copy in syntactic theory he was like, you must remember, David, we're doing formal language theory. And I thought, not really! We're building a constructive system that licenses an infinite number of objects, rather than doing formal language theory, and that makes a material difference in how we think about the properties of those objects. In the case at hand it was about what copies are, which I think gets muddled when we think of them as computational symbols as opposed to actual objects.

OASIS: It feels like asking mathematicians if numbers really exist.

DA: Yeah, a little bit. But I take these things to exist as part of our minds. So the way I like to think of the syntactic object that the semantic and phonological systems can interpret as instructions is like a gesture, like this [wiggles his fingers]. I've just made a bunch of gestures with my hand. The potentiality is in my hand. It's limited by the anatomy of my hand. And obviously it's not quite like a generative system because it has that limit, but I think a sentence, or rather the structure underlying one, is like a gesture, but of the mind.

OASIS: That's a nice metaphor. To finish up, did you have any thoughts about the significance of LLMs? 

DA: One of the really fascinating things about LLMs is how different they are to humans in doing things that are so similar to what humans do. I think I talked about this in my 2019 popsci book [Language Unlimited]—I was interested in a newspaper story that had come out a few years before, where computers were set up to essentially communicate with each other via a language-like system. So a little bit like an LLM generating something with English input, without human interference. And so they kind of started to communicate the things they needed to communicate to each other qua computational units. And what they did started off looking quite like English, because that was what they were fed. But their posts had little errors in them, which then fed into that process, and quite quickly you got things that looked totally non-human! And it's really fascinating because, of course, their interests and the tasks they were doing are quite non-human, since they can do things in ways that humans are really very bad at, right? So they just developed their own set of conventions and it looked all very weird.

If you look at far more recent LLMs, I mean, it's really fascinating in so many ways, but we still see something a bit similar. One of the things I think is really interesting about LLMs is that the generalizations they make are good ones—well, we don't really know what those are, but they appear to deliver the same output as humans—but they are getting to those generalizations in a way that’s quite different to what we do.

So there's that really interesting work by Tal Linzen and others where they look at attraction errors, you know, like "the keys to the cabinet are" versus "the keys to the cabinet is". And they show that a particular kind of LLM—LSTMs, I think they are—give rise to these attraction errors in a way that's not massively dissimilar to humans, but what conditions the errors appears to be quite distinct. The LSTMs are sensitive to the linear distance between the NP that triggers the error and the NP that should trigger the right version, whereas humans seem to be sensitive to structural distance. So if you embed the NP inside a PP, for example, you get a different outcome for the two, right? You get these different effects, even though you have the same actual output.
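[For readers who want the contrast spelled out: below is a toy Python sketch of linear versus structural distance for the attraction example. The bracketing and the distance measures are simplifications assumed here for illustration, not the materials or metrics of the studies being discussed.]

```python
# A toy illustration (a simplification, not the setup or metrics from
# Linzen and colleagues' actual studies) of linear vs. structural distance
# between the attractor noun and the verb in "the keys to the cabinet are".

sentence = ["the", "keys", "to", "the", "cabinet", "are"]

# Linear distance: how many positions separate the attractor from the verb.
linear_distance = sentence.index("are") - sentence.index("cabinet")  # = 1

# Structural distance: a toy count of how many phrase nodes dominate a word,
# given a hand-written bracketing where the attractor sits inside a PP.
structure = ("S",
             ("NP", "keys",
              ("PP", "to",
               ("NP", "cabinet"))),
             ("V", "are"))


def depth_of(node, word, depth=0):
    # return how many nodes dominate `word` in the toy bracketed structure
    if isinstance(node, str):
        return depth if node == word else None
    for child in node[1:]:
        found = depth_of(child, word, depth + 1)
        if found is not None:
            return found
    return None


print(linear_distance)                  # 1: the attractor is right next to the verb
print(depth_of(structure, "cabinet"))   # 4: buried under NP, PP, NP and S
print(depth_of(structure, "keys"))      # 2: the head noun is much closer structurally
```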

And if we take that further, you can imagine that LLMs are going to be way better than we are at lots of language tasks, right? I mean, Google NotebookLM, you can just stick a paper in there and it'll immediately tell you what it's about and even generate a podcast for it. We couldn't do that with such efficiency—LLMs are literally better at that language-related task than we are. So that's quite interesting. What are the differences between what language models are capable of doing via generalization over massive data sets and what we are capable of doing?

And also connected to that is the fact that the requirements that humans have to learn a language, and the requirements that an LLM has, are quite different. They really need billions of bytes of data, whereas we need a lot less than that to learn a language as a kid.

I think a lot of this debate around "oh, linguistics is now rubbish because we can do everything with LLMs" misses a difference in perspective. People who say this are seeing language as E-language, a set of texts over which you can make generalizations, not as I-language, the internal cognitive system. LLM technology has done something really interesting with E-language, way more interesting than I ever thought you'd see, and way more useful than I ever thought you'd see. So that's absolutely something that's worth paying attention to. But I think it's doing something really pretty different from what we're doing.

OASIS: If semantics is about the relationships humans make between language and the world—well, there's no world with LLMs.

DA: There's a fascinating thing about humans that's come to prominence more recently, which is that much of our world has become an online world. And then there is almost no actual world. Truth is social, in a kind of Trumpian fashion, right? Suddenly the world, for many people, is not their interaction with the physical but with the digital, so it's a vast network of propositions, not experiences. And that's why AIs hallucinate, right? That's all they have access to too.

OASIS: Yeah, except we're not in the matrix, right? We're not looking at the falling green lines, we're having experiences.

DA: Yeah. And even totally online people still come with human psychology. I mean, you're right. Obviously, we live in the world, I'm drinking tea, we're chatting, right? But my drinking tea? I kind of know what I'm doing. Are you an AI?

OASIS: I don't know, maybe we're actually brains in vats? But we have to soldier on as if we're not!

DA: So I think that the way forward with that is a maintenance of multiple critical perspectives on whatever you're doing, right? We can probably get a better view by being critical about all of these kinds of things and figuring out for ourselves rather than just going down that kind of rather barren skepticism about reality.

OASIS: Let's assume we're not brains in vats and we'll see you in York in January, looking forward!

DA: Thanks, I'm looking forward to it too. 

Other invited speakers

Michelle Sheehan

Newcastle University

Phillip Wolff

Emory University