General Linguistics Must Be Based On Universals (Or Non-Conventional Aspects of Language)
Martin Haspelmath*
https://doi.org/10.1515/tl-2021-2002
Received March 29, 2021; accepted April 22, 2021
1 Introduction
In this paper, I address a core foundational aspect of linguistics: the difference
between the study of a PARTICULAR LANGUAGE (spoken at a particular time by a
particular community) and the study of HUMAN LANGUAGE IN GENERAL. I argue that this
distinction is important both for particular linguistics and for general linguistics,
and I note that it has often been unduly neglected.
*Corresponding author: Martin Haspelmath, Max Planck Institute for Evolutionary Anthropology,
Deutscher Platz 6, 04103 Leipzig, Germany, E-mail: [email protected]
1 I use the unusual spelling “Human Language” (with capitalization) in order to emphasize that
this is a distinct phenomenon from the various particular languages that we can observe and study
directly.
Clearly, what traditional generative linguists have been doing is (2b), (3b), and
(4b), whereas many other linguists have been describing languages without
making any mentalist claims (2a), have been comparing them without any pre-
supposition of specific innate mechanisms (3a), and have proposed functional-
adaptive explanations (4a). But crucially, most linguists would accept that the
other subtypes of theoretical linguistics exist: Traditional generativists accept that
there is such a thing as social norms (calling them or their products “E-language”),
they recognize that one can generalize over languages in the Greenbergian way
research. All of science has both theoretical and empirical aspects, and there is no non-empirical
science. (Non-empirical works, such as the present article, are not really part of science, but of
meta-science, or philosophy of science.)
and find intriguing patterns, and they would not rule out the possibility of
functional-adaptive explanations, even though they may be more interested in
biocognitive explanations (e.g. Jackendoff 2002: §2.5; Newmeyer 1994, 2005).
Likewise, non-generativists recognize the existence of mental grammars (and
those of the Cognitive Linguistics community are even primarily interested in
these), and they of course recognize that many aspects of our species-specific
language capacity (human linguisticality) are biologically determined.6 What
is probably controversial is the existence of biocognitive universals that can be
uncovered by comparing a range of mental grammars, i.e. (3b). But even Holger
Diessel, a vocal advocate of cognitive usage-based linguistics, accepts, for
example, that demonstratives are universal because of a species-specific ability for
communicating joint attention (Diessel 2014). So while some theoretical linguists
might downplay several aspects of the entire enterprise in (2)–(4) and concentrate
on some parts at the expense of others, the overall picture is widely shared.
What is not widely shared, however, is the way in which we talk about these
distinctions, and I think that this is what often leads to confusion. Linguists do not
use the term “general linguistics” very much (any more), let alone “particular
linguistics”, and there is widespread misunderstanding of what divides the
different approaches. Thus, one goal of the present paper is to argue for the
reintroduction of the terms general linguistics and particular linguistics.7
6 Haspelmath (2020a) uses the term linguisticality (on the analogy of musicality) instead of lan-
guage faculty (or capacity for language), in order to make it clear that there is no controversy here
(language faculty has sometimes been used in the sense of a domain-specific biocognitive module,
or an innate blueprint for grammar, ideas which are controversial).
7 To make the distinction even more salient, one may use the abbreviated terms g-linguistics and p-
linguistics. I am not sure whether particular linguistics has been used in English before, but at least
in German, Einzelsprachlinguistik ‘particular-language linguistics’ is an established term, at least
in informal usage. Chomsky (1986: 1) mentioned “particular grammar”, even in a historical
context, so the concept has long been familiar to many linguists.
8 A particularly clear earlier exposition of the notion of general linguistics can be found in Georg
Curtius’s (1862) inaugural lecture at Leipzig University, on the relation between philology and
linguistics. He notes that the various regional disciplines (the study of the languages and cultures
of Greece, of Ancient Rome, of France, etc.) cross-cut the various general disciplines (general art
history, general religious studies, general linguistics, and so on). He also remarks that general
linguistics is more advanced than other general disciplines and that it is important for the study of
particular languages (Curtius 1862: 7).
9 There was a journal of general linguistics in the 19th century (Friedrich Techmer’s Internationale
Zeitschrift für Allgemeine Sprachwissenschaft, 1884–1890), but this was almost forgotten in the 20th
century (see Koerner 1973). (The journal Language was founded even before Lingua, in 1925, but its
mission was originally much broader, and in the first few decades, it mostly published studies on
particular languages.)
“general linguistics” was clearly on its way out.10 But this was not a necessary
consequence of the generative approach. On the one hand, Chomsky (1957) says
very clearly that a linguistic theory may deal with a particular language:11
Syntactic investigation of a given language has as its goal the construction of a grammar that
can be viewed as a device of some sort for producing the sentences of the language under
analysis. (1957: 11)
Foley, James. 1979. Theoretical morphology of the French verb. (Lingvisticæ Investigationes
Supplementa.) Amsterdam/Philadelphia: John Benjamins Publishing Company. 296pp.
Shaw, Patricia A. 1980. Theoretical issues in Dakota phonology and morphology. New York:
Garland Publishing.
Saxon, Leslie. 1986. The syntax of pronouns in Dogrib: Some theoretical consequences.
Doctoral dissertation, University of California at San Diego.

10 The term general linguistics has survived in the names of academic departments and study
programmes in Europe and beyond, and is probably more widely used in non-English contexts
(Russian obščee jazykoznanie, German allgemeine Linguistik, etc.). But Robins (1964) seems to have
been the last prominent English-language book with "general linguistics" in its title.
11 This is also found in more recent textbooks, e.g. "a grammar is a linguist's explicit theory of a
speaker's tacit knowledge of their language" (Adger 2003: 11).
12 When the journal Theoretical Linguistics started in 1974, its founder Helmut Schnelle (1932–
2015) characterized its subject matter as "concerned with the development of theories about
general aspects of particular languages or of language and its uses in general…" (Schnelle 1974: 1).
So like Chomsky, Schnelle envisaged language-particular theories as well as general theories.
It seems that the term “theoretical” in these titles is meant to emphasize that the
language-particular studies are relevant to general linguistic theorizing – so the
term “theoretical” came to mean “general (theoretical)” by the 1980s at the latest.
The situation has not changed much between then and today, though there is less
need to include “theoretical” in paper and book titles nowadays because the idea
that language-particular research should make a contribution to our under-
standing of Human Language is now very widely accepted among theoretical
linguists.13
However, there is a serious, and widely ignored, problem with the view that
language-particular studies can make direct contributions to general linguistics: It
presupposes that grammatical systems are constructed from a rich set of innate
building blocks of universal grammar, and this is a highly contentious idea. I
elaborate on this point in the next section.
Of course, from a biological perspective, we may say that when we observe the
behaviours of chimpanzees, vervet monkeys and humans, we can see some
striking differences in the structure and function of the vocalizations of these
species, so in a very coarse-grained fashion, it is perhaps possible to observe
Human Language, glossing over the differences between particular languages.
People speak, but other animals do not. However, this does not give us much
further insight, because “no one speaks FL [faculty of language]: people either
speak a specific language or they do not speak at all” (Mendívil-Giró 2019: 2). Thus,
linguists normally take a much closer look and study words and sentences with
their meanings, almost all of which are different in different languages.
So linguists need a solution for this paradox, and in this section, I briefly
describe three solutions. The first corresponds to the part that is in parentheses in
the title of this paper, and the other two are different ways in which universals help
us understand Human Language.
Crucially, the study of a particular language cannot contribute directly to
general linguistics without further assumptions, because a particular language
represents historically accidental conventions of a speech community. To be sure,
we can say somewhat trivially that everything that occurs in a particular language
must be possible in Human Language and thus has general relevance. But we
generally want to ask more ambitious questions. The situation is basically the
same in other disciplines that study culturally variable behaviours of human
populations: The study of the history of Mexico cannot directly contribute to
general history, and the study of the economics of China cannot directly contribute
to general economics. These disciplines need something else – namely worldwide
comparative studies – in order to arrive at general conclusions.14 (In the natural
sciences, the situation is different, as will be noted in §6.2.)
The problems that I will highlight in §5 below do not concern those areas of
linguistics where non-conventional behaviours are studied. In particular, general
inferences from reaction times in psycholinguistic experiments, from electro-
physiological experiments in neuroscience, or from slips of the tongue are not
problematic because the aspects of behaviour that are of interest here are not
socially learned, i.e. are not conventional. Of course, the differences between
14 Like general linguistics (cf. §4.1), general economics also sometimes makes use of experimental
methods, as in the subfield of behavioural economics.
An obvious way in which one could learn about Human Language in general is by
comparing languages worldwide and by finding general tendencies in represen-
tative samples of these languages. This is the well-known Greenbergian approach
(Croft 2003; Greenberg 1963; Song 2018), and I do not need to say much more about
it here, except perhaps for reminding the reader that Greenbergian comparison
does not rely on mental grammars, but can be based entirely on the social rules
(cf. (2a) and (3a) above). Thus, there is no need for a "deep" or "true" analysis, but
basically any description that gets the facts right (that is observationally adequate)
is sufficient as a basis for comparison (cf. Haspelmath 2004, 2014). The comparative
concepts on which the comparisons are based are not the same as the descriptive
categories used for describing the languages, so that different languages can be
approached non-aprioristically, i.e. with no preconceived idea of what the possible
categories might be (Haspelmath 2010b; see also §6.3 below).

15 Chomsky (1981: 6) expressed it as follows: "A great deal can be learned about UG [= universal
grammar] from the study of a single language, if such study achieves sufficient depth to put forth
rules or principles that have explanatory force but are underdetermined by evidence available to
the language learner. Then it is reasonable to attribute to UG those aspects of these rules or
principles that are uniformly attained but underdetermined by evidence." (See also Mendívil-Giró
2019 for some recent discussion.)
Explanation in the Greenbergian programme is typically of the functional-
adaptive type: Cross-linguistic tendencies are explained with reference to the
adaptation of language structures to their users’ needs. For example, special re-
flexive pronouns tend to be used when coreference is unexpected to help the
hearers establish the correct reference (Comrie 1999; Haspelmath 2008), and word
order tends to be consistently right-branching or left-branching because this
minimizes constituent recognition domains (Hawkins 2014).16
Finally, we get to a third solution to the general linguistics paradox: We could learn
about Human Language by finding the innate building blocks from which all the
mentally represented grammars are constructed. This is the principles and pa-
rameters programme of Chomsky (1981), most clearly laid out (for a wider audi-
ence) by Baker (2001) (see also Baker 2010; Huang and Roberts 2016; and many
others). D’Alessandro (2019: 10) observes that if there are such innate building
blocks, then
the constraints that are discovered about one language could be used to describe a different
language. This, I think, is the key difference between generative grammar and other linguistic
enterprises, such as typology: while typologists assume that, say, the existence of wh-
movement in English cannot tell us anything about Chinese, generativists assume that this
isn’t the case.
The “innate building blocks” include both architectures (e.g. the distinction be-
tween D-structure and S-structure, between syntactic computation and spell-out,
between Merge and Agree, between level I and level II, between Gen and Eval in
OT) and innate features, categories and constraints (e.g. ±N, ±V, vP, CP, SUBJ, OBJ,
+WH, ±coronal, NOCODA). There is a very rich literature with proposals about the
kinds of entities that might be innate building blocks of this kind, which are
thought to be part of universal grammar. Chomsky (1965) called the architectural
building blocks "formal universals", and the features and categories "substantive
universals" (there were no OT constraints in the 1960s, but these would surely also
fall under substantive universals).

16 I use the term "functional-adaptive explanation" to emphasize that the level of the explanation
is the cultural evolution of human languages in general, not a correspondence between particular
functions of a language and particular forms.
As noted by Baker (2001), this research programme is not unlike the pro-
gramme pursued for chemistry in the 19th century that resulted in Mendeleyev’s
Periodic Table of Elements. Theoretical chemists found that all chemical com-
pounds are built from about 80 to 100 building blocks, and that there are a limited
number of ways in which these can combine to form chemical compounds.
If linguists found that there are just a few dozen (or maybe a few hundred)
features or categories (or OT constraints) that recur across languages and from
which more complex structures can be constructed in a limited number of ways,
then it would indeed be plausible to attribute them to the innate universal
grammar, and they might also solve the problem of language acquisition despite
the poverty of the stimulus (called “Plato’s Problem” by Chomsky 1986).
I call this the NATURAL-KINDS PROGRAMME (or naturalistic programme) for general
linguistics,17 because chemical elements are the primary example of what are
called natural kinds in philosophy.18 Natural kinds are categories or classes of
entities that exist in nature independently of any scientific observation. The cat-
egories are given in advance of observations, and are thus available a priori. In
addition to chemical elements, examples of natural kinds are particles in physics,
and (more controversially) species in biology. Clearly, some aspects of human
behaviour and cognition are given by nature, e.g. the fact that we can distinguish
five basic tastes (sweet, sour, salty, bitter, umami), and maybe that there are six
basic emotions (anger, disgust, fear, happiness, sadness, surprise; see Barrett
2006). So it is readily conceivable that our biology might give us four basic parts of
speech (e.g. noun, verb, adjective, preposition), or three components of grammar
(e.g. phonology, morphology, syntax), or 27 distinctive features for spoken
phonology, or 65 semantic primitives (see also Aronoff 2016). There could also be
hundreds of different optimality-theoretic constraints, just as there are hundreds
of different cell types in the human body. For phonology, linguists are fairly close
to a consensus about what the innate features might be, as can be seen in any
phonology textbook (though even in phonology, there are dissenting voices of
linguists who do not think that the features are universal and innate, e.g. Mielke
2008). In morphology and syntax, there is no such consensus, but it could still be
that the search will ultimately be successful.

17 Another term is the more colourful "Mendeleyevian Vision", which also expresses the fact that
so far, no widely accepted results of this programme exist, so that it remains a vision for the time
being. (And since Baker (2001, 2010) is a prominent proponent of this programme, it can also be
called the "Bakerian programme".)
18 Noam Chomsky has repeatedly called for a "naturalistic approach" to the study of human
language (e.g. Chomsky 1995), which seems to mean more or less what I mean here by the
natural-kinds programme.
However, there is a serious problem: In contrast to the Greenbergian pro-
gramme for cross-linguistic comparison, the natural-kinds programme has no
clear methodology for progressing, and no clear criteria for success.19 There are a
large number of new proposals about the building blocks of the innate grammar
blueprint, but there is little (if any) convergence among them. There is no
agreement about serial versus parallel architectures, lexicalism, DP versus NP,
antisymmetry, phases, cartography, and many other core aspects of grammar.
Those new ideas and generalizations that have been widely accepted belong to
the level of phenomena (D’Alessandro’s 2019 “mid-level generalizations”), not
the explanatory level of innate natural kinds. And of course, the existence of a
large number of domain-specific innate elements in just one species is inherently
unlikely, given the relative recency of our capacity for language (perhaps only
200,000 years old). Moreover, in the 21st century, the natural-kinds programme
has basically been given up by some influential authors, as I will briefly note in
§5.1.
This leads me to the last section of this paper – the question whether it is
possible to do general linguistics without large-scale cross-linguistic comparison
and without a natural-kinds programme. I will end up with a very skeptical answer,
resulting in the claim of the title of this paper: General linguistics must be based on
universals.
19 Huang and Roberts (2016) still advocate the principles and parameters programme, but they
recognize that the original conception from the 1980s (based on macroparameters) was not really
successful. The examples that they give (the head parameter, the null subject parameter, the wh-
movement parameter, the non-configurationality parameter, the polysynthesis parameter, and a
few others) have been largely abandoned by generative syntacticians.
which all grammatical systems are constructed (§4.3). But few linguists study
many languages in a worldwide perspective, and over the last two decades, not
many linguists have explicitly advocated (let alone actively pursued) the Bakerian
natural-kinds programme. The current section highlights these contradictions,
leading to the conclusion (in §7) that the general linguistics paradox can only
be resolved by establishing a coherent comparative methodology (of the non-
aprioristic kind or of the natural-kinds type), and by embedding research on
particular languages in such a methodology. General linguistics must be based on
universals.
What I described as the natural-kinds programme (§4.3) was developed in the last
few decades of the 20th century, and it was widely adopted, even by linguists who
did not agree with Chomsky’s particular proposals for syntactic building blocks in
the 1970s and 1980s. Syntactic frameworks such as Relational Grammar (Blake
1990) and Lexical-Functional Grammar (Bresnan 2001) saw themselves as making
use of universal innate building blocks as well, and the situation in phonology and
morphology has been rather similar. A widespread view has been that linguists
should try to find a descriptive framework that allows us to describe all languages,
but is at the same time restrictive enough to explain language acquisition and the
limits on worldwide linguistic diversity (this describes what I called the RESTRICTIVIST
APPROACH in Haspelmath 2014).
But since Hauser et al. (2002) and Chomsky (2005), this programme has been
basically abandoned by many linguists as an explicit goal. The 21st century
Chomskyan idea is that the innate universal grammar contains only rather minimal
building blocks, perhaps only the operation Merge, which explains why languages
can have recursion. In Chomsky et al. (2019: 230), the authors even say that
“universal grammar” is merely the label for whatever biocognitive differences
there are between humans and other animals. One of the motivations for this much
less ambitious view of what is innate seems to be that a rich set of innate building
blocks has come to be seen as implausible from the perspective of biological
evolution (and thus the abandonment of a rich UG would help solve “Darwin’s
Problem”; cf. Berwick and Chomsky 2016).
So on this view of Human Language, there is no substantial natural-kinds
programme anymore. The general theory is no longer restrictive and cannot
explain observed limits on cross-linguistic variation, because with Merge alone, a
large number of unattested grammars are possible. Now what does this mean for
the practice of the linguist who studies the structures of particular languages? It
Even though the idea of a rich innate blueprint for grammars was given up by
influential authors (§5.1), this is not reflected in the practice of mainstream
generative grammar. Journals such as Linguistic Inquiry, Syntax and Glossa
continue to publish many papers on particular languages that work with a highly
technical metalanguage for describing/analysing the morphosyntactic phenom-
ena of languages. It is quite common for research articles to consist of two parts:
One part lays out the phenomena in a way that is generally comprehensible to any
linguist, and another part (typically called “analysis”) describes the phenomena a
second time, using the highly technical metalanguage of current mainstream
generative grammar (or more rarely, of some other generative approach, such as
Distributed Morphology or Lexical Functional Grammar).
For example, Welch (2016) first describes various conditions for the use of
copulas in Dogrib (a Dene language) in a generally accessible way, and then in his
§6 (“Analysis”) describes the same facts using technical generative vocabulary
such as “merge”, “AspP”, and “φ-agreement”. And Holmberg et al. (2019) describe
a generalization about the interaction of question formation and passivization in
ditransitives in some Germanic and some Bantu languages, and then in their §3
(“Analysis”) describe the same facts again using technical vocabulary such as
“phase”, “specifier” and “ApplP”. Anyone who has a certain amount of experience
in this field will confirm that this is very typical: Studies of particular languages
make use of highly specific concepts that are thought to be universally applicable.
How does this approach contribute to general linguistics? Clearly, these au-
thors build on the assumption of a rich innate grammar blueprint, because
otherwise there would be no reason to have the “analysis” section in addition to
the generally comprehensible description of the phenomena in the first part of
their papers. The “analysis” adds something, because it redescribes the same facts
The P&P model is a very powerful model of both linguistic diversity and language universals.
More specifically, it provides a solution to Plato’s Problem, the logical problem of language
acquisition, in that the otherwise formidable task of language acquisition is reduced to a
matter of parameter-setting. Moreover, it makes predictions about language typology: pa-
rameters make predictions about (possible) language types, as we will see in more detail
[below]. Furthermore, it sets the agenda for research on language change, in that syntactic
change can be seen as parameter change … Finally, it draws research on different languages
together as part of a general enterprise of discovering the precise nature of UG: we can
discover properties of the English grammatical system (a particular set of parameter values)
by investigating Chinese (or any other language), without knowing a word of English at all
(and vice versa, of course).
So despite the fact that the natural-kinds programme has generated few generally
accepted results (as noted by authors such as Newmeyer 2005 and Boeckx 2014,
and as admitted by Baker 2008), these authors continue to pursue this idea, and
they are not alone.
Regardless of how promising it may be to look for innate building blocks, this
is a coherent position (cf. also Dryer 2016: 314). But many other generative linguists
are adopting what appears to be an incoherent stance: They work with the technical
vocabulary and rule notation of mainstream generative grammar, which is
supposed to apply to all languages, but at the same time, they do not endorse the
natural-kinds programme of an innate grammar blueprint. This view is articulated,
for example, by Koeneman and Zeijlstra in a blogpost (https://dlc.hypotheses.org/
1082), where they explain the choices made in their (2017) syntax textbook:
although we build up the theory using technicalities that are adopted from current
minimalism, we do not adhere to or try to persuade students about most of its philosophical or
biological underpinnings, such as innateness claims or conjectures about the biological
function of language
It seems that they think of their complex formal apparatus (which can account for
only a very small part of English grammar) as somehow merely a notation, but they
do not discuss alternative notations, of which there are many that are much simpler
and can describe a much larger part of English.
So one can make sense of what they actually do only if they adopt the traditional
Despite the fact that the analysis of nominals has for a long time received a certain amount of
attention (e.g. Chomsky 1970; Grimshaw 1990), argument realization and case assignment in
the nominal domain have been primarily viewed through the lens of verbal syntax. Here we
analyze nominals on their own terms, proposing a lexicalist, constraint-based approach to
case assignment in Russian nominals, couched within the simpler syntax framework (Culi-
cover and Jackendoff 2005).
Smirnova and Jackendoff only discuss Russian, and they simply presuppose that a
discussion of Russian nominalizations must be relevant to the English phenomena
discussed earlier by Chomsky and Grimshaw, which is clearly the case only if
“nominalization” is somehow part of the innate grammar blueprint. If it were not,
then it could be that Russian is entirely irrelevant to understanding English. Or it
could be that it is only historically relevant, because the similarities are due to a
shared history (either inheritance or borrowing via French). Since there is no
consensus about these matters in the discipline, the authors would have to spell
out their assumptions in terms of natural kinds in order to be understood also by
linguists who do not share these assumptions. The reviewers of the journal
apparently did not deem this necessary, presumably because the traditional (20th
century) generative position is still an implicit default position.22
So the answer to the question at the beginning of this section is: General
linguistics can be based on a single language if one adopts the natural-kinds
Describing each language entirely on its own terms is a noble and galvanizing task, but unless
grammarians orient their findings to what typologists know about the world’s other lan-
guages, their grammars can all too easily become obscure, crabbed and solipsistic.
The tension between a kind of description that is faithful to the categories of each
language and a description that makes the language appear relevant to general
concerns is just a special case of the “general linguistics paradox” that we saw
earlier in (5). The solution must consist of two parts.
First, language description is true to the categories of each language, but is
inspired by the accumulated knowledge of comparative linguistics (Haspelmath
2020b: §4). So if there is an affix on the verb that is very similar to an applicative or
an evidential, it should be given this label, rather than some other idiosyncratic
label. But at the same time, it cannot be assumed that what we know about other
languages (or about Human Language) determines the language-particular
One colleague objected that it is odd to say that comparative grammar in the
Greenbergian tradition is based on “social conventions”, and that reaction time
experiments involve “non-conventional aspects of language”. And indeed, lin-
guists do not often talk like this, because there is a strong mentalist bias in the field.
We talk about acceptability judgements as if they involved “introspection” into our
mental grammars, whereas what we actually do is assess the social acceptability of
a possible sentence (in terms of linguistic norms). While it is of course true that our
knowledge of the social conventions must be mentally represented, the conven-
tions themselves are “social facts”, and when we describe a language, we describe
it as a social rule system. This is particularly clear in the case of child multilin-
gualism, where one of the tasks that the child faces is to link sets of linguistic
conventions to sets of social situations. And since all languages have different
registers, monolinguals are not in a very different situation. Thus, all grammatical
regularities and all the words and meanings of a language are its conventional
aspects. The two main sources of data for studying conventional aspects of lan-
guage are corpora and elicitation (including self-elicitation). Until a few decades
Regarding the “general linguistics paradox” of §4, one colleague objected that they
do not see “how this is any more of a paradox than scientists in other fields face”.
For example, biologists want to explore and understand the nature of bat echo-
location or crab locomotion, and what they can observe directly is only particular
bats or crabs.
But how is it that biologists can generalize from a single bat to all members of
the species? If species are natural kinds (as I said in Haspelmath 2018: 90,
admittedly simplifying matters), then the answer is simple: By their nature, all bats
and all crabs have the same essential properties, so by studying one specimen, one
learns about the entire species. This view of biological species is the basis of
taxonomists’ practice of attaching scientific names to “types”, i.e. specific specimens kept in a research collection.23 This approach is possible because the properties of different specimens do not vary by historical accident the way that the observable properties of languages vary. The same goes for other disciplines that work with natural kinds such as chemistry and physics, but there are also natural sciences that study their phenomena using comparative concepts just like linguistics (e.g. the study of clouds in meteorology, or the study of topographic features in geomorphology).

23 At least since Ernst Mayr’s widely known proposals about essentialist versus population thinking (e.g. Mayr 1959), many biologists have emphasized the population nature of species. Of course, to the extent that species do not share essential features, they are more like languages, and it is not enough to describe a single specimen. In effect, if we adopt a radical population view of species (as sets of specimens with no essential shared properties), we must study them in the same way as languages, by creating comparative concepts and trying to find universals.
While language structures are often similar, each language is structurally unique
(e.g. Haspelmath 2020b). Phoneme systems carve up the phonetic space in
different ways, and semantic systems are often different even in closely related
languages (cf. German fahren vs. gehen, which work quite differently from English
drive vs. go). Syntactic classes are often strikingly different, as can be seen in the
very different behaviour of English auxiliaries and German auxiliaries, in the very
different behaviour of Polish person clitics and French person clitics, and the very
different classifications of Arabic gender (masculine vs. feminine) and Swedish
gender (neuter vs. non-neuter). Perhaps most notoriously, what is meant by the
syntactic term “subject” differs from language to language in confusing ways. In
English, even an expletive like there can behave as a “subject” (cf. I believe there to
be two unicorns in the garage), and in Icelandic, a “subject” can be in the dative case,
while this is never so with Latin “subjects”. There are usually enough similarities
between different languages to make it tempting (and in some sense useful) to
reuse the same terms (e.g. “auxiliary” both for English modal auxiliaries and for
German tense auxiliaries), but the categories are really defined by their language-
particular structural behaviour (e.g. the lack of non-finite forms of English modal
auxiliaries), not by instantiating some general (aprioristic) category.
These differences mean that language systems are incommensurable, so that
making them comparable requires extra effort. In most cases, we cannot simply
translate from one language to the next by substituting different morphs. The most
straightforward way of making languages comparable is by creating comparative
concepts to which the structural elements of each language are mapped. For the
case of fahren/gehen/drive/go, we can start with the comparative concepts ‘go’, ‘go by car’, ‘go by vehicle’, and ‘go on foot’. The simple equation of German fahren with
English drive fails because the latter means ‘go by car’, whereas fahren is also used
for going by bicycle or by train. And while gehen seems to correspond to go, it can
actually only be used for going on foot. So while there are clear similarities here, we
need extra concepts to describe how the languages are similar or different. The
same goes for phonological categories, where we need phonetics-based comparative concepts, and for syntactic categories, where our comparative concepts must
be based on a combination of semantic and phonetic concepts. For example, an
ergative case marker is a marker that occurs on a nominal expressing a transitive
agent but not on a nominal expressing an intransitive agent (this must be based on
a clear comparative definition of “(in)transitive”, see Haspelmath 2011).
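The mapping from language-particular items to comparative concepts can be made concrete in a small computational sketch. The following Python toy is my illustration, not part of the comparative method itself, and the verb meanings are drastically simplified; it encodes the German and English motion verbs discussed above against a grid of comparative concepts:

```python
# Toy illustration: comparative concepts as uniform yardsticks.
# Each language-particular verb is mapped to the set of (simplified)
# situation types it can express; the sets, not the verbs themselves,
# are what can be compared across languages.

# Language-particular verbs mapped onto the comparative grid
verbs = {
    ("German", "gehen"):  {"go on foot"},
    ("German", "fahren"): {"go by car", "go by bicycle", "go by train"},
    ("English", "go"):    {"go on foot", "go by car", "go by bicycle", "go by train"},
    ("English", "drive"): {"go by car"},
}

def same_extension(verb1, verb2):
    """True if two verbs cover exactly the same comparative concepts."""
    return verbs[verb1] == verbs[verb2]

# fahren and drive are not simple equivalents ...
print(same_extension(("German", "fahren"), ("English", "drive")))  # False
# ... although they overlap on one comparative concept:
print(verbs[("German", "fahren")] & verbs[("English", "drive")])   # {'go by car'}
```

On this view, cross-linguistic statements quantify over the shared grid of concepts, while the verbs themselves remain defined by their language-particular contrasts.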
So comparative concepts are necessarily based on phonetic and semantic
substance that is independent of language-particular structural distinctions, while
language-particular categories are based on contrasts, not on substance. It has
long been recognized, for example, that phonemes are not defined by their pho-
netic properties, but by their place in the system of oppositions. But we still want to
compare phoneme systems, which must be done in terms of the phonetic features
of the inventories.
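The same logic can be sketched for phoneme systems. In the following toy Python fragment (again my illustration; the two inventories and their feature assignments are hypothetical and drastically simplified), each phoneme is annotated with substance-based phonetic features, so that two inventories can be compared even though each phoneme is defined language-internally by its contrasts:

```python
# Toy illustration: comparing phoneme inventories via phonetic features.
# Comparison asks which substance-based features an inventory
# makes use of at all, not how its internal oppositions are organized.

inventory_a = {  # hypothetical language A (voicing contrast in plosives)
    "p": frozenset({"bilabial", "plosive", "voiceless"}),
    "b": frozenset({"bilabial", "plosive", "voiced"}),
    "t": frozenset({"alveolar", "plosive", "voiceless"}),
}

inventory_b = {  # hypothetical language B (no voicing contrast)
    "p": frozenset({"bilabial", "plosive", "voiceless"}),
    "t": frozenset({"alveolar", "plosive", "voiceless"}),
    "k": frozenset({"velar", "plosive", "voiceless"}),
}

def uses_feature(inventory, feature):
    """Does any phoneme in the inventory realize the given phonetic feature?"""
    return any(feature in feats for feats in inventory.values())

print(uses_feature(inventory_a, "voiced"))  # True: A exploits voicing
print(uses_feature(inventory_b, "voiced"))  # False: B carves up the space differently
```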
One colleague asked how – if there are no innate categories – we can decide on
what is a particle, or an adposition, or a subject, or a reflexive pronoun. And I agree
with them that “this is the number one problem that needs to be addressed”.
Indeed, if we want to compare language structures in the world’s languages and
find universals, we need to have uniform “yardsticks” for comparison, analogous
to measurement in other sciences (cf. Round and Corbett (2020) on the “measurement” metaphor). In Greenberg’s (1963) pioneering work, this issue was
mostly left aside, and Greenberg limited himself to saying that he was employing
semantic criteria in identifying phenomena like “subject”, “verb” and “genitive
construction” (1963: 59).
The issue of cross-linguistic comparability was not widely discussed until
Dryer (1997) and Croft (2001) pointed out that language-particular categories are
defined by language-particular criteria and thus cannot be compared directly.
Since Haspelmath (2010b), the idea of defining comparative terms carefully in such
a way that they can be applied to all languages uniformly has been steadily gaining
ground. If a category were innate and thus given a priori, it would be reasonable to
think that it is identified by different criteria in different languages, but if not, we
need to provide definitions that focus on phonetic and semantic substance and
make no reference to language-particular structures.
For example, an adpossessive construction (Greenberg’s term was “genitive
construction”) is defined as a nominal with a possessed noun and an adpossessor
nominal or person index where the possessor referent is in an ownership, kinship,
or part-whole relation to the referent of the possessed noun (e.g. Koptjevskaja-
Tamm 2001). This fairly complex definition presupposes other terms that are
themselves not straightforward (noun, nominal, person index) and that need to be
defined carefully in turn. At each stage, it must be ensured that the definitions do
not contain concepts that cannot be applied to all languages uniformly. For
example, as noted in Haspelmath (2008: 43), the binding conditions of Chomsky
(1981) cannot be tested in languages worldwide with a measurement approach
because there are no generally applicable definitions of terms like “anaphor” and
“pronominal”, and concepts like “c-command” presuppose specific analyses,
which cannot be arrived at in an objective manner. For Chomsky’s approach at the
time, this was not a problem, because he assumed that his notions were innate
building blocks, so they did not have to be defined in such a way that the definition
can be applied to all languages uniformly (and it becomes possible to use different
criteria for different languages; Haspelmath 2018: §7). But for the Greenbergian
approach, the requirements are different, because language comparison is based
on uniform measurements of grammatical patterns (cf. 3a), not on true analyses of
mental grammars (cf. 2b). The descriptive meta-language is not assumed to be an
innate framework, so it cannot simultaneously serve for cross-linguistic comparison and for explanation (Bickel 2015: §2).
In the earlier sections I have repeatedly talked about a “rich set of innate building
blocks”, or a “rich UG”, but why should this be relevant here? Isn’t it an empirical
question how rich our innate knowledge of grammar is?
The reason this is relevant is that, de facto, many linguists assume a very rich set of innate building blocks: each time they apply a category that was motivated for one language to another language, this category must be assumed to be innate. For
example, as D’Alessandro (2019) notes (in the quotation of §4.3), wh-movement in
English is thought to be informative for Chinese, because a notion such as
“question pronoun”, as well as a notion such as “movement”, is thought to be
innately given. And the same applies to many other building blocks (CP, ±coronal,
anaphor, AUX, specifier, and so on) which are routinely made use of in technical
analyses (for example, Welch (2016) uses notions like AspP, Asp’, TP, spell-out,
[PERSON], which must be assumed to be innate).
As noted earlier (§5.1), some influential authors have suggested that perhaps
much less is innate than has traditionally been thought, and it has specifically
been suggested that substantive features are not innate (e.g. Hornstein 2018). This
is what I would call “minimal UG”, and the more minimal it is, the more it is
compatible with (or even indistinguishable from) the Boasian/Greenbergian
approach (as I noted in §2, everyone accepts the existence of biocognitive pre-
requisites for language). For this reason, it is not sufficient to contrast “UG” versus
“non-UG” approaches. It is specifically the rich-UG approach of Chomsky (1981),
Baker (2001), and Huang and Roberts (2016) that allows one to combine p-linguistics with g-linguistics, and that can be seen as a competitor of the Greenbergian approach.24
In §5.1, I also said that the 21st century minimalist view of what is innate is “less
ambitious” than the traditional generative approach, and I was asked by a
colleague whether it wasn’t the other way round: “It is more ambitious to assume
fewer innate constraints and still derive the same results.” Now that would
certainly be the case, but de facto, leading 21st century generativists have largely
given up on the Bakerian parametric programme (cf. Boeckx 2014). They do not
derive the same results that authors like Roberts, Cinque and Baker hoped to derive
from innate parameters and principles. A lot of 21st century work has been
studying particular languages (e.g. Adger et al. 2009; Holmberg et al. 2019;
Pesetsky 2013; Welch 2016), but the kinds of explanatorily ambitious proposals
that were characteristic of the 1980s and 1990s seem to be largely absent. Thus, by
eliminating “richness” of UG, generative grammarians have also tended to reduce
the explanatory scope of their analyses. There is no clear contribution to general
linguistics in this kind of work.
24 It is true, as one colleague observed, that if UG is “too rich”, it will likewise allow almost any
language, just like a UG that is impoverished. This can be seen in Optimality Theory, where many
authors posited very specific constraints, thus reducing the restrictiveness of the approach. The
challenge of the Bakerian programme is to have a set of innate building blocks that allows just
those languages that are actually attested. Like the “impoverished-UG” approach, such a “superrich-UG” approach is hard to distinguish from the non-aprioristic approach (and indeed, some
OT phonologists have basically become functionalists, e.g. Hayes et al. 2004).
general hypothesis and then look for confirming or disconfirming evidence, or one
may start in a bottom-up way with a survey of the phenomena. In practice, most
comparative work represents a mixture of these styles, but it is indeed the case that
some linguists spend more energy on top-down proposals, while others spend
more energy on bottom-up research. This contrast has also sometimes been called
“deductive” versus “inductive”, and some people may want to describe it as
“theoretical” versus “empirical” (see note 5). For example, the book series
Empirical Approaches to Language Typology (De Gruyter, 65 volumes since 1987)
presumably intends “empirical” in the bottom-up sense. This represents a different
dimension from the “theoretical-applied” dimension that I focus on in this paper,
and crucially, it is not relevant to language-particular analyses (the kind of
research that the vast majority of linguists are concerned with). P-linguistics is not
more or less “theoretical” or “empirical”; it is only more or less “general” (either
focussing on general implications, or leaving these aside).
7 Conclusion
To conclude, let me reiterate the three main points of this paper:
First, there is an important distinction between general and theoretical linguistics (§2). The non-applied study of a particular language (“descriptive
linguistics”) is no less theoretical than the study of Human Language. And it is not
immediately clear how one could learn about Human Language in general by
studying a particular language (§4). Particular languages are to a large extent
historically accidental cultural attributes of human populations, and they vary
enormously just like other aspects of cultures. This problem is what I called “the
general linguistics paradox”.
Second, there are two ways in which one can solve this paradox (if we leave
aside the study of non-conventional aspects of language and language use): One
can apply a hypothesized set of innate building blocks to particular languages
(the “natural-kinds programme”), or one can study a wide range of languages on
the basis of uniform yardsticks of comparison. The former is associated with
Chomsky’s traditional generative grammar ideas between the 1960s and 1980s,
and the latter became prominent with Greenberg’s work of the 1960s and 1970s
(§4.2–3).
And third, I pointed out that while ordinary working linguists most typically study particular languages, particular linguistics (“p-linguistics”) no longer has the prestige that it had in the 19th and early 20th centuries, so there is a strong incentive to make one’s work relevant to general linguistics. But this move is confronted with difficulties: For the natural-kinds approach, the difficulty lies in the fact that
this programme was given up by some leading authors in the 21st century (§5.1).
And for the Greenbergian approach, the difficulty lies in the fact that worldwide
comparison of languages has revealed a great diversity of categories, so that the
categories of description cannot be used for comparison (§5.3).
The solution for the natural-kinds approach must thus lie in continuing the
search for universal building blocks and for evidence of their innateness, as
practiced in the 1980s and the 1990s (e.g. Cinque 1999; Roberts 1997). For the non-
aprioristic comparative approach, the solution consists in recognizing that the
categories of description are not the same as the yardsticks for comparison, so that
language-particular studies contribute to general linguistics only in an indirect
way. No linguist can simply pretend that the description of a particular language
will automatically contribute to general linguistics.
Thus, whatever one’s hunches about the best path leading to deeper understanding of Human Language: All general linguists need a clearer methodology for
language comparison. Despite many obvious similarities between languages, they
appear to have different categories and features, and we need something extra to
make the study of particular languages fruitful for general linguistics.
For linguists working in the generative tradition, this means figuring out which features and categories (and architectures) must be innate and can be expected to occur in any language. For linguists working in the Boasian/Greenbergian tradition, this means being careful about their characterization of
comparative concepts as uniform yardsticks for comparison.
If all goes well, the two approaches should eventually converge, i.e. evidence
for innateness should converge with the empirical universals found through the
non-aprioristic approach.
References
Adger, David. 2003. Core syntax: A minimalist approach. Oxford: OUP.
Adger, David, Daniel Harbour & Laurel J. Watkins. 2009. Mirrors and microparameters: Phrase
structure beyond free word order. Cambridge: CUP.
Aronoff, Mark. 2016. Unnatural kinds. In Ana R. Luís & Ricardo Bermúdez-Otero (eds.), The
morphome debate, 11–32. Oxford: OUP.
Bak, Thomas H. 2016. Cooking pasta in La Paz: Bilingualism, bias and the replication crisis.
Linguistic Approaches to Bilingualism 6(5). 699–717.
Baker, Mark C. 2001. The atoms of language. New York: Basic Books.
Baker, Mark C. 2008. The macroparameter in a microparametric world. In Theresa Biberauer (ed.),
The limits of syntactic variation, 351–373. Amsterdam: Benjamins.
Baker, Mark C. 2010. Formal generative typology. In Bernd Heine & Heiko Narrog (eds.), The Oxford
handbook of linguistic analysis, 285–312. Oxford: OUP.
Baker, Mark C. & James McCloskey. 2007. On the relationship of typology to theoretical syntax.
Linguistic Typology 11. 285–296.
Barrett, Lisa Feldman. 2006. Are emotions natural kinds? Perspectives on Psychological Science
1. 28–58.
Berwick, Robert C. & Noam Chomsky. 2016. Why only us: Language and evolution. Cambridge, MA:
MIT Press.
Bickel, Balthasar. 2015. Distributional typology. In Heiko Narrog & Bernd Heine (eds.), The Oxford
handbook of linguistic analysis. Oxford: OUP.
Blake, Barry J. 1990. Relational grammar. London: Routledge.
Bloomfield, Leonard. 1933. Language. New York: H. Holt and Company.
Boeckx, Cedric. 2014. What principles and parameters got wrong. In Carme Picallo (ed.), Linguistic
variation in the minimalist framework, 155–178. Oxford: OUP.
Bornkessel-Schlesewsky, Ina & Matthias Schlesewsky. 2009. Processing syntax and morphology:
A neurocognitive perspective. Oxford: OUP.
Bresnan, Joan. 2001. Lexical-functional syntax. Oxford: Blackwell.
Chomsky, Noam. 1957. Syntactic structures. ’s-Gravenhage: Mouton.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1970. Remarks on nominalization. In Roderick A. Jacobs & Peter S. Rosenbaum (eds.), Readings in English transformational grammar, 184–221. Waltham, MA: Ginn.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam. 1986. Knowledge of language: Its nature, origin, and use. New York: Praeger.
Chomsky, Noam. 1995. Language and nature. Mind 104. 1–61.
Chomsky, Noam. 2005. Three factors in language design. Linguistic Inquiry 36. 1–22.
Chomsky, Noam, Ángel J. Gallego & Dennis Ott. 2019. Generative grammar and the faculty of
language: Insights, questions, and challenges. Catalan Journal of Linguistics (special issue).
229–261. https://doi.org/10.5565/rev/catjl.288.
Cinque, Guglielmo. 1999. Adverbs and functional heads: A cross-linguistic approach. New York:
OUP.
Comrie, Bernard. 1999. Reference-tracking: Description and explanation. Sprachtypologie und
Universalienforschung 52. 335–346.
Cristofaro, Sonia. 2009. Grammatical categories and relations: Universality vs. language-
specificity and construction-specificity. Language and Linguistics Compass 3. 441–479.
Croft, William. 2001. Radical construction grammar: Syntactic theory in typological perspective.
Oxford: OUP.
Croft, William. 2003. Typology and universals. Cambridge: CUP.
Culicover, Peter W. & Ray S. Jackendoff. 2005. Simpler syntax. Oxford: OUP.
Curtius, Georg. 1862. Philologie und Sprachwissenschaft: Antrittsvorlesung gehalten am 30. April
1862. Leipzig: Teubner.
D’Alessandro, Roberta. 2019. The achievements of generative syntax: A time chart and some reflections. Catalan Journal of Linguistics (special issue). 7–26. https://doi.org/10.5565/rev/catjl.232.
Diessel, Holger. 2014. Demonstratives, frames of reference, and semantic universals of space.
Language and Linguistics Compass 8. 116–132.
Dryer, Matthew S. 1997. Are grammatical relations universal? In Joan L. Bybee, John Haiman &
Sandra A. Thompson (eds.), Essays on language function and language type, 115–143.
Amsterdam: Benjamins.
Dryer, Matthew S. 2005. Order of subject, object and verb. In Martin Haspelmath, Matthew S. Dryer,
David Gil & Bernard Comrie (eds.), The World Atlas of Language Structures, 330–333. Oxford:
OUP. https://wals.info/chapter/81.
Dryer, Matthew S. 2006. Descriptive theories, explanatory theories, and basic linguistic theory. In
Felix K. Ameka, Alan Dench & Nicholas Evans (eds.), Catching language: The standing
challenge of grammar writing, 207–234. Berlin: Mouton de Gruyter.
Dryer, Matthew S. 2016. Crosslinguistic categories, comparative concepts, and the Walman
diminutive. Linguistic Typology 20. 305–331.
Evans, Nicholas & Alan Dench. 2006. Introduction: Catching language. In Felix K. Ameka,
Alan Dench & Nicholas Evans (eds.), Catching language: The standing challenge of grammar
writing, 1–39. Berlin: Mouton de Gruyter.
Gordon, Matthew Kelly. 2016. Phonological typology. Oxford: OUP.
Greenberg, Joseph H. 1963. Some universals of grammar with particular reference to the order of
meaningful elements. In Joseph H. Greenberg (ed.), Universals of language, 73–113.
Cambridge, MA: MIT Press.
Grimshaw, Jane. 1990. Argument structure. Cambridge, MA: MIT Press.
Haspelmath, Martin. 2004. Does linguistic explanation presuppose linguistic description?
Studies in Language 28. 554–579.
Haspelmath, Martin. 2008. A frequentist explanation of some universals of reflexive marking.
Linguistic Discovery 6. 40–63.
Haspelmath, Martin. 2010a. Framework-free grammatical theory. In Bernd Heine & Heiko Narrog
(eds.), The Oxford handbook of linguistic analysis, 341–365. Oxford: OUP.
Haspelmath, Martin. 2010b. Comparative concepts and descriptive categories in crosslinguistic
studies. Language 86. 663–687.
Haspelmath, Martin. 2011. On S, A, P, T, and R as comparative concepts for alignment typology.
Linguistic Typology 15. 535–567.
Haspelmath, Martin. 2014. Comparative syntax. In Andrew Carnie, Yosuke Sato & Dan Siddiqi
(eds.), The Routledge handbook of syntax, 490–508. London: Routledge.
Haspelmath, Martin. 2018. How comparative concepts and descriptive linguistic categories are
different. In Daniël Van Olmen, Tanja Mortelmans & Frank Brisard (eds.), Aspects of linguistic
variation, 83–113. Berlin: De Gruyter Mouton.
Haspelmath, Martin. 2019. Ergativity and depth of analysis. Rhema 4. 108–130.
Haspelmath, Martin. 2020a. Human linguisticality and the building blocks of languages. Frontiers
in Psychology 10(3056). 1–10.
Haspelmath, Martin. 2020b. The structural uniqueness of languages and the value of comparison
for description. Asian Languages and Linguistics 1. 346–366.
Hauser, Marc D., Noam Chomsky & W. Tecumseh Fitch. 2002. The faculty of language: What is it,
who has it, and how did it evolve? Science 298(5598). 1569–1579.
Hawkins, John A. 2014. Cross-linguistic variation and efficiency. New York: OUP.
Hayes, Bruce, Robert Kirchner & Donca Steriade (eds.). 2004. Phonetically based phonology.
Cambridge: CUP.
Holmberg, Anders, Michelle Sheehan & Jenneke van der Wal. 2019. Movement from the double
object construction is not fully symmetrical. Linguistic Inquiry 50. 677–722.
Hornstein, Norbert. 2018. Universals: Structural and substantive. Faculty of Language (blog). https://facultyoflanguage.blogspot.com/2018/02/universals-structural-and-substantive.html.
Huang, C.-T. James & Ian Roberts. 2016. Principles and parameters of universal grammar. In
Ian Roberts (ed.), The Oxford handbook of universal grammar. Oxford: OUP.
Jackendoff, Ray. 2002. Foundations of language: Brain, meaning, grammar, evolution. Oxford:
OUP.
Jaeger, T. Florian & Elisabeth J. Norcliffe. 2009. The cross-linguistic study of sentence production.
Language and Linguistics Compass 3. 866–887.
Joseph, John Earl. 2012. Saussure. Oxford: OUP.
Koeneman, Olaf & Hedde Zeijlstra. 2017. Syntax. Cambridge: CUP.
Koerner, E.F.K. 1973. The importance of Techmer’s “Internationale Zeitschrift für Allgemeine
Sprachwissenschaft” in the development of general linguistics. Amsterdam: Benjamins.
Koptjevskaja-Tamm, Maria. 2001. Adnominal possession. In Martin Haspelmath, Ekkehard König,
Wulf Oesterreicher & Wolfgang Raible (eds.), Language typology and language universals:
An international handbook, 960–970. Berlin: Mouton de Gruyter.
Krifka, Manfred. 2008. Functional similarities between bimanual coordination and topic/
comment structure. In Regine Eckardt, Gerhard Jäger & Tonjes Veenstra (eds.), Variation,
selection, development, 307–336. Berlin: Mouton de Gruyter.
Larson, Richard K. 2010. Grammar as science. Cambridge, Mass: MIT Press.
Lasnik, Howard & Jeffrey L. Lidz. 2016. The argument from the poverty of the stimulus. In
Ian Roberts (ed.), The Oxford handbook of universal grammar. Oxford: OUP.
Lazard, Gilbert. 2005. What are we typologists doing? In Zygmunt Frajzyngier, Adam Hodges &
David S. Rood (eds.), Linguistic diversity and language theories, 1–23. Amsterdam:
Benjamins.
Levinson, Stephen C. & Nicholas Evans. 2010. Time for a sea-change in linguistics: Response to
comments on ‘The myth of language universals’. Lingua 120. 2733–2758.
Lyons, John. 1968. Introduction to theoretical linguistics. Cambridge: CUP.
Mayr, Ernst. 1959. Darwin and the evolutionary theory in biology. In Evolution and anthropology: A
centennial appraisal. Washington, DC: The Anthropological Society of Washington.
Mendívil-Giró, José-Luis. 2019. How much data does linguistic theory need? On the tolerance principle of linguistic theorizing. Frontiers in Communication 3. https://doi.org/10.3389/fcomm.2018.00062.
Mielke, Jeff. 2008. The emergence of distinctive features. Oxford: OUP.
Moravcsik, Edith A. 2011. Explaining language universals. In Jae Jung Song (ed.), The Oxford
handbook of language typology, 69–89. Oxford: OUP.
Newmeyer, Frederick J. 1994. A note on Chomsky on form and function. Journal of Linguistics 30.
245–251.
Newmeyer, Frederick J. 2005. Possible and probable languages: A generative perspective on
linguistic typology. Oxford: OUP.
Parry, Richard. 2020. Episteme and techne. In Edward N. Zalta (ed.), The Stanford encyclopedia of philosophy, Summer 2020 edition. Stanford: Stanford University.
Paul, Hermann. 1880. Principien der Sprachgeschichte. Halle: Niemeyer.
Pedersen, Holger. 1931. The discovery of language: Linguistic science in the nineteenth century.
Cambridge, MA: Harvard University Press.
Percival, W. Keith. 1995. The genealogy of general linguistics. In Kurt R. Jankowsky (ed.), History of
linguistics 1993, 47–54. Amsterdam: Benjamins.
Pesetsky, David. 2013. Russian case morphology and the syntactic categories. Cambridge, MA:
MIT Press.
Reichling, Anton. 1949. What is general linguistics? Lingua 1. 8–24.
Rivarol, Antoine de. 1784. Discours sur l’universalité de la langue française. Berlin: Prussian Academy of Sciences. https://en.wikipedia.org/wiki/The_Universality_of_the_French_Language.
Roberts, Ian G. 1997. Comparative syntax. London: Arnold.
Robins, Robert H. 1964. General linguistics: An introductory survey. London: Routledge.
Round, Erich & Greville G. Corbett. 2020. Comparability and measurement in typological science:
The bright future for linguistics. To appear.
Sapir, Edward. 1921. Language: An introduction to the study of speech. New York: Harcourt,
Brace & Co.
Saussure, Ferdinand de. 1916. Cours de linguistique générale. Lausanne: Payot.
Schmidtke-Bode, Karsten, Natalia Levshina, Susanne Maria Michaelis & Ilja A. Seržant (eds.).
2019. Explanation in typology: Diachronic sources, functional motivations and the nature of
the evidence. Berlin: Language Science Press.
Schnelle, Helmut. 1974. Editorial. Theoretical Linguistics 1. 1–5.
Slobin, Dan Isaac (ed.). 1985. The cross-linguistic study of language acquisition. Hillsdale, NJ:
Erlbaum.
Smirnova, Anastasia & Ray Jackendoff. 2017. Case assignment and argument realization in
nominals. Language 93. 877–911.
Song, Jae Jung. 2018. Linguistic typology. Oxford: OUP.
Vendryes, Joseph. 1921. Le langage. Paris: Renaissance du livre.
Welch, Nicholas. 2016. Propping up predicates: Adjectival predication in Tłı̨ chǫ Yatıì. Glossa: A
Journal of General Linguistics 1(2). 1–23.
Whitney, William D. 1875. The life and growth of language. London: Routledge/Thoemmes Press.