Phonemes in the history of a language
The number of phonemes (sets of phones that occur in complementary distribution with one another and that contrast with other such sets) may become larger or smaller as a language changes through time.
At any given time, the set of phonemes in a language is a closed set (like function words and syntactic rules, the set of phonemes is part of the limited, hard-wired part of language). A speaker cannot simply add a new phoneme to the language the way a new word can be added. The set of phonemes changes only over time. English, for instance, has lost the phonemes /x/ and /ʍ/. English has also gained phonemes by borrowing foreign words with the sounds /z/ and /Z/. Neither sound was a phoneme in English until numerous words containing them were borrowed from Norman French after 1066. Similarly, the sound [f] was not part of Russian until after the Christianization of Rus' in 988, when many Greek words containing [f] were borrowed by the Slavs.
The interrelationship between speech sounds in a language also changes over time (e.g., the fate of IE [p], [ph] through time). Thus, contrastive sounds (belonging to separate phonemes) may begin to occur in complementary distribution through accidental sound changes.
Because the number of phonemes is static at any given point in a language's history, it is possible to classify languages according to the phonemes they contain. Remember that typology is the study of structural features across languages. Phonological typology involves comparing languages according to the number or type of sounds they contain.
Although, as we have seen, there are inevitable problems in dividing the sounds of any language into discrete abstract units called phonemes, linguists usually compare languages according to the number of groups that participate in meaningful sound contrasts (i.e., phonemes) rather than the total number of actual speech sounds. Every language has a fairly small inventory of these sets, or phonemes, and the number varies from language to language. According to our analysis, General American English has 34 phonemes; these appear in more than twice as many phonetically distinct forms in actual speech. In comparison, Hawaiian has only 18; Kabardian has over 80; and !Xung, a Khoisan language, is reported to have 141 phonemes, or mutually contrastive sets of sounds.
A second aspect of phonological typology classifies languages according to the type of sounds present or absent in each language. Some sounds are only rarely found in languages. Unusual sounds include: the bilabial trill of some New Guinea languages; the labiodental flap of the Nigerian language Margi; the strident, trilled Czech ř; the Czech and Slovak voiced h-sound [H]; Arabic pharyngeals; African and Asian implosives; and southern African Khoisan clicks.
Other types of sounds are nearly always phonemic in languages and are only rarely absent. Unusual omissions include: labials (nearly completely absent in Cherokee and Tlingit), nasals (absent from several Salish languages), and sibilants (absent from Hawaiian). No known language entirely lacks either obstruents or sonorants. No known language entirely lacks either vowels or consonants, although Rotokas has only six consonants, and certain Northwest Caucasian languages such as Kabardian have been analyzed as having only one vowel.
Phonetic features and classes of sounds
Remember that the smallest units of speech are the phonetic features that make up speech sounds. We have seen that the occurrence of some speech sounds is entirely predictable based on phonetic context. The phonetic features which distinguish allophones are redundant features (examples are nasality and length in vowels, aspiration in voiceless plosives, and non-release of obstruents). Other features are not predictable (such as tense/lax in vowels, or nasality in consonants). Features whose presence or absence in a given sound segment normally affects meaning are known as distinctive (or phonemic) features. Note that features are usually thought of as binary oppositions. Notice that sometimes the presence of one feature will always imply the presence of another (e.g., rounding and backness of vowels). Such features are known as mutually redundant features.
Knowing the phonetic features of English helps one to group sounds together into natural classes--classes of sounds with at least one phonetic feature in common. Every phonetic feature can define a natural class.
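The idea of a natural class can be made concrete with a short sketch. The feature matrix below is an invented, simplified toy fragment, not a full analysis of English; the point is only that choosing any combination of features automatically picks out a class of sounds:

```python
# Toy feature matrix: each sound maps to a set of phonetic features.
# The inventory and feature labels are simplified for illustration.
FEATURES = {
    "p": {"consonant", "plosive", "voiceless", "labial"},
    "b": {"consonant", "plosive", "voiced", "labial"},
    "t": {"consonant", "plosive", "voiceless", "alveolar"},
    "d": {"consonant", "plosive", "voiced", "alveolar"},
    "m": {"consonant", "nasal", "voiced", "labial"},
    "n": {"consonant", "nasal", "voiced", "alveolar"},
    "s": {"consonant", "fricative", "voiceless", "alveolar"},
}

def natural_class(*features):
    """Return every sound whose feature set contains all the given features."""
    wanted = set(features)
    return sorted(sound for sound, fs in FEATURES.items() if wanted <= fs)

print(natural_class("voiceless"))          # ['p', 's', 't']
print(natural_class("plosive", "voiced"))  # ['b', 'd']
print(natural_class("nasal"))              # ['m', 'n']
```

Notice that more features pick out a smaller class: "voiced plosive" is a subset of "plosive", just as in feature notation.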
Phonological rule writing
Patterns of interaction between speech sounds in a language can be described formally by writing phonological rules. The rules state the environment in which one sound or class of sounds changes into another.
These rules are shorthand notations for various sound relationships.
Understanding phonetic features and natural classes helps one to write out various types of phonological rules.
1. For instance, knowing the natural classes of sounds helps in describing the complementary environments of allophones of the same phoneme.
2. Thinking in terms of phonetic features also facilitates the writing of phonological rules, or rules of sound change.
a.) Phonological rules can be used to describe historical change (change of [p] to [f] in Germanic).
b.) Phonological rules can also be used to describe sound alternations that regularly occur in connected speech. (Give an example of the rule of loss of aspiration--pack, Bill's pack; towel, John's towel--and of vowel nasalization in English--silly, silly name; yellow mask.) Such processes occur every time people speak, and they might be quite different from the phonological processes that occurred in the language's history (e.g., pater --> father; but father --> Bill's father, not *Bill's pather). The alternation between [p] and [f] is historical and no longer productive in the language; the alternation between [p] and [ph] is productive and occurs as a regular part of speech. Phonological notation can describe both types of sound change--the productive, contemporary ones and the non-productive, historical ones. (Sometimes a historical change can be underway in the present, [ʍ] to [w], for example; sometimes a change that happened in history can still be productive in a language, as with the regressive voicing assimilation rule in Russian.)
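As a rough illustration of a productive connected-speech rule of the form A --> B / _ C (A becomes B when immediately followed by C), here is a minimal sketch of vowel nasalization applied to a list of segments. The ASCII transcriptions are simplified stand-ins, not careful IPA, and the segment inventories are toy fragments:

```python
# Sketch of a regressive assimilation rule: a vowel is nasalized
# when the next segment is a nasal. '~' marks nasality.
VOWELS = set("aeiou")
NASALS = set("mn")  # engma omitted for simplicity

def nasalize(segments):
    """Apply: vowel -> nasalized vowel / _ nasal."""
    out = []
    for i, seg in enumerate(segments):
        if seg in VOWELS and i + 1 < len(segments) and segments[i + 1] in NASALS:
            out.append(seg + "~")
        else:
            out.append(seg)
    return out

print(nasalize(list("ban")))  # ['b', 'a~', 'n']  (vowel before nasal)
print(nasalize(list("bad")))  # ['b', 'a', 'd']   (rule does not apply)
```

The same loop shape works for any rule conditioned by the following segment; a historical, non-productive change could be written identically, the difference being only whether speakers still apply it.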
Another type of phonological rule, called a phonotactic constraint, defines what sound combinations may and may not occur in a language. If you knew all the speech sounds present in a language, you still wouldn't be able to make words without knowing the phonotactic constraints operating in that language. For example, the sound [N] at the end of sing occurs only in syllable-final position, while [h] can occur only in syllable-initial position. Another phonotactic constraint prevents [Z] from occurring at the beginning or at the end of native English words (this rule might be changing under the influence of borrowed foreign names such as Zsa-Zsa, Jacques, Zhanna, etc.). Phonotactic rules could be called phonetic syntax.
Here are some phonotactic constraints of English:
a.) Word/syllable initial: no [N]; only specific types of clusters: s + voiceless plosive + liquid, s + sonorant, or plosive + sonorant
b.) Word/syllable final: no [h]
c.) Word/syllable medial (the nucleus): must be a vowel, not a liquid.
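The simplified onset constraints in a.) above can be sketched as a small checker. This implements only the patterns as listed (real English onsets admit more, e.g. plain s + plosive as in stop, and exclude some plosive + sonorant pairs such as pw), and it uses N as an ASCII stand-in for engma:

```python
# Toy checker for the word-initial cluster patterns listed above.
PLOSIVES = set("ptkbdg")
VOICELESS_PLOSIVES = set("ptk")
LIQUIDS = set("lr")
SONORANTS = set("lrmnwj")

def legal_onset(cluster):
    """True if a word-initial consonant cluster matches one of the listed patterns."""
    if len(cluster) <= 1:
        return cluster != "N"  # no engma (written N here) word-initially
    if len(cluster) == 2:
        return (cluster[0] == "s" and cluster[1] in SONORANTS) or \
               (cluster[0] in PLOSIVES and cluster[1] in SONORANTS)
    if len(cluster) == 3:
        return (cluster[0] == "s"
                and cluster[1] in VOICELESS_PLOSIVES
                and cluster[2] in LIQUIDS)
    return False

print(legal_onset("spl"))  # True  (split)
print(legal_onset("sn"))   # True  (snow)
print(legal_onset("pr"))   # True  (pray)
print(legal_onset("rt"))   # False (no such English onset)
```

A checker like this is exactly what lets one separate impossible words from accidental gaps: [zib] passes the constraints, [rtib] does not.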
Phonotactic constraints gradually change through time (just as the number of phonemes changes through time). The following word-initial clusters have dropped out of English: [kn], [gn], [xr], [xl].
Often, combinations of sounds that are allowed by the phonotactic rules of a language are not actually used as words: [zib], [charp], [squill]. These are called accidental gaps in the vocabulary of a language; they are potential words--perhaps tomorrow someone will use [charp] to describe the green mutant potato chip found at the bottom of a bag of chips.
Every language has its own unique set of phonotactic constraints. Sound combinations that could not possibly be English words might very well be words in another language. For instance, both English and Georgian have the sound segments [t], [A], [m].
In English we have Tom but no mot, mta or tma; although mot could be a word.
In Georgian we have mta, mountain; and tma hair, but no tom or mot.
Foreign borrowings often cause changes in phonotactic rules (just as they can lead to the adoption of a new phoneme). Due to the influence of the original French, many people pronounce a final [Z] instead of [dZ] in garage. Also note the sound combination [sv] in svelte, Sven, and a few other words from Scandinavian languages. As a final example, notice that the name Schmidt from German entered the language even though it violated the phonotactic rules of English. Such is also the case with many borrowings from Yiddish that contain consonant clusters beginning with the sound [sh]: schmooze, schmuck, shlep, shlok.
Other phonological rules describe the changes that occur in sounds when they are brought together. You will recall that in fusional languages, the morphemes alter their phonetic shape to accommodate the sounds of adjacent morphemes. These rules may be classified on phonological grounds, according to the type of phonetic change that occurs.
1) Feature deletion or addition rules: lengthening of English vowels before voiced obstruents.
a) Assimilation rules (the feature added is present in an adjacent segment): nasalization of English vowels before nasals.
b) Dissimilation rules (the feature deleted is present in an adjacent segment): deletion of aspiration after [s].
Assimilation and dissimilation may be progressive (velarization of English [l]) or regressive (nasality in English vowels).
2) Segment deletion or addition rules (a whole sound is added or subtracted): deletion in French, and also in English autumn (cf. autumnal); addition in athlete pronounced "athalete", or the schwa added between sibilants when attaching the English plural ending: boxes.
3) Metathesis rules reorder the segments that are present: ask/aks; nuclear/"nucular"; Georgian dzrokhi/rdze. These are examples of a rule sporadically applied. For an example of a metathesis rule regularly applied, see also the example from Hebrew on p. 250.
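A sporadic metathesis like ask/aks can be modeled as a simple reordering of a segment list; this toy sketch just swaps two stated positions:

```python
# Metathesis as a reordering operation on a segment list.
def metathesize(segments, i, j):
    """Return a copy of the segment list with positions i and j swapped."""
    out = list(segments)
    out[i], out[j] = out[j], out[i]
    return out

print(metathesize(list("ask"), 1, 2))  # ['a', 'k', 's']  (ask -> aks)
```

A regularly applied metathesis rule would differ only in that the positions to swap are determined by phonetic context rather than chosen by hand.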
Phonological changes occur most commonly in the morphology of fusional languages but may occur in any morphological type of language when words are brought together in fast speech (English palatalizing rule: alveolar + y yields a postalveolar sound: at your house, could you).
On the separability of linguistic levels
Linguists traditionally divide language into separate structural levels: phonology, morphology, syntax. We have already seen that there is no precise boundary between word and sentence formation across languages. The same is true of the so-called boundary between phonology and morphology: there are problems with the traditional division of linguistics into these two levels.
The shape and distribution of morphemes is often dependent upon phonology.
The three variant forms of the English plural morpheme are a good example. Such phonologically conditioned variants of a single morpheme are called allomorphs. The study of the effect of phonological rules in morphology is called morphophonology.
The phonetic changes in the English plural morpheme derive from phonological rules applying throughout all of English, not just to the plural morpheme. English has no sibilant clusters or clusters of voiced and voiceless obstruents together anywhere.
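The plural allomorphy just described can be sketched as a small selection rule. The symbols are ASCII stand-ins (S for the sh-sound, Z for the zh-sound, T for voiceless th, @ for schwa), and the sound inventories are simplified toy fragments:

```python
# Choose the English plural allomorph from the stem's final sound:
# schwa + z after sibilants, voiceless s after voiceless sounds,
# voiced z elsewhere.
SIBILANTS = {"s", "z", "S", "Z", "tS", "dZ"}
VOICELESS = {"p", "t", "k", "f", "T"}

def plural(stem, final_sound):
    """Return the stem with the phonologically conditioned plural ending."""
    if final_sound in SIBILANTS:
        return stem + "@z"   # bushes, boxes
    if final_sound in VOICELESS:
        return stem + "s"    # cats, cups
    return stem + "z"        # dogs, bees

print(plural("bUS", "S"))  # 'bUS@z'  (bushes)
print(plural("kat", "t"))  # 'kats'   (cats)
print(plural("dog", "g"))  # 'dogz'   (dogs)
```

The order of the two checks mirrors the rule ordering in the prose: schwa insertion between sibilants applies first, and only then does voicing agreement decide between [s] and [z].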
Often, however, the shape of morphemes can depend upon phonetic context in ways that affect only the given morpheme and are not part of a rule that applies throughout the language.
Here is an example from English. In English, some adjectives are transformed into verbs by adding the suffix -en: black--blacken; white--whiten; bright--brighten; red--redden; ripe--ripen. Other adjectives can take on the function of verbs without adding the suffix: yellow, blue; cheer, mellow. Here both morphology and phonology play a role. The division is phonetically based: stems ending in obstruents take -en, while stems ending in vowels and sonorants do not. But the constraint applies only in this one morphological context and not elsewhere in the language: Owen.
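The -en constraint can be sketched the same way as the plural rule. Since English spelling does not reveal the final sound, it is passed in explicitly here; the obstruent inventory is a simplified stand-in (T, D, S, tS, dZ are ASCII substitutes for the fricatives and affricates):

```python
# Adjective-to-verb derivation as described above: stems ending in an
# obstruent take -en; stems ending in a vowel or sonorant do not.
OBSTRUENTS = set("ptkbdg") | {"f", "v", "s", "z", "T", "D", "S", "tS", "dZ"}

def verb_from_adjective(adjective, final_sound):
    """Attach -en after obstruents; otherwise use the bare stem as a verb."""
    if final_sound in OBSTRUENTS:
        return adjective + "en"
    return adjective

print(verb_from_adjective("black", "k"))   # 'blacken'
print(verb_from_adjective("bright", "t"))  # 'brighten'
print(verb_from_adjective("yellow", "o"))  # 'yellow'
print(verb_from_adjective("mellow", "o"))  # 'mellow'
```

Unlike the plural rule, this condition lives only in this one derivational pattern; nothing elsewhere in English phonology bans vowel + en (Owen).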
This brings up our final question with regard to morphology: do speech sounds themselves actually have specific meanings? Morphology, it will be recalled, studies forms with specific meanings and calls these forms morphemes. Phonology studies whether a sound is capable of influencing meaning in general. On the phonological level, each contrastive speech sound--each phoneme of a language--is capable of influencing the meaning of morphemes and yet has no identifiable, specific meaning of its own. For example, although the sounds [m] and [s] are contrastive and capable of influencing meaning in English, we cannot assign any particular meaning to [m] or [s]. Excepting morphemes that happen to consist of a single sound, such as the plural -s or the indefinite article a, individual speech sounds are--at least in theory--on a level beneath that of specific meaning.
Sometimes, however, a sound or cluster of sounds recurs noticeably in words of similar meaning. This is true of the cluster [tter], which marks repetitive sound or motion in twitter, titter, pitter, patter, teeter, totter; or of the cluster [cr], which appears in many words denoting sharp, strident sounds: crack, crisp, crunch, cry, croon. Such meaning-marking sound combinations may derive from onomatopoeia, but not always, as in [gl], which marks such a noiseless phenomenon as light in glitter, glisten, glow, glitz. These sound combinations are not separate morphemes, and yet their presence as part of the given morphemes seems motivated by semantic factors. Such functional phonetic combinations could be called phonosemantic markers (or phonaesthemes). By comparison, the recurring sound group /kl/ in clear, clatter, close, clip, clasp, clue, clothing is not a phonosemantic marker, since the group of words in which it appears cannot be characterized by any particular semantic feature. Phonosemantic markers like the [gl] in glitter, glisten, or the [cr] in crack, crunch, cry, etc., belong neither entirely to morphology nor entirely to phonology; they are a minor aspect of language, but an aspect nonetheless.
Individual sounds are also often thought of as conveying specific meaning in a poetic or musical way. Various types of sound symbolism often characterize the poetic text, where the author may endow individual sounds with special connotations. Many people feel that certain sounds are inherently more happy or sad or threatening than others. For instance, a study revealed that Italian speakers tend to feel that the sounds [s], [l] or [i], [e] are inherently more pleasant, energetic, and happy than [d], [g], [a], or [u]. Poets and writers often take advantage of such popular perceptions to add strength to the meaning of their words by skillfully using assonance (the repetition of certain vowel sounds: the sad call of autumn) or alliteration (the repetition of consonant sounds: the light lips of love).
Contrary to popular belief, however, single sounds are not associated with specific meanings across languages. (Cf. an experiment which asked Italians to guess the meaning of the Russian words telyatina and dorogoj.) Poetic uses of sound imagery are independent of language structure, since in another text a different set of connotations between sound and meaning might just as easily be established. For instance, the poet may choose the phone [u] to be associated with the idea of sadness, or [a] to be associated with joy: and all the angels announce happy tidings.
Besides such occasional, poetic usages of sound, speech sounds in themselves generally have no specifically identifiable meaning that would apply to all instances of their occurrence in a given language. However, the distribution of individual speech sounds in a language can be subject to highly specific semantic restrictions. Theoretically, [m] and [s] can appear in a word of any meaning whatsoever, the only restrictions on their distribution being purely phonetically based: for instance, a purely phonological rule of English prevents the cluster [ms] from appearing at the beginning of any English word, regardless of meaning.
There are instances, however, when specific semantic factors do act as restrictions on the distribution of speech sounds. A good example is the distribution of voiced [D] vs. voiceless [T] in word-initial position in English. The voiced sound can appear only in function words; the voiceless sound appears in content words. Thus, in word-initial position, only the voiceless [T] is productive. The restriction on the voiced [D] in word-initial position is based on morphological rather than purely phonological considerations: the two otherwise contrastive sounds are in complementary distribution governed by the specific meaning of the words in which they appear.
A more striking example of a morphological constraint on phonetic distribution is to be found in Cherokee. Cherokee has a sound [m] that contrasts with other sounds to create changes in meaning: ama salt; ada baby bird; ana strawberry; ata young girl. However, the sound [m] appears in only about ten morphemes: ugama soup; kamama butterfly; gugama cucumber. Although most of these words seem to be foreign borrowings, no new words using [m] seem to be entering the language, nor do new words containing [m] seem to be made in Cherokee on any regular basis. Thus, the sound [m], which would certainly be considered a phoneme in the phoneme theory of phonology, is highly restricted in its distribution, at least in the present state of Cherokee. The restriction is arbitrary: the sound [m] appears only in a small collection of words with no specific meaning in common. Yet the restriction on the distribution of [m] is morphological rather than phonological: [m] is restricted to a specific and limited set of words.
An even more extreme example is to be found in Quileute, a Native American language from the Olympic Peninsula of Washington State. The sound [g] appears in only one word in the entire language: hága'y frog. Thus, this sound, which is in contrastive distribution with other phonemes, is entirely restricted in function to contributing to the makeup of a single morpheme, the word for frog. It is even possible to say that [g] in Quileute has a specific function: to contribute to the morpheme meaning frog.
Is the sound [g] in Quileute a phoneme with a specific meaning--something that linguists claim is not supposed to exist? If not, then what exactly is it? If such a non-productive and highly restricted sound can be a phoneme, then how should we classify the special sounds found only in the vocal gestures of various languages? Examples include the voiced [h] that in English occurs only in the phrase a-ha or mhm; the glottal stop that occurs in many dialects of English only in exclamations like o-ho or uh-uh; the click in the English vocal gesture tsk-tsk; or the nasal vowel in the negative verbal gesture u-uh.
The occurrence of certain sounds is often restricted to such special gestural types of utterances. Sounds restricted to exclamations or other isolated words are usually not considered to be part of the sound system of a particular language. And yet such words are linguistic, unique to each language, and not the same as laughter, sneezing, or other truly physiological and non-linguistic gestures. Combinations of sounds may likewise be restricted to special, onomatopoetic words such as English boing, which contains the only example of the sound combination [oing] in the entire language (cf. -ang in bang, sang, hang, rang, etc.). The classification of such occasional sounds or rare sound combinations falls outside the traditional structural divisions of language.
It would seem that, although linguists usually treat phonology and morphology as completely separate levels of linguistic function, real language doesn't always oblige the linguists in this separation. Phonology, the study of sounds' general capacity to influence meaning, and morphology, the study of forms with specific meanings, actually constitute a gradable continuum rather than two separate levels.