Artificial Languages

      As the world becomes more closely knit, many people have dreamed of the day when all people could communicate using a single language. In 1629 the French philosopher René Descartes outlined a scheme for constructing a universal language with numbers representing words and notions.  Since that time, there have been more than 700 attempts to create an artificial language. There are two types of artificial languages. 

      1) The idea for the first type of artificial language developed in the 17th century. This was a time when Latin was falling into disuse as a universal language of learning in Europe. It was also a time when Europeans were first becoming aware of the tremendous number of different languages in the world and the inadequacy of any single European language for worldwide communication. One scholar of the time remarked, "The curse of Babel is worse than our fathers imagined." Many felt that Europe needed a new language for commerce, mission work and other international tasks. The European encounter with Chinese also fueled these schemes. The fact that several mutually unintelligible Chinese dialects could be written with the same set of characters fostered the mistaken impression that Chinese characters were symbols for pure concepts rather than for the sounds of language. Many scholars at that time believed reality could be categorized into a limited set of categories that they called "concepts." Each category could then be given a symbol, creating a universal language that would not be dependent upon anyone's native language.

      Since these languages try to go beyond true language and return to what were thought to be the semantic roots of language, they are called a priori languages.  Descartes' language was an example of an a priori language; many such languages have been invented since the 1600's.  Musical notes and numerals served as the prototypes for several such languages. 

      In the 1700's many people continued to look to Chinese characters as the model for a conceptual language, with each concept having its own pictographic symbol.  Such systems proved impossible to use because of the enormous number of different symbols needed.  The learner was presented with a virtually impossible memory task. (Chinese writing, as we have seen, is not really ideographic:  each symbol denotes the sound of a particular syllable, which is why learning to write Chinese is possible.) 

      In terms of practical usage, the a priori languages turned out to be a complete failure. The first problem was that the natural boundaries between concepts, which were supposed to be determined by 'science' or philosophy, turned out to be elusive;  the boundaries between concepts agreed upon for the artificial language turned out to be no less arbitrary than those in conventional languages.  Second, an a priori language required a prodigious memory for symbols. Learning the several thousand symbols needed for such a scheme is a daunting task which few attempted, and fewer still completed. By the 1800's the idea of an a priori language had fallen out of fashion.

      One of the last of these schemes was in some ways the most original. In the 19th century a French music master, Jean-François Sudre, invented Solresol, a universal language based on the principle that the tones of music--do, re, mi, etc.--could be used as the elemental syllables of a universal language. This would eliminate the need to memorize thousands of basic symbols.

   Two-note combinations were used for grammatical words:  si 'yes'; dore 'I'; domi 'you'. 

   Common words used three-note combinations:  doresol 'month'; doredo 'time'.

   Semantic opposites were expressed by reversing the order of syllables:  misol 'good' vs. solmi 'evil'.

   Four-note combinations were divided into different semantic classes:  the note 'la' appeared in words dealing with finance and commerce. 

      Solresol could be played, whistled, or sung, as well as spoken.  But it was difficult to learn and had the defect of being monotonous, since it was composed entirely of seven syllables. In addition, it was easy to mix up the rules for combining words (misol/solmi).
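      Solresol's syllable-reversal rule for opposites is mechanical enough to sketch in a few lines of code. The following Python fragment is purely illustrative (not any historical Solresol tool): it splits a word into solfège syllables and reverses their order.

```python
# The seven solfege syllables that make up every Solresol word.
SYLLABLES = ["do", "re", "mi", "fa", "sol", "la", "si"]

def split_syllables(word):
    """Greedily split a Solresol word into solfege syllables,
    trying the three-letter syllable 'sol' before the two-letter ones."""
    parts = []
    i = 0
    while i < len(word):
        for syl in sorted(SYLLABLES, key=len, reverse=True):
            if word.startswith(syl, i):
                parts.append(syl)
                i += len(syl)
                break
        else:
            raise ValueError(f"not a Solresol word: {word!r}")
    return parts

def opposite(word):
    """Reverse the syllable order: misol 'good' -> solmi 'evil'."""
    return "".join(reversed(split_syllables(word)))
```

Applied to the example above, `opposite("misol")` produces "solmi". The greedy split is safe because no solfège syllable is a prefix of another, so each position matches at most one syllable.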

      2) Failure of the a priori language schemes led to a new approach to artificial language creation. Artificial languages of the second type, called a posteriori languages, are patterned after real languages--with phones, morphemes, words, and sentence patterns--and have grammars based on a simplification of existing languages.  The first large-scale movement was Volapük (1880; patterned after English and German; 8 vowels, 20 consonants). 

      The most successful is Esperanto, invented in 1887 by Zamenhof, a Polish oculist.  It contains 5 vowels, 23 consonants, and a mainly West European lexicon, with Slavic influence on syntax and spelling. 

      A posteriori artificial languages are much simpler in structure than the natural languages they are patterned after.  Esperanto grammar can be condensed to fit on a single page, and there are no exceptions to the rules.  The alphabet, which is based on Latin, has one letter for each phoneme.  Accent always falls on the next-to-last syllable (as in Polish).  The definite article is always "la"; nouns end in -o/-oj; adjectives end in -a/-aj; the present tense of all verbs ends in -as, the past in -is, the future in -os, and the imperative in -u.  See if you can decipher this Esperanto example: La inteligenta persono lernas la interlingvon Esperanto rapide kaj facile.  Esperanto today has several million speakers, but almost no one speaks it as a native language. 
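      The exceptionless endings just listed can be expressed as a simple lookup table. This Python sketch is illustrative only; the stems "lern-" and "person-" are taken from the sample sentence above.

```python
# Esperanto verb morphology as summarized above: one invariant
# ending per tense, with no exceptions.
VERB_ENDINGS = {
    "present": "as",   # lernas 'learns'
    "past": "is",
    "future": "os",
    "imperative": "u",
}

def conjugate(stem, tense):
    """Attach the regular Esperanto tense ending to a verb stem."""
    return stem + VERB_ENDINGS[tense]

# Nouns and adjectives are equally regular: -o / -oj for nouns,
# -a / -aj for adjectives (singular / plural).
def noun(stem, plural=False):
    return stem + ("oj" if plural else "o")
```

So `conjugate("lern", "present")` yields "lernas", exactly as in the example sentence; no table of irregular forms is needed, which is the point of an a posteriori grammar.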

Problems with a posteriori artificial languages

      First, artificial languages like Esperanto are not really linguistically neutral media of communication, since they derive largely from one of the major language families, usually Indo-European.

      Second, considerable effort still must go into learning them; many adults don't learn a second language at all, and those who do must work hard at doing so. 

      Third, a person's native language is part of their identity and cultural heritage, something not so willingly given up. So it is unlikely that an artificial language, with no cultural prestige, would come to replace living, natural languages.

      Fourth, even if an artificial language were to be adopted as a world language, each nation would in time develop local dialects based on interference from its own native tongue;  these would eventually begin to diverge into separate languages, just as English- and French-based creoles have in many parts of the world. There have even been mutinies among the Esperantists. At the beginning of the 20th century one breakaway group created Ido, a simplified version of Esperanto.

      So in practical terms, artificial language projects have been largely unsuccessful. Few people today believe that the world will one day adopt such a language as the chief means of international communication. Instead of an artificial language coming to be used as an international lingua franca, the world community seems to be moving closer to the use of several widely spoken languages as lingua francas in various parts of the world.  Mandarin Chinese has over a billion speakers.  Hindi has nearly as many.  Next come English, Spanish, Russian, and Portuguese.  These languages, plus French, German, Japanese, and Arabic, can be used to communicate with a large majority of the world's inhabitants.  Today English comes closest to being a worldwide international language:  more people speak English as a second language than any other language.  Chinese and Hindi are spoken predominantly in South and East Asia.

      Finally, it should be said that some people have lately invented artificial languages not to simplify the international language picture, but rather to add yet another dollop of linguistic diversity to it.  During the 17th and 18th centuries a number of writers invented languages for the imaginary civilizations in their novels. The earliest was Francis Godwin's The Man in the Moone (1638), in which the author invented a language called Lunarian, an a priori language patterned after Chinese (or so the author thought). Godwin's book is considered the first work of science fiction. More recently, other authors have created elaborate languages for imaginary peoples which function more like real languages, with phonology, morphology, and grammar (examples are J.R.R. Tolkien's Elvish and Orkish in The Lord of the Rings, and Klingon in the Star Trek films and television series, including Star Trek: The Next Generation).  Klingon was developed by Marc Okrand, a specialist in Native American languages and a Star Trek enthusiast. His language contains uvular and retroflex consonants, a voiceless [l], and other decidedly non-European sounds. First used on screen, it has since been expanded into a full-fledged language with a written grammar and dictionary; parts of the Bible have even been translated into Klingon. Literary artificial languages like these are often deliberately elaborated in some way so that they will seem strikingly different from English and other widely known natural languages.

      What is the "moral" of all this discussion of language diversity and international languages? Although the number of languages has been considerably reduced over the past 500 years, there are still thousands of languages spoken and we have every reason to believe that many if not most of these will continue to be spoken in the future. For many--if not all--humans, the diversity of language is not merely an accident of history, but an end in itself, a part of our cultural heritage worthy of preservation. More and more people have begun to view the so-called curse of Babel as a blessing that enriches the world. In this same vein, language conservation projects have arisen among small nations in danger of losing their language.