Spoken language

(CC) Photo: Nick Thompson. Caption: Humans who can hear talk instinctively, as the conversation of these two men in Naples, Italy, shows.

Spoken language has two meanings. In one sense, it is any example of language produced using the articulatory organs (the lungs, vocal folds, mouth and so on), or intended for production by those organs. In another sense, it may refer to the entire act of communicating verbally: what people mean or intend, the words they use, their accent, their intonation, and anything else that might be found in speech rather than in other forms of expression.

Spoken language contrasts with both sign language and written language. While a sign language is a language in its own right, written language is a way of recording a language, usually one that is also spoken. Signed and spoken languages are therefore two instances of language itself, with neither prioritised over the other. Sign languages have the same natural origins as spoken languages, and the same grammatical complexity, but use the hands, arms and face rather than the organs of speech.

Spoken versus written language in linguistics

When examining language that may be either spoken or written, linguists generally hold that more fundamental insights into the nature of language come from analysing natural, spontaneous speech; the written word is at best an incomplete representation of a linguistic system. One reason is that written language usually develops as a way of recording spoken language rather than as a separate system; indeed, many community languages, such as Pirahã in Brazil, are unwritten. Another is that children acquire their first language(s) through speech or signing, never solely through writing, so a focus on written language ignores linguistic development prior to literacy. A further reason is that writing systems typically ignore many features of spoken language: the English alphabet does not show stress (rebel, out of context, could be the verb or the noun), and the Japanese mora-based systems do not record pitch accent.[1]

Although computers with the appropriate equipment and software can produce artificial speech, such synthesised output generally falls outside definitions of 'spoken language'.

Physical characteristics of spoken language

To build telecommunications systems that convey spoken language, engineers must know the characteristics of speech and the requirements for intelligibility. Telephones operate on the assumption that intelligible speech requires a channel with a nominal analog bandwidth of 4 kHz; most of the speech energy important for understanding lies between roughly 300 Hz and 3.4 kHz.
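
As an illustration, the sketch below (Python with NumPy and SciPy, both assumed available; the function name telephone_band is hypothetical) band-limits a signal to this nominal telephone passband. Sampling at 8 kHz is the minimum (Nyquist) rate for a 4 kHz channel.

import numpy as np
from scipy.signal import butter, lfilter

FS = 8000                    # 8 kHz sampling: the Nyquist rate for a 4 kHz channel
LOW, HIGH = 300.0, 3400.0    # approximate telephone passband in Hz

def telephone_band(signal, fs=FS):
    # 4th-order Butterworth band-pass filter covering 300 Hz to 3.4 kHz
    b, a = butter(4, [LOW, HIGH], btype="bandpass", fs=fs)
    return lfilter(b, a, signal)

# Example: band-limit one second of white noise standing in for speech.
rng = np.random.default_rng(0)
narrowband = telephone_band(rng.standard_normal(FS))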

Normal conversation requires no more than about 160-200 milliseconds of delay from sender to receiver. Understandability drops as the delay before a speaker can respond approaches 300-400 milliseconds, and conversation essentially stops being interactive with delays in excess of 600 milliseconds. If, however, the participants know there is a technical reason for a long delay, such as speaking with astronauts on the Moon, they become much more tolerant of it.
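
The Moon example can be checked with simple arithmetic: a radio signal travelling at the speed of light over the mean Earth-Moon distance takes well over a second each way, far beyond the interactivity thresholds above. A minimal sketch in Python, using the commonly quoted mean distance:

SPEED_OF_LIGHT_KM_S = 299792.458   # speed of light in km/s
MEAN_EARTH_MOON_KM = 384400        # mean Earth-Moon distance in km

one_way_s = MEAN_EARTH_MOON_KM / SPEED_OF_LIGHT_KM_S
print(f"One-way delay: {one_way_s * 1000:.0f} ms")                       # about 1282 ms
print(f"Minimum wait before a reply arrives: {2 * one_way_s * 1000:.0f} ms")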

Footnotes

  1. For example, hashi, written はし for the moras ha and shi in hiragana, can mean 'chopsticks' or 'bridge'. The pitch contour over the two moras distinguishes the words in many varieties of Japanese, including the standard form. In writing, the only way to show the difference outside of context is to use kanji (Chinese-derived characters): 橋 is 'bridge' and 箸 is 'chopsticks'. With another word following, a third meaning is possible: 'edge' (端).

See also