User:Wugapodes/Phonetics


Phonetics (pronounced /fəˈnɛtɪks/) is a branch of linguistics that studies the smallest units of human language, called phones. In oral languages, these are the sounds which make up the words and sentences heard by a listener; in sign languages, they are the parameters that make up a sign seen by the viewer. As a field of study it is primarily concerned with the physiological aspects of speech, such as how the tongue moves to produce certain sounds or the way the brain interprets certain signs. The related field of phonology, by contrast, is concerned with the abstract, grammatical characterization of systems of sounds or signs.

There are three main branches of phonetics: acoustic, articulatory, and auditory. Articulatory phonetics is the study of the speech organs and how they are used to produce speech; instruments like the electromagnetic articulograph track the movements of the tongue and lips, while motion capture systems track movements of the hands and arms. Acoustic phonetics studies the physical properties of the signal produced by the speech organs. Auditory phonetics covers the way humans process and understand speech.


Voicing and Phonation moved to mainspace

Voicing and phonation types

An important factor in describing the production of most speech sounds is the state of the glottis, the space between the vocal folds. Muscles inside the larynx adjust the vocal folds to produce and modify vibration patterns for different sounds. Two canonical examples are modal voicing, in which the vocal folds vibrate, and voicelessness, in which they do not. Modal voiced and voiceless consonants are extremely common across languages, and all languages use both phonation types to some degree. Consonants can be either voiced or voiceless, though some languages do not make distinctions between them for certain consonants.[a] No language is known to have a phonemic voicing contrast for vowels, though there are languages, like Japanese, where vowels are produced as voiceless in certain contexts. Other positions of the glottis, such as breathy and creaky voice, are used in a number of languages, like Jalapa Mazatec, to contrast phonemes, while in other languages, like English, they exist allophonically. Phonation types are modeled on a continuum of glottal states from completely open (voiceless) to completely closed (glottal stop). Modal voice, the optimal position for vibration and the phonation type most used in speech, lies between these two extremes. If the glottis is slightly wider, breathy voice occurs, while bringing the vocal folds closer together results in creaky voice.[1]

There are a number of ways to determine whether a segment is voiced, the simplest being to feel the larynx during speech and note when vibrations are felt. More precise measurements can be obtained through acoustic analysis of a spectrogram or spectral slice. In spectrographic analysis, voiced segments show a voicing bar, a region of high acoustic energy in the low frequencies.[2] In examining a spectral slice (the acoustic spectrum at a single point in time), a model of the vowel being pronounced can be used to reverse the filtering effect of the mouth, producing an estimate of the spectrum of the glottis. A computational model of the unfiltered glottal signal is then fitted to the inverse-filtered acoustic signal to determine the characteristics of the glottis.[3] Visual analysis is also available using specialized medical equipment such as ultrasound and endoscopy.[2][b]
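As a rough illustration of these acoustic methods, the sketch below detects the low-frequency voicing bar in a spectrogram and applies a simple linear-predictive inverse filter to approximate the glottal signal. It is a minimal sketch, not the procedure used in the cited studies; the file name, frequency band, threshold, and model order are illustrative assumptions.

    # Minimal sketch: voicing-bar detection and LPC inverse filtering.
    # "vowel.wav", the 300 Hz band, the 0.3 threshold, and the LPC order
    # are illustrative assumptions, not values from the literature.
    import numpy as np
    import librosa
    import scipy.signal

    y, sr = librosa.load("vowel.wav", sr=16000)  # hypothetical recording

    # Voiced segments concentrate energy at low frequencies (the voicing bar).
    f, t, Sxx = scipy.signal.spectrogram(y, fs=sr, nperseg=512)
    low_energy = Sxx[f < 300].sum(axis=0)
    voiced_frames = low_energy / (Sxx.sum(axis=0) + 1e-12) > 0.3

    # Estimate the vocal-tract filter by linear prediction, then undo it
    # to approximate the unfiltered glottal source.
    order = 2 + sr // 1000           # common rule of thumb for LPC order
    a = librosa.lpc(y, order=order)  # all-pole vocal-tract model
    glottal_estimate = scipy.signal.lfilter(a, [1.0], y)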

For the vocal folds to vibrate, they must be in the proper position and there must be air flowing through the glottis.[4] The normal phonation pattern used in typical speech is modal voice, where the vocal folds are held close together with moderate tension. The vocal folds vibrate as a single unit periodically and efficiently with a full glottal closure and no aspiration.[5] If they are pulled farther apart, they do not vibrate and so produce voiceless phones. If they are held firmly together they produce a glottal stop.[1]

If the vocal folds are held slightly farther apart than in modal voicing, they produce phonation types like breathy voice (or murmur) and whispery voice. The tension across the vocal ligaments (vocal cords) is less than in modal voicing, allowing air to flow more freely. Both breathy voice and whispery voice exist on a continuum, loosely characterized as going from the more periodic waveform of breathy voice to the more noisy waveform of whispery voice. Acoustically, both tend to dampen the first formant, with whispery voice showing more extreme deviations.[6]

Holding the vocal folds more tightly together results in creaky voice. The tension across the vocal folds is less than in modal voice, but they are held tightly together, resulting in only the ligaments of the vocal folds vibrating.[c] The pulses are highly irregular, with low pitch and amplitude.[7]

Voice quality variation

Places of articulation

Passive and active places of articulation: (1) Exo-labial; (2) Endo-labial; (3) Dental; (4) Alveolar; (5) Post-alveolar; (6) Pre-palatal; (7) Palatal; (8) Velar; (9) Uvular; (10) Pharyngeal; (11) Glottal; (12) Epiglottal; (13) Radical; (14) Postero-dorsal; (15) Antero-dorsal; (16) Laminal; (17) Apical; (18) Sub-apical or sub-laminal.

Articulations take place in particular parts of the mouth. They are described by the part of the vocal tract that constricts the airflow and by where in the mouth that constriction occurs. In most languages constrictions are made with the lips and tongue. Constrictions made by the lips are called labials. The tongue can make constrictions with many different parts, broadly classified into coronal and dorsal places of articulation. Coronal articulations are made with either the tip or blade of the tongue, while dorsal articulations are made with the back of the tongue.[8] These divisions are not sufficient for distinguishing and describing all speech sounds.[8] For example, in English the sounds [s] and [ʃ] are both voiceless coronal fricatives, but they are produced in different places of the mouth. Additionally, that difference in place can result in a difference of meaning, as in "sack" and "shack". To account for this, articulations are further divided based upon the area of the mouth in which the constriction occurs.[9]

Labial consonants

Articulations involving the lips can be made in three different ways: with both lips (bilabial); with one lip and the teeth (labiodental); with the tongue and the upper lip (linguolabial).[10] Depending on the definition used, some or all of these kinds of articulations may be categorized into the class of labial articulations. Ladefoged and Maddieson (1996) propose that linguolabial articulations be considered coronals rather than labials, but make clear that this grouping, like all groupings of articulations, is equivocal and not clearly delineated.[11] Despite this, linguolabials are included in this section as labials given their use of the lips as a place of articulation.

Bilabial consonants are made with both lips. In producing these sounds the lower lip moves farthest to meet the upper lip, which also moves down slightly,[12] though in some cases the force from air moving through the aperture (opening between the lips) may cause the lips to separate faster than they can come together.[13] Unlike most other articulations, both articulators are made from soft tissue, and so bilabial stops are more likely to be produced with incomplete closures than articulations involving hard surfaces like the teeth or palate. Bilabial stops are also unusual in that an articulator in the upper section of the vocal tract actively moves downwards.[14]

Labiodental consonants are made by the lower lip rising to the upper teeth. Labiodental consonants are most often fricatives, with some debate as to whether true labiodental plosives occur in any natural language.[15] There are reports of languages that have labiodental plosives, including Zulu,[16] Tonga,[17] and Shubi.[15] Labiodental affricates are reported in Tsonga,[18] which would require the stop portion of the affricate to be a labiodental stop, though Ladefoged and Maddieson (1996) raise the possibility that labiodental affricates involve a bilabial closure like "pf" in German. Unlike plosives and affricates, labiodental nasals are common across languages.[19]

Linguolabial consonants are made with the blade of the tongue approaching or contacting the upper lip. Like in bilabial articulations, the upper lip moves slightly towards the more active articulator. Articulations in this group do not have their own symbols in the International Phonetic Alphabet; rather, they are formed by combining an apical symbol with a diacritic, implicitly placing them in the coronal category.[20][21] They exist in a number of languages indigenous to Vanuatu such as Tangoa, though early descriptions referred to them as apical-labial consonants. The name "linguolabial" was suggested by Floyd Lounsbury given that they are produced with the blade rather than the tip of the tongue.[21]

Coronal consonants

Coronal consonants are made with the tip or blade of the tongue and, because of the agility of the front of the tongue, represent variety not only in place but in the posture of the tongue. The coronal places of articulation are dental, alveolar, and post-alveolar. Tongue postures using the tip of the tongue can be apical, if the top of the tongue tip is used, or sub-apical, if the tongue tip is curled back. A consonant made with the blade of the tongue is a laminal consonant. Coronals are unique as a group in that they can have any manner of articulation.[20][22] Australian languages are well known for the large number of coronal contrasts exhibited within and across languages in the region.[23]

Dental consonants are made with the tip or blade of the tongue and the upper teeth. They are divided into two groups based upon the part of the tongue used to produce them. Apical dental consonants are produced with the tongue tip touching the teeth while interdental consonants are produced with the blade of the tongue as the tip of the tongue sticks out in front of the teeth. Languages can use one, the other, or both, though no language is known to use both contrastively. Alveolar consonants are made with the tip or blade of the tongue at the alveolar ridge just behind the teeth and can similarly be apical or laminal.[24]

Crosslinguistically, dental consonants and alveolar consonants are frequently contrasted, leading to a number of generalizations of crosslinguistic patterns. The different places of articulation also tend to be contrasted in the part of the tongue used to produce them: most languages with dental stops have laminal dentals, while languages with alveolar stops usually have apical alveolars. Languages rarely have two consonants in the same place with a contrast in laminality, though Taa (ǃXóõ) is a counterexample to this pattern.[25] If a language has only one of a dental stop or an alveolar stop, it will usually be laminal if it is a dental stop, and apical if it is an alveolar stop, though Temne and Bulgarian,[26] for example, do not follow this pattern.[27] If a language has both an apical and a laminal stop, then the laminal stop is more likely to be affricated, as in Isoko, though Dahalo shows the opposite pattern, with alveolar stops being more affricated.[28]

Retroflex consonants have a number of different definitions depending on whether the position of the tongue or the position on the roof of the mouth is given prominence, though in general they represent a group of articulations in which the tip of the tongue is curled upwards to some degree. Retroflex articulations can thus occur in a number of different locations on the roof of the mouth, including alveolar, post-alveolar, and palatal regions. If the underside of the tongue tip makes contact with the roof of the mouth, the sound is sub-apical, though apical post-alveolar sounds are also described as retroflex.[29] Sub-apical retroflex stops are commonly found in Dravidian languages, and in some languages indigenous to the southwest United States the contrastive difference between dental and alveolar stops is a slight retroflexion of the alveolar stop.[30] Acoustically, retroflexion tends to affect the higher formants.[30]

Articulations taking place just behind the alveolar ridge, known as post-alveolar consonants, have been referred to using a number of different terms. Apical post-alveolar consonants are often called retroflex, while laminal articulations are sometimes called palato-alveolar;[31] in the Australianist literature, these laminal stops are often described as 'palatal' though they are produced further forward than the region typically described as palatal.[23] Because of individual anatomical variation, the precise articulation of palato-alveolar stops (and coronals in general) can vary widely within a speech community.[32]

Dorsal consonants

Dorsal consonants are those consonants made using the tongue body rather than the tip or blade.

Palatal consonants are made using the tongue body against the hard palate on the roof of the mouth. They are frequently contrasted with velar or uvular consonants, though it is rare for a language to contrast all three simultaneously, with Jaqaru as a possible example of a three-way contrast.[33]

Velar consonants are made using the tongue body against the velum. They are extremely common crosslinguistically; almost all languages have a velar stop. Because both velars and vowels are made using the tongue body, they are highly affected by coarticulation with vowels and can be produced as far forward as the hard palate or as far back as the uvula. These variations are typically divided into front, central, and back velars in parallel with the vowel space.[34] They can be hard to distinguish phonetically from palatal consonants, though they are produced slightly behind the area of prototypical palatal consonants.[35]

Uvular consonants are made by the tongue body contacting or approaching the uvula. They are rare, occurring in an estimated 19 percent of languages, and large regions of the Americas and Africa have no languages with uvular consonants. In languages with uvular consonants, stops are most frequent, followed by continuants (including nasals).[36]

Radical consonants

Manners of articulation

The manner of articulation describes the way in which airflow is constricted.

Vowels

Perturbation theory

Height

Centrality

Rounding

Source-filter theory

Models of phonation

Anatomy

Anatomy text moved to mainspace

Of the vocal organs

Speech sounds are generally produced by the modification of an airstream exhaled from the lungs. The respiratory organs used to create and modify airflow are divided into three regions: the vocal tract (supralaryngeal), the larynx, and the subglottal system. The airstream can be either egressive (out of the vocal tract) or ingressive (into the vocal tract). In pulmonic sounds, the airstream is produced by the lungs in the subglottal system and passes through the larynx and vocal tract. Glottalic sounds use an airstream created by movements of the larynx without airflow from the lungs. Clicks or lingual ingressive sounds create an airstream using the tongue.

Pulmonary and subglottal system

The lungs are the engine that drives nearly all speech production, and their importance in phonetics is due to their creation of pressure for pulmonic sounds. The most common kinds of sound across languages are pulmonic egressive, where air is exhaled from the lungs.[37] The opposite is possible, though no language is known to use pulmonic ingressive sounds as phonemes.[38] A number of genetically and geographically diverse languages, such as Swedish, use them for paralinguistic articulations such as affirmations.[39] Both egressive and ingressive sounds rely on holding the vocal folds in a particular posture and using the lungs to draw air across the vocal folds so that they either vibrate (voiced) or do not vibrate (voiceless).[37] Pulmonic articulations are restricted by the volume of air that can be exhaled in a given respiratory cycle, known as the vital capacity.

The lungs are used to maintain two kinds of pressure simultaneously in order to produce and modify phonation. To produce phonation at all, the lungs must maintain a pressure 3–5 cm H2O higher than the pressure above the glottis. However, small and fast adjustments are made to the subglottal pressure to modify speech for suprasegmental features like stress. A number of thoracic muscles are used to make these adjustments. Because the lungs and thorax stretch during inhalation, the elastic forces of the lungs alone are able to produce pressure differentials sufficient for phonation at lung volumes above 50 percent of vital capacity.[40] Above 50 percent of vital capacity, the respiratory muscles are used to "check" the elastic forces of the thorax to maintain a stable pressure differential. Below that volume, they are used to increase the subglottal pressure by actively exhaling air.
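As a worked conversion, using the equivalence given below for glottal pressures (1 cm H2O = 98.0665 Pa):

    3–5 cm H2O ≈ 294–490 Pa

that is, roughly 0.3–0.5 percent of standard atmospheric pressure (101.325 kPa).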

During speech, the respiratory cycle is modified to accommodate both linguistic and biological needs. Exhalation, usually about 60 percent of the respiratory cycle at rest, is increased to about 90 percent of the respiratory cycle. Because metabolic needs are relatively stable, the total volume of air moved in most cases of speech remains about the same as in quiet tidal breathing.[41] Increases in speech intensity of 18 dB (the level of a loud conversation) have relatively little impact on the volume of air moved. Because their respiratory systems are not as developed as those of adults, children tend to use a larger proportion of their vital capacity than adults, with deeper inhalations.[42]
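For context, the decibel scale is logarithmic, so an 18 dB increase corresponds to a power ratio of 10^(18/10) ≈ 63: the acoustic power of a loud conversation is roughly 63 times that of quieter speech, yet the airflow demands change little.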

The larynx

The positions of the vocal folds are achieved by movement of the arytenoid cartilages.[43] The intrinsic laryngeal muscles are responsible for moving the arytenoid cartilages as well as modulating the tension of the vocal folds.[44] If the vocal folds are not close enough or not tense enough, they will vibrate sporadically (described as creaky or breathy voice depending on the degree) or not at all (voiceless sounds). Even if the vocal folds are in the correct position, there must be air flowing across them or they will not vibrate. The difference in pressure across the glottis required for voicing is estimated at 1–2 cm H2O (about 98–196 pascals).[4] The pressure differential can fall below levels required for phonation either because of an increase in pressure above the glottis (supraglottal pressure) or a decrease in pressure below the glottis (subglottal pressure). The subglottal pressure is maintained by the respiratory muscles. Supraglottal pressure, with no constrictions or articulations, is about equal to atmospheric pressure. However, because articulations, especially consonants, represent constrictions of the airflow, the pressure in the cavity behind those constrictions can increase, resulting in a higher supraglottal pressure.[45]

Of the ear

Articulatory models

Articulatory models moved to mainspace

When producing speech, the articulators move through and contact particular locations in space, resulting in changes to the acoustic signal. Some models of speech production take this as the basis for modeling articulation in a coordinate system, which may be internal to the body (intrinsic) or external (extrinsic). Intrinsic coordinate systems model the movement of articulators as positions and angles of joints in the body. Intrinsic coordinate models of the jaw often use two to three degrees of freedom representing translation and rotation. These face issues with modeling the tongue, which, unlike the joints of the jaw and arms, is a muscular hydrostat, like an elephant trunk, and lacks joints.[46] Because of the different physiological structures, movement paths of the jaw are relatively straight lines during speech and mastication, while movements of the tongue follow curves.[47]

Straight-line movements have been used to argue that articulations are planned in extrinsic rather than intrinsic space, though extrinsic coordinate systems also include acoustic coordinate spaces, not just physical coordinate spaces.[46] Models which assume movements are planned in extrinsic space run into an inverse problem of explaining the muscle and joint locations which produce the observed path or acoustic signal. The arm, for example, has seven degrees of freedom and 22 muscles, so multiple different joint and muscle configurations can lead to the same final position. For models of planning in extrinsic acoustic space, the same one-to-many mapping problem applies as well, with no unique mapping from physical or acoustic targets to the muscle movements required to achieve them. Concerns about the inverse problem may be exaggerated, however, as speech is a highly learned skill using neurological structures which evolved for the purpose.[48]
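A minimal sketch of this one-to-many mapping, using a planar two-joint arm rather than the seven-degree-of-freedom arm described above (link lengths and the target point are hypothetical): the "elbow-up" and "elbow-down" configurations reach the same fingertip position, so the endpoint alone cannot determine the joint angles.

    # Minimal sketch of the inverse problem: a planar two-joint arm.
    # Link lengths and the target point are hypothetical values.
    import numpy as np

    L1, L2 = 0.3, 0.25  # upper-arm and forearm lengths in meters (assumed)

    def fingertip(shoulder, elbow):
        """Forward kinematics: joint angles (radians) -> fingertip (x, y)."""
        x = L1 * np.cos(shoulder) + L2 * np.cos(shoulder + elbow)
        y = L1 * np.sin(shoulder) + L2 * np.sin(shoulder + elbow)
        return np.array([x, y])

    # Two distinct joint configurations ("elbow up" and "elbow down")
    # reach the same target, illustrating the one-to-many mapping.
    target = np.array([0.35, 0.25])
    d = np.linalg.norm(target)
    elbow = np.arccos((d**2 - L1**2 - L2**2) / (2 * L1 * L2))
    base = np.arctan2(target[1], target[0])
    for e in (elbow, -elbow):
        s = base - np.arctan2(L2 * np.sin(e), L1 + L2 * np.cos(e))
        print(np.round(np.degrees([s, e]), 1), fingertip(s, e))

Both printed configurations yield the same fingertip position (0.35, 0.25) with different joint angles; a full arm, with its additional degrees of freedom and muscles, admits far more such solutions.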

The equilibrium-point model proposes a resolution to the inverse problem by arguing that movement targets be represented as the position of the muscle pairs acting on a joint.[d] Importantly, muscles are modeled as springs, and the target is the equilibrium point for the modeled spring-mass system. By using springs, the equilibrium-point model is able to easily account for compensation and response when movements are disrupted. They are considered a coordinate model because they assume that these muscle positions are represented as points in space, equilibrium points, where the spring-like action of the muscles converges.[49][50]
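A minimal sketch of the idea, with hypothetical parameters: the controller specifies only the equilibrium point (the spring's resting position), and the spring-mass dynamics generate the movement, compensating for a mid-movement perturbation without replanning.

    # Minimal sketch of equilibrium-point control: the target is the
    # resting position of a damped spring. Stiffness, damping, mass,
    # and the perturbation are all hypothetical values.
    k, c, m = 50.0, 10.0, 1.0   # stiffness, damping, mass (assumed)
    dt, steps = 0.001, 2000
    target = 1.0                 # equilibrium point set by the controller

    x, v = 0.0, 0.0
    for i in range(steps):
        if i == 1000:            # external perturbation mid-movement
            x -= 0.3             # the system recovers without replanning
        a = (-k * (x - target) - c * v) / m
        v += a * dt
        x += v * dt

    print(round(x, 3))  # converges to the target (~1.0)

Because the restoring force depends only on the distance from the equilibrium point, the same target specification handles both the unperturbed and the perturbed movement, which is how the model accounts for compensation.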

Gestural approaches to speech production propose that articulations are represented as movement patterns rather than particular coordinates to hit. The minimal unit is a gesture which represents a group of "functionally equivalent articulatory movement patterns that are actively controlled with reference to a given speech-relevant goal (e.g., a bilabial closure)."[51] These groups represent coordinative structures or "synergies" which view movements not as individual muscle movements but as task-dependent groupings of muscles which work together as a single unit.[52][53] This reduces the degrees of freedom in articulation planning, a problem especially in intrinsic coordinate models, which allows for any movement that achieves the speech goal, rather than encoding the particular movements in the abstract representation. Coarticulation is well described by gestural models as the articulations at faster speech rates can be explained as composites of the independent gestures at slower speech rates.[54]

Internal model

Acoustics

Aerodynamics

Vowels, liquids, and glides

Nasals

Fricatives

Plosives

Perception of sounds

Loudness

Frequency

Phonetic transcription

Phonetics of sign languages

Subfields

Articulatory

The sign CREPE in French Sign Language, shown here, has the left hand (shaped like an L) as an active articulator, while the right hand (open flat) is a passive articulator.

The branch of phonetics which deals with the ways in which sounds and signs are produced is known as articulatory phonetics. The production of a speech sound or sign is an "articulation", and the body parts involved in the production are called "articulators". Articulators can be active or passive depending on the role they play in articulation. Active articulators are those which move during articulation, while passive articulators are those involved only when an active articulator touches them.[55] For oral languages, various parts of the mouth are common articulators, while the hands and face are common articulators in sign languages. While many sounds and signs can be produced by the articulators, as a branch of linguistics, articulatory phonetics is concerned only with those which are actually used in speech, known as "speech sounds".

Acoustic

Auditory and linguistic

Sociophonetics

Applied phonetics

Speech recognition

Speech synthesis

Forensics

Pronunciation

See also

Notes

  1. ^ Hawaiian, for example, does not contrast voiced and voiceless plosives.
  2. ^ See #Models of phonation for further information on acoustic modeling.
  3. ^ See #The larynx for further information on anatomy of phonation.
  4. ^ See Feldman (1966) for the original proposal.

Citations

  1. ^ a b Gordon & Ladefoged 2001.
  2. ^ a b Dawson & Phelan 2016.
  3. ^ Gobl & Ní Chasaide 2010, p. 388, et seq.
  4. ^ a b Ohala 1997, p. 1.
  5. ^ Gobl & Ní Chasaide 2010, p. 399.
  6. ^ Gobl & Ní Chasaide 2010, p. 400-401.
  7. ^ Gobl & Ní Chasaide 2010, p. 401.
  8. ^ a b Ladefoged 2001, p. 5.
  9. ^ Ladefoged & Maddieson 1996, p. 9.
  10. ^ Ladefoged & Maddieson 1996, p. 16.
  11. ^ Ladefoged & Maddieson 1996, p. 43.
  12. ^ Maddieson 1993.
  13. ^ Fujimura 1961.
  14. ^ Ladefoged & Maddieson 1996, p. 16-17.
  15. ^ a b Ladefoged & Maddieson 1996, p. 17.
  16. ^ Doke 1926.
  17. ^ Guthrie 1948, p. 61.
  18. ^ Baumbach 1987.
  19. ^ Ladefoged & Maddieson 1996, p. 17-18.
  20. ^ a b International Phonetic Association 2015.
  21. ^ a b Ladefoged & Maddieson 1996, p. 18.
  22. ^ Ladefoged & Maddieson 1996, p. 19-31.
  23. ^ a b Ladefoged & Maddieson 1996, p. 28.
  24. ^ Ladefoged & Maddieson 1996, p. 19-25.
  25. ^ Ladefoged & Maddieson 1996, p. 20,40-1.
  26. ^ Scatton 1984, p. 60.
  27. ^ Ladefoged & Maddieson 1996, p. 23.
  28. ^ Ladefoged & Maddieson 1996, p. 23-5.
  29. ^ Ladefoged & Maddieson 1996, p. 25, 27-8.
  30. ^ a b Ladefoged & Maddieson 1996, p. 27.
  31. ^ Ladefoged & Maddieson 1996, p. 27-8.
  32. ^ Ladefoged & Maddieson 1996, p. 32.
  33. ^ Ladefoged & Maddieson 1996, p. 35.
  34. ^ Ladefoged & Maddieson 1996, p. 33-34.
  35. ^ Keating & Lahiri 1993, p. 89.
  36. ^ Maddieson 2001.
  37. ^ a b Ladefoged 2001, p. 1.
  38. ^ Eklund 2008, p. 237.
  39. ^ Eklund 2008.
  40. ^ Seikel, Drumright & King 2016, p. 176.
  41. ^ Seikel, Drumright & King 2016, p. 171.
  42. ^ Seikel, Drumright & King 2016, p. 168-77.
  43. ^ Ladefoged 2001, p. 123.
  44. ^ Seikel, Drumright & King 2016, p. 222.
  45. ^ Chomsky & Halle 1968, p. 300-301.
  46. ^ a b Löfqvist 2010, p. 359.
  47. ^ Munhall, Ostry & Flanagan 1991, p. 299, et seq.
  48. ^ Löfqvist 2010, p. 360.
  49. ^ Bizzi 1992.
  50. ^ Löfqvist 2010, p. 361.
  51. ^ Saltzman & Munhall 1989.
  52. ^ Mattingly 1990.
  53. ^ Löfqvist 2010, p. 362-4.
  54. ^ Löfqvist 2010, p. 364.
  55. ^ Dawson & Phelan 2016, p. 93.

References

  • Bizzi, E.; Hogan, N.; Mussa-Ivaldi, F.; Giszter, S. (1992). "Does the nervous system use equilibrium-point control to guide single and multiple joint movements?". Behavioral and Brain Sciences. 15 (4): 603–13. doi:10.1017/S0140525X00072538. PMID 23302290.
  • Chomsky, Noam; Halle, Morris (1968). Sound Pattern of English. Harper and Row.
  • Dawson, Hope; Phelan, Michael, eds. (2016). Language Files: Materials for an Introduction to Linguistics (12th ed.). The Ohio State University Press. ISBN 978-0-8142-5270-3.
  • Eklund, Robert (2008). "Pulmonic ingressive phonation: Diachronic and synchronic characteristics, distribution and function in animal and human sound production and in human speech". Journal of the International Phonetic Association. 38 (3): 235–324. doi:10.1017/S0025100308003563. S2CID 146616135.
  • Fujimura, Osamu (1961). "Bilabial stop and nasal consonants: A motion picture study and its acoustical implications". Journal of Speech and Hearing Research. 4 (3): 233–47. doi:10.1044/jshr.0403.233. PMID 13702471.
  • Gobl, Christer; Ní Chasaide, Ailbhe (2010). "Voice source variation and its communicative functions". The Handbook of Phonetic Sciences (2nd ed.). pp. 378–424.
  • Gordon, Matthew; Ladefoged, Peter (2001). "Phonation types: a cross-linguistic overview". Journal of Phonetics. 29 (4): 383–406. CiteSeerX 10.1.1.232.7720. doi:10.1006/jpho.2001.0147.
  • Hardcastle, William; Laver, John; Gibbon, Fiona, eds. (2010). The Handbook of Phonetic Sciences (2nd ed.). Wiley-Blackwell. ISBN 978-1-405-14590-9.
  • Johnson, Keith (2011). Acoustic and Auditory Phonetics (3rd ed.). Wiley-Blackwell. ISBN 978-1-444-34308-3.
  • Keating, Patricia; Lahiri, Aditi (1993). "Fronted Velars, Palatalized Velars, and Palatals". Phonetica. 50 (2): 73–101. doi:10.1159/000261928. PMID 8316582. S2CID 3272781.
  • Ladefoged, Peter (2001). A Course in Phonetics (4th ed.). Boston: Thomson/Wadsworth. ISBN 978-1-413-00688-9.
  • Ladefoged, Peter; Maddieson, Ian (1996). The Sounds of the World's Languages. Oxford: Blackwell. ISBN 978-0-631-19815-4.
  • Löfqvist, Anders (2010). "Theories and Models of Speech Production". Handbook of Phonetic Sciences (2nd ed.). pp. 353–78.
  • Maddieson, Ian (1993). "Investigating Ewe articulations with electromagnetic articulography". Forschungsberichte des Instituts für Phonetik und Sprachliche Kommunikation der Universität München. 31: 181–214.
  • Mattingly, Ignatius (1990). "The global character of phonetic gestures" (PDF). Journal of Phonetics. 18 (3): 445–52. doi:10.1016/S0095-4470(19)30372-9.
  • Ohala, John (1997). "Aerodynamics of phonology". Proceedings of the Seoul International Conference on Linguistics. 92.
  • Saltzman, Elliot; Munhall, Kevin (1989). "Dynamical Approach to Gestural Patterning in Speech Production" (PDF). Ecological Psychology. 1 (4): 333–82. doi:10.1207/s15326969eco0104_2.
  • Scatton, Ernest (1984). A reference grammar of modern Bulgarian. Slavica. ISBN 978-0893571238.
  • Seikel, J. Anthony; Drumright, David; King, Douglas (2016). Anatomy and Physiology for Speech, Language, and Hearing (5th ed.). Cengage. ISBN 978-1-285-19824-8.


External links


Category:Phonetics