American English and German AI, AU, observed in cognates such as Wein/wine and Haus/house, are usually treated on a par and represented with the same initial vowel (cf. [ai], [au] for Am. Engl. and German [1]). Yet acoustic measurements indicate differences: the relevant trajectories characteristically cross in Am. Engl. but not in German. These data may indicate consistency with the same initial target for these diphthongs in German, supporting the choice of the same symbol /a/ in phonemic representation, as opposed to distinct targets (and distinct initial phonemes) in American English.
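The crossing criterion used in the abstract above can be made operational: given two formant trajectories sampled at the same time points, they cross exactly when the sign of their difference changes somewhere along the way. The following sketch is illustrative only; the function name and the numbers are assumptions, not measured data from the study.

```python
def trajectories_cross(traj_a, traj_b):
    """Return True if two equally sampled trajectories cross, i.e. the
    sign of their pointwise difference changes at some sample."""
    diffs = [a - b for a, b in zip(traj_a, traj_b)]
    signs = [d > 0 for d in diffs if d != 0]  # drop exact touch points
    return any(s1 != s2 for s1, s2 in zip(signs, signs[1:]))

# Hypothetical F2 values (Hz) at three time points, for illustration only:
f2_ai = [1200, 1500, 1900]  # rising second formant, as in [ai]
f2_au = [1400, 1100, 900]   # falling second formant, as in [au]
print(trajectories_cross(f2_ai, f2_au))  # True: the trajectories cross
```

The same test applied to trajectories that keep a constant ordering, as the abstract reports for German, returns False.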
The goal of this study was to evaluate invariance vs. variability in both articulation and acoustics of speech production units. To keep the interaction of controlled variables manageable, only a very simple subrange of speech productions was studied. Three different vowel qualities and six different consonants were examined in a VCV sequence embedded in an utterance. Besides coarticulation, vocal effort was a further perturbing factor occurring in natural speech. The set of consonants comprised various manners of articulation (stop, fricative, nasal, lateral), all produced at virtually the same place of articulation, viz. (post-)alveolar. The range of vowel environments /i:/, /e:/, /a:/ was selected for differences in height, in order to vary coarticulatory effects between the segments. Utterances were produced at two different volume levels, viz. normal and loud speech. Experiments by others have demonstrated that higher speech volume is not simply realized as a raised sound pressure level or as raised intensity. For loud speech a number of different correlates were observed, such as raised subglottal pressure (see Ladefoged/McKinney 1963), raised fundamental frequency, raised first formant, and changed segmental durations (e.g. Traunmüller/Eriksson 2000). Furthermore, an effect on jaw height was observed in vowels: in loud speech the jaw takes a lower position during vowel production. In earlier studies, results have been presented for either articulatory (Schulman 1989) or acoustic changes (Traunmüller/Eriksson 2000) associated with higher volume. The present study examines effects of higher volume level on vowels as well as on consonants, in the articulatory as well as the acoustic channel. Data from six German speakers (5 male, 1 female) were recorded and analyzed. In the articulatory channel, jaw and tongue-tip movements were analyzed; in the acoustic domain, segmental characteristics such as formants, duration, intensity, and fundamental frequency.
The main results can be described as follows:
- Jaw height in vowels depends on vowel height; in vowel production in loud speech the jaw is lowered significantly.
- Jaw height in consonants depends on the type of consonant (very high for /s/, /ʃ/, /t/, fairly low for /n/, /l/). Speaking at a higher volume level does not have a significant effect on jaw height during (post-)alveolar consonant production; a coarticulatory effect of vowel context is mainly found with /n/ and /l/.
- In loud speech jaw gestures have higher amplitude.
- Acoustic segmental duration changes: vowels are lengthened and consonants are shortened.
- Fundamental frequency in vowel segments is raised significantly.
- In all vowels the first formant is raised.
- The second formant of the non-front vowel /a:/ is raised.
This work has demonstrated that jaw articulation in a number of alveolar consonants is remarkably precise and that motor equivalence plays only a minor role. Moreover, it has been shown that, in the face of the generally larger variability of acoustic and articulatory parameters, the results are best considered in terms of perceptual invariants. The findings also substantiate the complexity of articulatory and acoustic reorganisation in loud speech.
The Partitur Format at BAS
(1997)
Most spoken language resources are produced and disseminated together with symbolic information relating to the speech signal. These are, for instance, orthographic transcripts, and labeling and segmentation on the phonological, phonetic, prosodic, or phrasal level. Most of the known formats for these symbolic data are defined in a 'closed form' that is not flexible enough to allow simple and platform-independent processing and easy extensions.
At the Bavarian Archive for Speech Signals (BAS) a new format has been developed and used over the last few years that shows some significant advantages over other existing formats. This paper describes the basic principles behind this format, briefly discusses its advantages, and gives detailed definitions of the description levels used so far.
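The 'open form' the abstract alludes to is line-oriented: every record is a tier key followed by a payload, so a reader can process the tiers it knows and pass unknown ones through unchanged. The snippet below is a minimal sketch of that principle only, assuming a simplified layout; the actual Partitur Format defines specific header and tier keys and payload syntax not modeled here.

```python
def parse_tiers(text):
    """Group the lines of a simplified Partitur-style file by tier key.
    Each line has the form 'KEY: payload'; unknown keys are kept as-is,
    which is what makes such a format easy to extend."""
    tiers = {}
    for line in text.splitlines():
        if ":" not in line:
            continue  # skip blank or malformed lines
        key, payload = line.split(":", 1)
        tiers.setdefault(key.strip(), []).append(payload.strip())
    return tiers

# Hypothetical example content, for illustration only:
sample = """LHD: Partitur 1.3
ORT: 0 heute
ORT: 1 ist
MAU: 0 1199 0 h"""
tiers = parse_tiers(sample)
print(tiers["ORT"])  # ['0 heute', '1 ist']
```

Because parsing never depends on a fixed tier inventory, adding a new description level amounts to adding lines with a new key.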
Analyses of jaw movement (obtained by electromagnetic articulography) and acoustics show that loud speech is an intricate phenomenon. Besides involving higher intensity and subglottal pressure, it affects jaw movements as well as fundamental frequency and especially first formants. It is argued that all these effects serve the purpose of enhancing perceptual salience.
The vowel quality in some diphthongs of Swabian (an Upper German dialect) was determined by measurement of first and second formant values. A minimal contrast could be shown between two different diphthong qualities […], where for Standard German only one is assumed, viz. /ai/. The two diphthong qualities differ only slightly in onset and offset vowel quality, so a better understanding of their relationship was expected from an examination of their dynamic aspects. Our preliminary results suggest that there is indeed a difference in the temporal structure of the two diphthongs.
If more than one articulator is involved in the execution of a phonetic task, then the individual articulators have to be temporally coordinated with each other in a lawful manner. The present study aims at analyzing tongue-jaw cohesion in the temporal domain for the German coronal consonants /s, ʃ, t, d, n, l/, i.e., consonants produced with the same set of articulators (the tongue blade and the jaw) but differing in manner of articulation. The stability of the obtained interaction patterns is evaluated by varying the degree of vocal effort: comfortable and loud. Tongue and jaw movements of five speakers of German were recorded by means of electromagnetic midsagittal articulography (EMMA) during /aCa/ sequences. The results indicate that (1) tongue-jaw coordination varies with manner of articulation, i.e., a later onset and offset of the jaw target for the stops compared to the fricatives, the nasal, and the lateral; (2) the obtained patterns are stable across vocal effort conditions; (3) the sibilants are produced with smaller standard deviations for latencies and target positions; and (4) adjustments to the lower jaw positions during the surrounding vowels in loud speech occur during the closing and opening movement intervals and not the consonantal target phases.
This paper outlines the generation process of a specific computational linguistic representation termed the Multilingual Time Map, conceptually a multi-tape finite-state transducer encoding linguistic data at different levels of granularity. The first component acquires phonological data from syllable-labeled speech data, the second component defines feature profiles, the third component generates feature hierarchies and augments the acquired data with the defined feature profiles, and the fourth component displays the Multilingual Time Map as a graph.
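The core idea of a multi-tape transducer, as described above, is that each arc carries one symbol per tape, so a single traversal yields time-aligned readings at several levels of granularity. The sketch below illustrates only that idea; the states, tape names, and labels are made up for illustration and are not taken from the paper.

```python
# Each arc: (source state, target state, one label per tape).
# Tapes here (hypothetical): phone, syllable position, coarse feature.
ARCS = [
    (0, 1, ("h",  "onset",   "consonantal")),
    (1, 2, ("OY", "nucleus", "vocalic")),
    (2, 3, ("t",  "coda",    "consonantal")),
]

def accepting_path(arcs, start, end):
    """Collect the multi-tape labels along the arcs from start to end
    (assumes the arcs form a single linear path, as in this sketch)."""
    labels, state = [], start
    while state != end:
        _, dst, label = next(a for a in arcs if a[0] == state)
        labels.append(label)
        state = dst
    return labels

def project(labels, tape):
    """Read off one tape from a sequence of multi-tape labels."""
    return [label[tape] for label in labels]

path = accepting_path(ARCS, 0, 3)
print(project(path, 0))  # phone tape:    ['h', 'OY', 't']
print(project(path, 1))  # syllable tape: ['onset', 'nucleus', 'coda']
```

Projecting different tapes from the same path is what lets one structure serve several levels of granularity at once.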