This paper outlines the generation process of a specific computational linguistic representation termed the Multilingual Time Map, conceptually a multi-tape finite state transducer encoding linguistic data at different levels of granularity. The first component acquires phonological data from syllable-labeled speech data; the second component defines feature profiles; the third component generates feature hierarchies and augments the acquired data with the defined feature profiles; and the fourth component displays the Multilingual Time Map as a graph.
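The multi-tape idea above can be illustrated with a minimal sketch. This is not the authors' implementation; all class and tape names are assumptions chosen for the example, and the three tapes (phoneme, syllable position, feature profile) merely stand in for "different levels of granularity":

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each arc of the transducer carries one symbol
# per tape, so a single path simultaneously encodes several levels
# of linguistic granularity.
@dataclass
class Arc:
    src: int
    dst: int
    labels: tuple  # one label per tape

@dataclass
class MultiTapeFST:
    n_tapes: int
    arcs: list = field(default_factory=list)
    start: int = 0
    finals: set = field(default_factory=set)

    def add_arc(self, src, dst, labels):
        assert len(labels) == self.n_tapes
        self.arcs.append(Arc(src, dst, labels))

    def paths(self):
        # Enumerate the label tuples along accepting paths
        # (sufficient for the acyclic case sketched here).
        def walk(state, acc):
            if state in self.finals:
                yield acc
            for a in self.arcs:
                if a.src == state:
                    yield from walk(a.dst, acc + [a.labels])
        yield from walk(self.start, [])

# Example: the syllable /ta:/ encoded on three tapes
fst = MultiTapeFST(n_tapes=3, finals={2})
fst.add_arc(0, 1, ("t", "onset", "plosive,alveolar"))
fst.add_arc(1, 2, ("a:", "nucleus", "vowel,open,long"))
print(list(fst.paths()))  # one accepting path with its three-tape labels
```

Displaying such a structure as a graph (the fourth component) then amounts to rendering states as nodes and multi-tape labels as edge annotations.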
The goal of this study was to evaluate invariance vs. variability in both the articulation and the acoustics of speech production units. To keep the interaction of controlled variables manageable, only a very simple subrange of speech productions was studied. Three different vowel qualities and six different consonants were examined in a VCV sequence embedded in an utterance. Besides coarticulation, vocal effort was a further factor of perturbation occurring in natural speech. The set of consonants comprised various manners of articulation (stop, fricative, nasal, lateral), all produced at virtually the same place of articulation, viz. (post-)alveolar. The range of vowel environments /i:/, /e:/, /a:/ was selected for differences in height, in order to vary coarticulatory effects between the segments. Utterances were produced at two different volume levels, viz. normal and loud speech. Experiments by others have demonstrated that higher speech volume is not simply realized as a raised sound pressure level or raised intensity. For loud speech a number of different correlates have been observed, such as raised subglottal pressure (see Ladefoged/McKinney 1963), raised fundamental frequency, a raised first formant, and changed segmental durations (e.g. Traunmüller/Eriksson 2000). Furthermore, an effect on jaw height was observed in vowels: in loud speech the jaw takes a lower position during vowel production. Earlier studies have presented results for either the articulatory (Schulman 1989) or the acoustic changes (Traunmüller/Eriksson 2000) associated with higher volume. The present study examines the effects of a higher volume level on vowels as well as on consonants, in the articulatory as well as the acoustic channel. Data from six German speakers (5 male, 1 female) were recorded and analyzed. In the articulatory channel, jaw and tongue-tip movements were analyzed; in the acoustic domain, segmental characteristics such as formants, duration, intensity, and fundamental frequency.
The main results can be described as follows:
- Jaw height in vowels depends on vowel height; in the vowel production of loud speech the jaw is lowered significantly.
- Jaw height in consonants depends on the type of consonant (very high for /s/, / /, /t/, fairly low for /n/ and /l/). Speaking at a higher volume level does not have a significant effect on jaw height during (post-)alveolar consonant production; a coarticulatory effect of vowel context is mainly found with /n/ and /l/.
- In loud speech, jaw gestures have a higher amplitude.
- Acoustic segmental duration changes: vowels are lengthened and consonants are shortened.
- Fundamental frequency in vowel segments is raised significantly.
- In all vowels the first formant is raised.
- The second formant of the non-front vowel /a:/ is raised.
This work has demonstrated that jaw articulation in a number of alveolar consonants is remarkably precise and that motor equivalence plays only a minor role. Moreover, it has been shown that, in the face of the generally larger variability of acoustic and articulatory parameters, the results are best considered in terms of perceptual invariants. The findings also substantiate the complexity of articulatory and acoustic reorganisation in loud speech.
Analyses of jaw movement (obtained by Electromagnetic Articulography) and of acoustics show that loud speech is an intricate phenomenon. Besides involving higher intensity and subglottal pressure, it affects jaw movements as well as fundamental frequency and especially the first formant. It is argued that all these effects serve the purpose of enhancing perceptual salience.
The vowel quality in some diphthongs of Swabian (an Upper German dialect) was determined by measuring first and second formant values. A minimal contrast could be shown between two different diphthong qualities […], where for Standard German only one, viz. /ai/, is assumed. The two diphthong qualities differ only slightly in onset and offset vowel quality, so a better understanding of their relationship was expected from an examination of their dynamic aspects. Our preliminary results suggest that there is indeed a difference in the temporal structure of the two diphthongs.
The aim of this paper is to highlight the actual need for corpora that have been annotated on the basis of acoustic information. The acoustic information should be coded as features or properties and is needed to inform further processing systems, i.e. to provide a basis for a speech recognition system that uses linguistic information. Feature annotation of existing corpora, in combination with segmental annotation, can provide powerful training material for speech recognition systems, but will also challenge the further processing of features into segments and syllables. We present here the theoretical preliminaries for the multilingual feature extraction system that we are currently working on.
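The combination of segmental and feature annotation described above can be sketched as a tiny data structure. This is purely illustrative, not the authors' annotation scheme; the field names and feature labels are assumptions for the example:

```python
from dataclasses import dataclass

# Hypothetical sketch: a corpus token carries both its segmental label
# and a parallel tier of acoustic feature annotations.
@dataclass
class Segment:
    label: str     # segmental annotation, e.g. "a:"
    start: float   # start time in seconds
    end: float     # end time in seconds
    features: dict # acoustic feature annotation

segments = [
    Segment("t",  0.10, 0.16, {"voiced": False, "manner": "plosive"}),
    Segment("a:", 0.16, 0.34, {"voiced": True,  "height": "open"}),
]

# A recognizer could be trained against the feature tier in parallel
# with the segment labels, e.g. selecting all voiced segments:
voiced = [s.label for s in segments if s.features["voiced"]]
print(voiced)
```

The point of keeping both tiers time-aligned is that mappings from features to segments and syllables, the "challenge" mentioned above, can then be learned or evaluated directly on the same material.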
As can be shown for English data, the assimilation of the alveolar stop can result from increased gestural overlap with the following oral closure gesture. Our experiment with German synthetic speech showed similar results. Furthermore, it suggests that it is necessary to complete the gestural specification of the glottal state: a voiced stop should be represented not only by an oral gesture, but by a glottal one as well.
This work exploited coarticulation and loud speech as natural sources of perturbation in order to determine whether articulatory covariation (motor-equivalent behavior) can be observed in speech that is not artificially perturbed. Articulatory analyses of jaw and tongue movement in the production of alveolar consonants by German speakers were performed. The sibilant /s/ shows virtually no articulatory covariation under the influence of natural perturbations, whereas other alveolar consonants show more obvious compensatory behavior. Our conclusion is that an effect of natural sources of perturbation is noticeable, but sounds are affected to different degrees.