Analyses of jaw movement (obtained by Electromagnetic Articulography) and acoustics show that loud speech is an intricate phenomenon. Besides involving higher intensity and subglottal pressure, it affects jaw movements as well as fundamental frequency and, especially, the first formant. It is argued that all these effects serve the purpose of enhancing perceptual salience.
This paper presents three quantitative studies that investigate whether, in addition to the robust length difference, there are also quality differences between the German <a> sounds (e.g. <Saat> versus <satt>). On the basis of selected corpora and instrumental phonetic measurements, this relationship can be confirmed. In addition, significant differences emerge in the dynamic trajectories of the two vowels.
The aim of this paper is to highlight the actual need for corpora that have been annotated on the basis of acoustic information. This acoustic information should be coded as features or properties and is needed to inform further processing systems, i.e. to provide a basis for a speech recognition system that uses linguistic information. Feature annotation of existing corpora, in combination with segmental annotation, can provide powerful training material for speech recognition systems, but it will also pose challenges for the further processing of features into segments and syllables. We present here the theoretical preliminaries of the multilingual feature extraction system on which we are currently working.
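The combination of segmental and feature annotation described above can be sketched as a simple data structure. The field names and feature labels below are hypothetical illustrations, since the abstract does not define a concrete schema:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One labelled stretch of the speech signal, carrying both a
    segmental label and acoustically derived feature annotations.
    The schema is illustrative, not the authors' actual format."""
    label: str                 # segmental label, e.g. a phone symbol
    start: float               # start time in seconds
    end: float                 # end time in seconds
    features: dict = field(default_factory=dict)  # acoustic features

# A vowel segment annotated with (hypothetical) phonetic features:
seg = Segment("a:", start=0.12, end=0.31,
              features={"voiced": True, "vocalic": True, "long": True})
```

Pairing the time-aligned segmental label with a feature dictionary is what would let a recogniser be trained on features and segments jointly, as the abstract proposes.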
The Partitur Format at BAS
(1997)
Most spoken language resources are produced and disseminated together with symbolic information relating to the speech signal, for instance orthographic transcripts as well as labelling and segmentation on the phonological, phonetic, prosodic and phrasal levels. Most of the known formats for these symbolic data are defined in a ‘closed form’ that is not flexible enough to allow simple, platform-independent processing and easy extensions.
At the Bavarian Archive for Speech Signals (BAS) a new format has been developed and used over the last few years that shows some significant advantages over other existing formats. This paper describes the basic principles behind this format, briefly discusses its advantages, and gives detailed definitions of the description levels used so far.
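A format of this kind, built from independent description levels rather than a single closed structure, lends itself to very simple processing. The sketch below assumes a plain-text layout in which each line starts with a tier key followed by a colon (the actual Partitur definition specifies the exact tiers and their record fields, which are not reproduced here):

```python
from collections import defaultdict

def parse_tiers(text):
    """Group the lines of a tier-oriented annotation file by their
    tier key. Assumed line layout: 'KEY: payload' (illustrative only;
    see the BAS definitions for the real description levels)."""
    tiers = defaultdict(list)
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue  # skip blank or malformed lines
        key, _, payload = line.partition(":")
        tiers[key.strip()].append(payload.strip())
    return dict(tiers)

example = """\
LHD: Partitur 1.3
ORT: 0 hello
ORT: 1 world
KAN: 0 h@loU
"""
print(parse_tiers(example)["ORT"])  # ['0 hello', '1 world']
```

Because each tier is self-contained, a tool can read only the levels it understands and ignore the rest, which is the kind of extensibility the closed formats criticised above lack.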
As can be shown for English data, the assimilation of the alveolar stop can result from increased gestural overlap with the following oral closure gesture. Our experiment with German synthetic speech showed similar results. Furthermore, it suggests that it is necessary to complete the gestural specification of the glottal state: a voiced stop should be represented not only by an oral gesture but by a glottal one as well.