Transcription tools are specialized software applications for transcribing and annotating audio or video recordings of spoken language. This chapter first explains the added value of such tools over plain word-processing software and then gives an overview of basic principles and some widely used tools of this kind. Finally, using the editors FOLKER and OrthoNormal as examples, it illustrates the practical use of two such tools in the workflows of a corpus project.
Preface
(2019)
The latest generation of speech technology has led to a sharp increase in audio-visual data enhanced with orthographic transcripts, for example through automatic subtitling on online platforms. Research data centers and archives hold a range of new and historical data that are currently only partially transcribed and therefore only partially accessible for systematic querying. Automatic Speech Recognition (ASR) is one option for making these data accessible. This paper tests the usability of a state-of-the-art ASR system on a historical (from the 1960s) but regionally balanced corpus of spoken German, and on a relatively new corpus (from 2012) recorded in a narrow area. We observed a regional bias in the ASR system, with higher recognition scores for the north of Germany and lower scores for the south. A detailed analysis of the narrow-region data revealed, despite relatively high ASR confidence, specific word errors due to a lack of regional adaptation. These findings need to be considered in decisions on further data processing and the curation of corpora, e.g. correcting transcripts or transcribing from scratch. Such geography-dependent analyses also have the potential to support ASR development by enabling targeted data selection for training and adaptation and by increasing sensitivity towards varieties of pluricentric languages.
Researchers interested in the sounds of speech or the physical gestures of speakers make use of audio and video recordings in their work. Annotating these recordings presents a different set of requirements from the annotation of text. Special-purpose tools have been developed to display video and audio signals and to allow the creation of time-aligned annotations. This chapter reviews the most widely used of these tools for both manual and automatic generation of annotations on multimodal data.
This paper presents the Kicktionary, a multilingual (English - German - French) electronic lexical resource of the language of football. In the Kicktionary, methods from corpus linguistics and two approaches to lexical semantics - the theory of frame semantics and the concept of semantic relations - are combined to construct a lexical resource in which the user can explore relationships between lexical units in various ways. This paper explains the theoretical background of the Kicktionary, sketches the data and methods which were used in its construction, and describes how the resulting resource is presented to users via a set of hyperlinked webpages.
This article discusses questions concerning the creation, annotation and sharing of spoken language corpora. We use the Hamburg Map Task Corpus (HAMATAC), a small corpus in which advanced learners of German were recorded solving a map task, as an example to illustrate our main points. We first give an overview of the corpus creation and annotation process including recording, metadata documentation, transcription and semi-automatic annotation of the data. We then discuss the manual annotation of disfluencies as an example case in which many of the typical and challenging problems for data reuse – in particular the reliability of interpretative annotations – are revealed.