This paper presents the application of the <tiger2/> format to various linguistic scenarios with the aim of making it the standard serialisation for the ISO 24615 [1] (SynAF) standard. After outlining the main characteristics of both the SynAF metamodel and the <tiger2/> format, as extended from the initial Tiger XML format [2], we show through a range of different language families how <tiger2/> covers a variety of constituency and dependency based analyses.
A key difference between traditional humanities research and the emerging field of digital humanities is that the latter aims to complement qualitative methods with quantitative data. In linguistics, this means the use of large corpora of text, which are usually annotated automatically using natural language processing tools. However, these tools do not exist for historical texts, so scholars have to work with unannotated data. We have developed a system for systematic iterative exploration and annotation of historical text corpora, which relies on an XML database (BaseX) and in particular on the Full Text and Update facilities of XQuery.
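The system described runs on BaseX and the XQuery Full Text and Update facilities; as a language-neutral illustration of the same explore-then-annotate cycle, here is a minimal Python sketch using the standard xml.etree module (the corpus, element names, and the normalisation rule are all invented for illustration, not the project's actual data or schema):

```python
import xml.etree.ElementTree as ET

# Toy unannotated "historical" corpus; markup and spellings are invented.
corpus = ET.fromstring(
    "<corpus><w>vnnd</w><w>das</w><w>vnnd</w><w>buoch</w></corpus>"
)

def explore(root, predicate):
    """Query step: return all word elements whose text matches a predicate."""
    return [w for w in root.iter("w") if predicate(w.text)]

def annotate(words, key, value):
    """Update step: attach an annotation attribute to the matched elements."""
    for w in words:
        w.set(key, value)

# One iteration of the cycle: find spelling variants of modern 'und',
# record a normalisation, then inspect the result.
annotate(explore(corpus, lambda t: t in {"vnnd", "vnd"}), "norm", "und")
print([w.get("norm") for w in corpus.iter("w")])  # ['und', None, 'und', None]
```

Each iteration narrows the queries and enriches the annotation, which is the point of the iterative workflow: the scholar refines hypotheses against the growing annotation layer rather than annotating everything up front.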
We present a gold standard for semantic relation extraction in the food domain for German. The relation types that we address are motivated by scenarios for which IT applications present a commercial potential, such as virtual customer advice in which a virtual agent assists a customer in a supermarket in finding those products that best satisfy their needs. Moreover, we focus on those relation types that can be extracted from natural language text corpora, ideally from easily retrievable internet content such as web forums. A typical relation type that meets these requirements is the pairing of food items that are usually consumed together. Such a relation type could be used by a virtual agent to suggest additional products available in a shop that would potentially complement the items a customer already has in their shopping cart. Our gold standard comprises structural data, i.e. relation tables, which encode relation instances. These tables are vital for evaluating natural language processing systems that extract those relations.
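Relation tables of this kind lend themselves to a straightforward set-based evaluation. A minimal sketch, with an invented "consumed together" gold table and invented system output (the actual gold standard and metrics used in the paper may differ):

```python
# Gold-standard relation table for a hypothetical "consumed together"
# relation, and a set of pairs extracted by some system (both invented).
gold = {("spaghetti", "tomato sauce"), ("bread", "butter"), ("tea", "milk")}
extracted = {("spaghetti", "tomato sauce"), ("tea", "milk"), ("tea", "salt")}

tp = len(gold & extracted)            # correctly extracted instances
precision = tp / len(extracted)       # how much of the output is right
recall = tp / len(gold)               # how much of the table was found
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.67 0.67 0.67
```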
Creating and maintaining metadata for various kinds of resources requires appropriate tools to assist the user. The paper presents the metadata editor ProFormA for the creation and editing of CMDI (Component Metadata Infrastructure) metadata in web forms. This editor supports a number of CMDI profiles currently being provided for different types of resources. Since the editor is based on XForms and server-side processing, users can create and modify CMDI files in their standard browser without the need for further processing. Large parts of ProFormA are implemented as web services in order to reuse them in other contexts and programs.
This paper presents the system architecture as well as the underlying workflow of the Extensible Repository System of Digital Objects (ERDO), which has been developed for the sustainable archiving of language resources within the Tübingen CLARIN-D project. In contrast to other approaches focusing on archiving experts, the described workflow can be used by researchers without prior knowledge of long-term storage to transfer data from their local file systems into a persistent repository.
This paper describes work in progress on I5, a TEI-based document grammar for the corpus holdings of the Institut für Deutsche Sprache (IDS) in Mannheim and the text model used by IDS in its work. The paper begins with background information on the nature and purposes of the corpora collected at IDS and the motivation for the I5 project (section 1). It continues with a description of the origin and history of the IDS text model (section 2), and a description (section 3) of the techniques used to automate, as far as possible, the preparation of the ODD file documenting the IDS text model. It ends with some concluding remarks (section 4). A survey of the additional features of the IDS-XCES realization of the IDS text model is given in an appendix.
The paper presents an XML schema for the representation of genres of computer-mediated communication (CMC) that is compliant with the encoding framework defined by the TEI. It was designed for the annotation of CMC documents in the project Deutsches Referenzkorpus zur internetbasierten Kommunikation (DeRiK), which aims at building a corpus on language use in the most popular CMC genres on the German-speaking Internet. The focus of the schema is on those CMC genres which are written and dialogic, such as forums, bulletin boards, chats, instant messaging, wiki and weblog discussions, microblogging on Twitter, and conversation on “social network” sites.
The schema provides a representation format for the main structural features of CMC discourse as well as elements for the annotation of those units regarded as “typical” for language use on the Internet. The schema introduces an element <posting>, which describes stretches of text that are sent to the server by a user at a certain point in time. Postings are the main constituting elements of threads and logfiles, which, in our schema, are the two main types of CMC macrostructures. For the microlevel of CMC documents (that is, the structure of the <posting> content), the schema introduces elements for selected features of Internet jargon such as emoticons, interaction words and addressing terms. It allows for easy anonymization of CMC data for purposes in which the annotated data are made publicly available and includes metadata which are necessary for referencing random excerpts from the data as references in dictionary entries or as results of corpus queries.
Documentation of the schema as well as encoding examples can be retrieved from the web at http://www.empirikom.net/bin/view/Themen/CmcTEI. The schema is meant to be a core model for representing CMC that can be modified and extended by others according to their own specific perspectives on CMC data. It could be a first step towards an integration of features for the representation of CMC genres into a future new version of the TEI Guidelines.
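The thread/posting macrostructure described above can be illustrated with a small Python sketch. The <posting> element name comes from the schema description; the who/when attribute names and the content are illustrative assumptions, not the actual DeRiK encoding:

```python
import xml.etree.ElementTree as ET

# Build a minimal thread of <posting> elements; a logfile would be the
# other macrostructure type, with postings ordered purely by timestamp.
thread = ET.Element("thread")
for who, when, text in [
    ("A", "2012-05-01T10:00:00", "Hi all :-)"),
    ("B", "2012-05-01T10:02:13", "@A welcome!"),
]:
    posting = ET.SubElement(thread, "posting", who=who, when=when)
    posting.text = text

print(len(thread.findall("posting")))  # 2
print(thread[1].get("who"))            # B
```

On the microlevel, the schema would additionally mark up units such as the emoticon and the addressing term inside the posting text; anonymization then amounts to rewriting the user-identifying attributes and addressing elements.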
This paper presents Release 2.0 of the SALSA corpus, a German resource for lexical semantics. The new corpus release provides new annotations for German nouns, complementing the existing annotations of German verbs in Release 1.0. The corpus now includes around 24,000 sentences with more than 36,000 annotated instances. It was designed with an eye towards NLP applications such as semantic role labeling but will also be a useful resource for linguistic studies in lexical semantics.
Language attitudes may be differentiated into attitudes towards speakers and attitudes towards languages. However, to date, no systematic and differentiated instrument exists that measures attitudes towards language. Accordingly, we developed, validated, and applied the Attitudes Towards Languages (AToL) scale in four studies. In Study 1, we selected 15 items for the AToL scale, which represented the three dimensions of value, sound, and structure. The following studies replicated and validated the three-factor structure and differential mean profiles along the three dimensions for different languages (a) in a more diverse German sample (Study 2), (b) in different countries (Study 3), and (c) when participants based their evaluations on speech samples (Study 4). Moreover, we investigated the relation between the AToL dimensions and stereotypic speaker evaluations. Results confirm the reliability, validity, and generalizability of the AToL scale and its incremental value to mere speaker evaluations.
This chapter explores the Linguistic Landscape of six medium-size towns in the Baltic States with regard to languages of tourism and to the role of English and Russian as linguae francae. A quantitative analysis of signs and of tourism web sites shows that, next to the state languages, English is the most dominant language. Yet, interviews reveal that underneath the surface, Russian still stands strong. Therefore, possible claims that English might take over the role of the main lingua franca in the Baltic States cannot be maintained. English has a strong position for attracting international tourists, but only alongside Russian which remains important both as a language of international communication and for local needs.
The present contribution addresses an infrastructural issue of universal relevance, addressed in the specific context of the TEI. We describe a combination of open-source tools and an open-access approach to creating knowledge repositories that have been employed in building a bibliographic reference library for the “TEI for Linguists” special interest group (LingSIG). The authors argue that, for an initiative such as the TEI, it is important to choose open, freely available solutions. If these solutions have the advantage of attracting new users and promoting the initiative itself, so much the better, especially if it is done in a non-committal way: no one using the LingSIG bibliographic repository has to be a member of the LingSIG or a “TEI-er” in general.
The paper’s purpose is to give an overview of the work on the Component Metadata Infrastructure (CMDI) that was implemented in the CLARIN research infrastructure. It explains the underlying schema as well as the accompanying tools and services. It also describes the status and impact of the CMDI developments within the CLARIN project and past and future collaborations with other projects.
In developing an interdisciplinary approach integrating Conversation Analysis (“CA”), audiology and User Centered Design, the applied goal of this international collaboration is to analyze real-world social interaction from the perspective of the participants in order to build an empirical basis for innovation in the field of communication with hearing impairment and hearing aid use. In reviewing theory, methodology and analysis of eight cases analyzed in this volume, the editors assess the potential of application for the various stakeholders in communication with hearing loss and hearing aids, including the estimated impact factor. The chapter closes with a consideration of desiderata for future research.
Our paper outlines a proposal for the consistent modeling of heterogeneous lexical structures in semasiological dictionaries, based on the element structures described in detail in chapter 9 (Dictionaries) of the TEI Guidelines. The core of our proposal describes a system of relatively autonomous lexical “crystals” that can, within the constraints of the relevant element’s definition, be combined to form complex structures for the description of morphological form, grammatical information, etymology, word-formation, and meaning for a lexical structure.
The encoding structures we suggest guarantee sustainability and support re-usability and interoperability of data. This paper presents case studies of encoding dictionary entries in order to illustrate our concepts and test their usability.
We comment on encoding issues involving <entry>, <form>, <etym>, and on refinements to the internal content of <sense>.
Although most of the relevant dictionary productions of the recent past have relied on digital data and methods, there is little consensus on formats and standards. The Institute for Corpus Linguistics and Text Technology (ICLTT) of the Austrian Academy of Sciences has been conducting a number of varied lexicographic projects, both digitising print dictionaries and working on the creation of genuinely digital lexicographic data. This data was designed to serve varying purposes: machine-readability was only one. A second goal was interoperability with digital NLP tools. To achieve this end, a uniform encoding system applicable across all the projects was developed. The paper describes the constraints imposed on the content models of the various elements of the TEI dictionary module and provides arguments in favour of TEI P5 as an encoding system not only being used to represent digitised print dictionaries but also for NLP purposes.
In this paper, we examine methods to automatically extract domain-specific knowledge from the food domain from unlabeled natural language text. We employ different extraction methods ranging from surface patterns to co-occurrence measures applied on different parts of a document. We show that the effectiveness of a particular method depends very much on the relation type considered and that there is no single method that works equally well for every relation type. We also examine a combination of extraction methods and also consider relationships between different relation types. The extraction methods are applied both on a domain-specific corpus and the domain-independent factual knowledge base Wikipedia. Moreover, we examine an open-domain lexical ontology for suitability.
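One co-occurrence measure commonly used for such extraction tasks is pointwise mutual information (PMI); the abstract does not specify which measures the paper employs, so the following is a generic sketch over an invented toy collection:

```python
import math

# Toy document collection (invented); each "document" is a set of food terms.
docs = [
    {"spaghetti", "tomato", "basil"},
    {"spaghetti", "tomato"},
    {"bread", "butter"},
    {"spaghetti", "butter"},
]

def pmi(a, b, docs):
    """Pointwise mutual information of two terms over document co-occurrence."""
    n = len(docs)
    p_a = sum(a in d for d in docs) / n
    p_b = sum(b in d for d in docs) / n
    p_ab = sum(a in d and b in d for d in docs) / n
    return math.log2(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

print(round(pmi("spaghetti", "tomato", docs), 2))  # 0.42
print(pmi("tomato", "butter", docs))               # -inf: never co-occur
```

As the abstract notes, which part of a document counts as the co-occurrence window (sentence, paragraph, whole page) strongly affects which relation types such a measure captures.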
This document presents ongoing work related to spoken language data within a project that aims to establish a common and unified infrastructure for the sustainable provision of linguistic primary research data at the Institut für Deutsche Sprache (IDS). In furtherance of its mission to “document the German language as it is currently used”, the project expects to enable the research community to access a broad empirical base of working material via a single platform. While the goal is to eventually cover all linguistically relevant digital resources of the IDS, including lexicographic information systems such as the IDS German Vocabulary Portal, OWID, written language corpora such as the IDS German Reference Corpus, DeReKo, and spoken language corpora such as the IDS German Speech Corpus for Research and Teaching, FOLK, the work presented here predominantly focuses on the latter type of data, i.e. speech corpora. Within this context, the present document describes the project’s contributions to the development of standards and best practice guidelines concerning data storage, process documentation and legal issues for the sustainable preservation and long-term accessibility of primary linguistic research data.
Electronic dictionaries should support dictionary users by giving them guidance in text production and text reception, alongside a user-definable offer of lexicographic data for cognitive purposes. In this article, we sketch the principles of an interactive and dynamic electronic dictionary aimed at text production and text reception guiding users in innovative ways, especially with respect to difficult, complicated or confusing issues. The lexicographer has to do a very careful analysis of the nature of the possible problems to suggest an optimal solution for a specific problem. We are of the opinion that there are numerous complex situations where users need more detailed support than currently available in e-dictionaries, enabling them to make valid and correct choices. For highly complex situations, we suggest guidance through a decision tree-like device. We assume that the solutions proposed here are not specific to one language only but can, after careful analysis, be applied to e-dictionaries in different languages across the world.
National and European language-policy studies often lack a satisfactory empirical basis. The available data on the current situation of the languages in the various countries are heterogeneous, incomplete and partly outdated, and therefore difficult to compare over time. EFNIL’s European Language Monitor, ELM, is an attempt to remedy this situation. ELM is based on a comprehensive questionnaire covering a broad range of linguistic matters, suited to forming a picture of the status of the languages and of language-policy practices in each country, e.g. the legal status of the languages, their status in education and research, the situation of minority languages, and the languages in culture and business. ELM is conducted at intervals of a few years. The present article describes the background and the results of ELM 2 (2007–2011), which covers 23 European countries.
Linguistic query systems are special purpose IR applications. As text sizes, annotation layers, and metadata schemes of language corpora grow rapidly, performing complex searches becomes a computationally expensive task. We evaluate several storage models and indexing variants in two multi-processor/multi-core environments, focusing on prototypical linguistic querying scenarios. Our aim is to reveal modeling and querying tendencies, rather than absolute benchmark results, when using a relational database management system (RDBMS) and MapReduce for natural language corpus retrieval. Based on these findings, we are going to improve our approach for the efficient exploitation of very large corpora, combining advantages of state-of-the-art database systems with decomposition/parallelization strategies. Our reference implementation uses the German DeReKo reference corpus with currently more than 4 billion word forms, various multi-layer linguistic annotations, and several types of text-specific metadata. The proposed strategy is language-independent and adaptable to large-scale multilingual corpora.
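The decomposition/parallelization strategy mentioned above can be caricatured as a map/reduce over corpus partitions; this toy Python sketch counts query hits per shard and merges the partial results (in a real system the map phase would run distributed, and the shards would not fit in memory):

```python
from collections import Counter
from functools import reduce

# A corpus split into partitions, as a very large corpus would be sharded.
partitions = [
    ["der", "hund", "der"],
    ["die", "katze", "der"],
    ["hund", "und", "katze"],
]

def map_phase(partition):
    """Count word-form hits locally within one partition."""
    return Counter(partition)

def reduce_phase(c1, c2):
    """Merge the partial counts of two partitions."""
    return c1 + c2

total = reduce(reduce_phase, map(map_phase, partitions))
print(total["der"])    # 3
print(total["katze"])  # 2
```

The same split applies to annotation-layer joins: each partition answers the query locally, and only small per-partition result sets cross the network for merging.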
We report an ethnographic and field-experiment-based study of time intervals in Amondawa, a Tupi language and culture of Amazonia. We analyse two Amondawa time interval systems based on natural environmental events (seasons and days), as well as the Amondawa system for categorising lifespan time (“age”). Amondawa time intervals are exclusively event-based, as opposed to time-based (i.e. they are based on event-duration, rather than measured abstract time units). Amondawa has no lexicalised abstract concept of time and no practices of time reckoning, as conventionally understood in the anthropological literature. Our findings indicate that not only are time interval systems and categories linguistically and culturally specific, but that they do not depend upon a universal “concept of time”. We conclude that the abstract conceptual domain of time is not a human cognitive universal, but a cultural historical construction, semiotically mediated by symbolic and cultural-cognitive artefacts for time reckoning.
This paper presents two toolsets for transcribing and annotating spoken language: the EXMARaLDA system, developed at the University of Hamburg, and the FOLK tools, developed at the Institute for the German Language in Mannheim. Both systems are targeted at users interested in the analysis of spontaneous, multi-party discourse. Their main user community is situated in conversation analysis, pragmatics, sociolinguistics and related fields. The paper gives an overview of the individual tools of the two systems – the Partitur-Editor, a tool for multi-level annotation of audio or video recordings, the Corpus Manager, a tool for creating and administering corpus metadata, EXAKT, a query and analysis tool for spoken language corpora, FOLKER, a transcription editor optimized for speed and efficiency of transcription, and OrthoNormal, a tool for orthographical normalization of transcription data. It concludes with some thoughts about the integration of these tools into the larger tool landscape.
This paper presents an extension to the Stuttgart-Tübingen Tagset (STTS), the standard part-of-speech tag set for German, for the annotation of spoken language. The additional tags deal with hesitations, backchannel signals, interruptions, onomatopoeia and uninterpretable material. They allow one to capture phenomena specific to spoken language while, at the same time, preserving interoperability with already existing corpora of written language.
In this paper, we compare three different generalization methods for in-domain and cross-domain opinion holder extraction, namely simple unsupervised word clustering, an induction method inspired by distant supervision, and the usage of lexical resources. The generalization methods are incorporated into diverse classifiers. We show that generalization causes significant improvements and that the impact of improvement depends on the type of classifier and on how much training and test data differ from each other. We also address the less common case of opinion holders realized in patient position and suggest approaches, including a novel (linguistically informed) extraction method, for detecting those opinion holders without labeled training data, as standard datasets contain too few instances of this type.
Research today is often performed in collaborated projects composed of project partners with different backgrounds and from different institutions and countries. Standards can be a crucial tool to help harmonizing these differences and to create sustainable resources. However, choosing a standard depends on having enough information to evaluate and compare different annotation and metadata formats. In this paper we present ongoing work on an interactive, collaborative website that collects information on standards in the field of linguistics as a means to guide interested researchers.
Globally, hearing loss is the second most frequent disability. About 80% of the persons affected by hearing loss do not use hearing aids. The goal of this edited volume is to present a theoretically founded, interdisciplinary approach geared at understanding and improving social interaction impacted by hearing loss and (non-)use of hearing technologies. The researchers report on pilot studies from Australia, Denmark, Finland, Germany, Switzerland and the USA. Using Conversation Analysis, the studies identify problems and serve as points of departure for possible solutions. Researchers and practitioners from the different disciplines (medicine, audiology, hearing rehabilitation, User Centered Design, Conversation Analysis, change business) as well as users of hearing technologies comment on this approach.
Conversation Analysis (CA) and Discursive Psychology (DP) reject the view that assumptions about cognitive processes should be used to account for discursive phenomena. Instead, cognitive issues are respecified as discursive phenomena. Discursive psychologists do this by studying discursive practices of talking about mental phenomena and using mental predicates. This approach is exemplified by a study of the use of constructions with German verstehen (‘to understand’) in conversation. Some conversation analysts take another approach, namely, inquiring into how participants display mental states in talk-in-interaction. This is exemplified by a study of how grammatical constructions are used to display different types of inferences drawn from a partner’s prior turn. It will be argued that the constructivist, anti-essentialist stance which CA and DP take with regard to cognition is a promising line of research, which has much in its favor from a methodological point of view. However, it can be shown that tacit assumptions about cognitive processes are still inevitable when doing CA and DP. In conclusion, the paper pleads for an enhanced awareness of how cognitive processes come into play when analysing talk-in-interaction and advocates the integration of a more explicit cognitive perspective into research on talk-in-interaction.
This paper describes the ongoing work to integrate WebLicht into the CLARIN infrastructure. It introduces the CLARIN infrastructure for scholars in the humanities and social sciences as well as WebLicht - an orchestration and execution environment that is built upon Service Oriented Architecture principles. The integration of WebLicht into the CLARIN infrastructure involves adapting it to the standards and practices used within CLARIN, including distributed repositories, CMDI metadata, and persistent identifiers.
Introduction
(2012)
Hearing loss is a prevalent communication disability, yet to date there is almost no research on naturally occurring interaction which examines how participants handle hearing loss and the use of hearing aids in communication. In contrast, research focussing on the medical and technological dimensions has advanced tremendously. Still, the social reaction to hearing loss is frequently stress, withdrawal and isolation. Despite the enormous technological development, most people who could benefit from a hearing aid do not use it. The goal of this edited volume is to present a theoretically founded, interdisciplinary research approach geared at understanding and improving social interaction impacted by hearing loss and (non-)use of hearing technologies. Towards this end, we are integrating Conversation Analysis, audiology and User Centered Design.
In this brief presentation of Conversation Analysis (“CA”), we take up some of the communication problems associated with hearing loss and link them to conversation analytic concepts. We explain how attempts to control the conversation, embarrassment and miscommunication can be analyzed as interactional achievements in the areas of turn-taking, repair and nonverbal actions. The chapter also explains which kinds of data are used in CA, how the participants’ perspective is analyzed and some of the theoretical assumptions underlying the analysis. Examples of transcribed interactional sequences with hearing loss illustrate how turn-taking, eye gaze and trouble in hearing/understanding (“repair”) are sensitive to this communication disorder.
Corpora with high-quality linguistic annotations are an essential component in many NLP applications and a valuable resource for linguistic research. For obtaining these annotations, a large amount of manual effort is needed, making the creation of these resources time-consuming and costly. One attempt to speed up the annotation process is to use supervised machine-learning systems to automatically assign (possibly erroneous) labels to the data and ask human annotators to correct them where necessary. However, it is not clear to what extent these automatic pre-annotations are successful in reducing human annotation effort, and what impact they have on the quality of the resulting resource. In this article, we present the results of an experiment in which we assess the usefulness of partial semi-automatic annotation for frame labeling. We investigate the impact of automatic pre-annotation of differing quality on annotation time, consistency and accuracy. While we found no conclusive evidence that it can speed up human annotation, we found that automatic pre-annotation does increase its overall quality.
Knowledge Acquisition with Natural Language Processing in the Food Domain: Potential and Challenges
(2012)
In this paper, we present an outlook on the effectiveness of natural language processing (NLP) in extracting knowledge for the food domain. We identify potential scenarios that we think are particularly suitable for NLP techniques. As a source for extracting knowledge we will highlight the benefits of textual content from social media. Typical methods that we think would be suitable will be discussed. We will also address potential problems and limits that the application of NLP methods may yield.
This article discusses the situation of the Latgalian language in Latvia today. It first provides an overview of languages in Latvia, followed by a historical and contemporary sketch of the societal position of Latgalian and by an account of current Latgalian language activism. On this basis, the article then applies schemes of language functions and of evaluations of the societal position of minority languages to Latgalian. Given the range of functions that Latgalian fulfils today and the wishes and attempts by activists to expand these functions, the article argues that it is surprising that so little attention is given to Latgalian in mainstream Latvian and international sociolinguistic publications. In this light, the fate of the language is difficult to predict, but a lot depends on whether the Latvian state will clarify its own unclear perception of policies towards Latgalian and on how much attention it will receive in the future.
Understanding the design of talk-in-interaction is important in many domains, including speech technology. Although phonetic, linguistic and gestural correlates have been identified for some of the social actions that conversational participants accomplish, it is only recently that researchers have begun to take account of the immediately prior interactional context as an important factor influencing the design of a speaker’s turn. The present study explores the influence of context by focussing on characteristics of short turns produced by one speaker between turns from another speaker. The hypothesis is that the speaker designs her inserted turn as a match to the prior turn when wishing to align with the previous speaker’s agenda. By contrast, non-matching would display that the speaker is non-aligning, preferring instead to initiate a new action for example. Data are taken from the AMI corpus, focussing on the spontaneous talk of first-language English participants. Using sequential analysis, such short turns are classified as either aligning or non-aligning in accordance with definitions in the Conversation Analysis literature. The degree of prosodic similarity between the inserted turn and the prior speaker’s turn is measured using novel acoustic techniques. The results show that aligning turns are significantly more similar to the immediately preceding turn, in terms of pitch contour, than non-aligning turns. In contrast to the prosodic-acoustic analysis, the results of the gestural analysis indicate that aligning and non-aligning are differentiated by the use of distinct gestures, rather than by the matching (or non-matching) of gestures across the adjacent turns. These results support the view that choice of pitch contour is managed locally, rather than by reference to an intonational lexicon. However, this is not the case for speakers’ use of gesture. 
The implications of these findings for a model of talk-in-interaction are considered, along with potential applications.
Providing an innovative approach to the written displays of minority languages in public space this volume explores minority language situations through the lens of linguistic landscape research. Based on very tangible data it explores the 'same old issues' of language contact and language conflict in new ways.
When we first started the project of looking at minority languages through a linguistic landscape lens, we felt that the visibility of minority languages in public space had been insufficiently dealt with in traditional minority language research. A linguistic landscape approach, as it had developed over the last years, would constitute a valuable path to explore, by looking at the ‘same old issues’ of language contact and language conflict from a specific angle. We were convinced that fresh linguistic landscape data would be able to provide innovative and useful insights into ‘patterns of language […] use, official language policies, prevalent language attitudes, [and] power relations between different linguistic groups’ (Backhaus 2007, p. 11). The linguistic landscape approach, as presented by the different authors in this volume, has clearly proven to be a heuristic appropriate and relevant for a wide range of minority language situations. More specifically, the ideas and analyses in the different chapters do contribute to a further understanding of minority languages and their speakers. They deepen our comprehension of language policies, power relations and ideologies in minority language settings.
In this paper, we describe MLSA, a publicly available multi-layered reference corpus for German-language sentiment analysis. The construction of the corpus is based on the manual annotation of 270 German-language sentences considering three different layers of granularity. The sentence-layer annotation, as the most coarse-grained annotation, focuses on aspects of objectivity, subjectivity and the overall polarity of the respective sentences. Layer 2 is concerned with polarity on the word- and phrase-level, annotating both subjective and factual language. The annotations on Layer 3 focus on the expression-level, denoting frames of private states such as objective and direct speech events. These three layers and their respective annotations are intended to be fully independent of each other. At the same time, exploring for and discovering interactions that may exist between different layers should also be possible. The reliability of the respective annotations was assessed using the average pairwise agreement and Fleiss’ multi-rater measures. We believe that MLSA is a beneficial resource for sentiment analysis research, algorithms and applications that focus on the German language.
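The abstract above reports inter-annotator agreement via average pairwise agreement and Fleiss' multi-rater measure. As a generic illustration (a sketch of the standard Fleiss' kappa computation, not the authors' actual evaluation code), agreement over a table of per-item category counts can be computed as:

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for multi-rater agreement.
    ratings: (n_items, n_categories) array where each cell counts
    how many raters assigned that category to that item."""
    r = np.asarray(ratings, dtype=float)
    n_items, _ = r.shape
    n_raters = r[0].sum()
    # per-item observed agreement
    p_i = (np.sum(r ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # chance agreement from the marginal category proportions
    p_j = r.sum(axis=0) / (n_items * n_raters)
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)
```

With perfect agreement on every item the function returns 1.0; values near 0 indicate chance-level agreement.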
The perception of syllable prominence depends to a limited extent on the acoustic properties of the speech signal in question. Psychoacoustic factors are involved as well. Thus, research often relies on two types of data: subjective prominence ratings collected in perception experiments and acoustic measures. A problem with the rating data is noise resulting from individual approaches to the rating task. This paper addresses the question of how this noise can be reduced by normalization, evaluating 12 normalization methods. In a perception experiment, prominence ratings concerning German read speech were collected. From the raw rating data 12 different ‘mirror’ data-sets were computed according to the 12 methods. Each mirror data-set was correlated with the same set of underlying acoustic data. The multiple regression setup included raw syllable duration as well as within-syllable maximum F0 and intensity. Adjusted r²-values could be raised considerably with selected methods.
In Spoken Egyptian, the form of a linguistic sign is restricted by rules of root structure and consonant compatibility as well as word-formation patterns. Hieroglyphic Egyptian, however, displays additional principles of sign formation. Iconicity is one of the crucial features of a part of its sign inventory. In this article, hieroglyphic iconicity will be investigated by means of a preliminary comparative typology originally developed for German Sign Language (Kutscher 2010). The authors argue that patterns found in Egyptian hieroglyphic sign formation are systematically comparable to patterns of German Sign Language (DGS). These patterns determine what types of lexical meaning can be inferred from iconic linguistic signs.
This paper deals with a case study of a first visit of a person with hearing loss to her family doctor. In the first part of the paper, basic properties of doctor-patient interaction, which are also relevant for treatment of hearing loss, are outlined: the relevance of institutional conditions for interaction, asymmetries between the participants, goal-orientation, specific conditions of trust, and the relevance of the specific genre of doctor-patient interaction. The second part of the paper presents a case study, which focuses on three interactional phenomena: a) the negotiation of the hearing loss as an existential threat to the patient and her identity; b) the discrepancy of illness theories between doctor and patient; c) the collaborative work of negotiating an intersubjectively viable description of the experience of hearing loss.
A formal narrative representation is a procedure assigning a formal description to a natural language narrative. One of the goals of the Computational Models of Narrative community is to understand this procedure better in order to automate it. A formal framework fit for automation should allow for objective and reproducible representations. In this paper, we present empirical work focussing on objectivity and reproducibility of the formal framework by Vladimir Propp (1928). The experiments consider Propp’s formalization of Russian fairy tales and formalizations done by test subjects in the same formal framework; the data show that some features of Propp’s system such as the assignment of the characters to the dramatis personae and some of the functions are not easy to reproduce.
A frequently replicated finding is that higher frequency words tend to be shorter and contain more strongly reduced vowels. However, little is known about potential differences in the articulatory gestures for high vs. low frequency words. The present study made use of electromagnetic articulography to investigate the production of two German vowels, [i] and [a], embedded in high and low frequency words. We found that word frequency affected the production of [i] and [a] differently at both the temporal and the gestural level. Higher frequency of use predicted greater acoustic durations for long vowels; reduced durations for short vowels; articulatory trajectories with greater tongue height for [i] and more pronounced downward articulatory trajectories for [a]. These results show that the phonological contrast between short and long vowels is learned better with experience, and challenge both the Smooth Signal Redundancy Hypothesis and current theories of German phonology.
The instructions under which raters quantify syllable prominence perception need to be simple in order to maintain immediate reactions. This leads to noise in the rating data that can be dealt with by normalization, e.g. setting central tendency = 0 and dispersion = 1 (as in Z-score normalization). Questions arise such as: Which parameter is adequate here to capture central tendency? Which reference distribution should the normalization be based on? In this paper 16 different normalization methods are evaluated. In a perception experiment using German read speech (prose and poetry), syllable prominence ratings were collected. From the rating data 16 complete “mirror” data-sets were computed according to the 16 methods. Each mirror data-set was correlated with the same set of measures from the underlying acoustic data, focusing on raw syllable duration which is seen as a rather straightforward acoustic aspect of syllable prominence. Correlation coefficients could be raised considerably by selected methods.
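The abstract above frames normalization as setting central tendency to 0 and dispersion to 1, and asks which parameters should capture these. A minimal per-rater sketch of this family of methods (the paper's 16 methods differ in their choice of parameters and reference distributions; the function below is an illustration, not the authors' implementation):

```python
import numpy as np

def normalize(ratings, center="mean", spread="sd"):
    """Rescale one rater's prominence ratings so that central tendency -> 0
    and dispersion -> 1, making different raters' scales comparable.
    center: "mean" or "median"; spread: "sd" or "mad" (median absolute
    deviation). The mean/sd combination is classic Z-score normalization."""
    r = np.asarray(ratings, dtype=float)
    c = np.mean(r) if center == "mean" else np.median(r)
    if spread == "sd":
        s = np.std(r)
    else:
        s = np.median(np.abs(r - np.median(r)))
    return (r - c) / s
```

Swapping the mean for the median and the standard deviation for the MAD yields a normalization that is more robust to a rater's occasional extreme ratings.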
Online dictionary use
(2012)
Opening/Eröffnung/Aperture
(2012)
In order to explore the influence of context on the phonetic design of talk-in-interaction, we investigated the pitch characteristics of short turns (insertions) that are produced by one speaker between turns from another speaker. We investigated the hypothesis that the speaker of the insertion designs her turn as a pitch match to the prior turn in order to align with the previous speaker’s agenda, whereas non-matching displays that the speaker of the insertion is non-aligning, for example to initiate a new action. Data were taken from the AMI meeting corpus, focusing on the spontaneous talk of first-language English participants. Using sequential analysis, 177 insertions were classified as either aligning or non-aligning in accordance with definitions of these terms in the Conversation Analysis literature. The degree of similarity between the pitch contour of the insertion and that of the prior speaker’s turn was measured, using a new technique that integrates normalized F0 and intensity information. The results showed that aligning insertions were significantly more similar to the immediately preceding turn, in terms of pitch contour, than were non-aligning insertions. This supports the view that choice of pitch contour is managed locally, rather than by reference to an intonational lexicon.
This thesis deals with expressions consisting of two noun phrases connected by a comitative preposition, referred to as comitative constructions (CCs). It focuses on CCs in Polish, with some comparisons to other languages, and provides an analysis at the morphosyntax-semantics-pragmatics interface in the paradigm of Head-Driven Phrase Structure Grammar with the integrated model-theoretic semantic framework of Lexicalized Flexible Ty2. After postulating three different readings of Polish CCs: accompanitive, conjunctive and (open and closed) inclusive, a number of semantic phenomena are discussed which provide evidence for this classification. Further examination of the data shows that all CC types behave uniformly with regard to their syntactic properties but exhibit differences regarding agreement and person, number and gender resolution. These differences have previously been explained by syntactic stipulations. This thesis argues that a syntactic approach to CCs lacks real empirical motivation, and it demonstrates that some of the existing analyses are problematic for a number of empirical and/or theoretical reasons. It further offers an alternative analysis based on the assumption that all CC types have a uniform, adjunction-based syntactic structure, and that the crucial differences between them are semantic in nature, being triggered by the meaning of the comitative preposition. The core of the proposed semantic analysis consists of three different logical representations of the comitative preposition, whose truth conditions allow us to make the right predictions about the different behavior of the three CC types. All other lexical components of CCs, including plural pronouns, retain their customary forms and meanings in each type of CC.
Implementing this idea in a constraint-based framework whose description language incorporates a formal semantic representation language, and modeling the morphosyntactic, semantic, pragmatic and referential properties of CCs within a single grammatical paradigm, we arrive at an analysis that accounts for these expressions in a very natural way.
Over the past decades, problems related to linguistic minorities and their well-being, as well as to minority languages and their maintenance, have developed into an independent branch of minority studies. Studies of language in society and sociolinguistics, strategies of minority language survival and the empowerment of their speakers have produced a considerable output of case studies and theoretical writings. In this multifaceted field of investigation, language use, language practices, language policies and language politics represent interrelated aspects of social and linguistic relations that cannot be meaningfully addressed from the point of view of one scientific discipline only. This is especially the case when one wants to understand processes of language loss and maintenance, or the revitalization and empowerment of a language community. Such processes are linguistic expressions of complex social settings, and reflect group and individual identities that in turn express changing systems of collective values, human networks, fashions and social practices.
The current state of the art for metadata provision allows for a very flexible approach, catering for the needs of different archives and communities, referring to common data category registries that describe the meaning of a data category at least to authors of metadata. Component models for metadata provisions are for example used by CLARIN and META-SHARE, but there is also an increased flexibility in other metadata schemas such as Dublin Core, which is usually not seen as appropriate for meaningful description of language resources.
Making resources available to others and putting them to a second use in other projects has never been more widely accepted as a sensible, efficient way to avoid wasting effort and resources. However, when it comes to the details, a vast number of problems remain. This workshop aimed to be a forum for addressing issues and challenges in the concrete work with metadata for LRs, not restricted to a single initiative for archiving LRs. It allowed for exchange and discussion, and we hope that readers find the articles compiled here interesting and useful.
In two eye-tracking experiments, we investigated the relationship between the subject preference in the resolution of subject-object ambiguities in German embedded clauses and semantic word order constraints (i.e., prominence hierarchies relating to the specificity/referentiality of noun phrases, case assignment and thematic role assignment). Our central research question concerned the time course with which prominence information is used and particularly whether it modulates the subject preference. In both experiments, we replicated previous findings of reanalysis effects for object-initial structures. Our findings further suggest that noun phrase prominence does not alter initial parsing strategies (viz., the subject preference), but rather modulates the ease of later reanalysis processes. In Experiment 1, the object case assigned by the verb did not affect the ease of reanalysis. However, the syntactic reanalysis was rendered more difficult when the order of the two arguments violated the specificity/referentiality hierarchy. Experiment 2 revealed that the initial subject preference also holds for verbs favoring an object-initial base order (i.e., dative object-experiencer verbs). However, the advantage for subject-initial sentences is neutralized in relatively late processing stages when the thematic role hierarchy and the specificity hierarchy converge to promote scrambling.
Current work on sentiment analysis is characterized by approaches with a pragmatic focus, which use shallow techniques in the interest of robustness but often rely on ad-hoc creation of data sets and methods. We argue that progress towards deep analysis depends on a) enriching shallow representations with linguistically motivated, rich information, and b) focussing different branches of research and combining resources to create synergies with related work in NLP. In the paper, we propose SentiFrameNet, an extension to FrameNet, as a novel representation for sentiment analysis that is tailored to these aims.
This special issue of the Journal on Ethnopolitics and Minority Issues in Europe (JEMIE) brings together some of the participants of the symposium Political and Economic Resources and Obstacles of Minority Language Maintenance organized by the Language Survival Network ‘POGA’ at Tallinn University, Estonia, in December 2010. More than 20 scholars representing linguistics, anthropology, social sciences and law participated in the symposium, to present papers and discuss questions related to minority language loss, maintenance and revitalization. The six case studies contained in this special issue look at different minorities and regions in the European Union, Russia and the US. The linguistic communities discussed are the Russian-, Võru/Seto- and Latgalian-speaking minorities of Estonia and Latvia; the Welsh- and Breton-speaking communities of the Celtic languages; the Russian Finno-Ugrian people with regional autonomies; and the Native American groups of the Delaware/Cherokee and the Oneida. The reader will find articles relating to interdisciplinary research approaches in and on minority languages and minority language communities.
This paper describes the status of the standardization efforts for a Component Metadata approach to describing Language Resources with metadata. Different linguistic and Language & Technology communities such as CLARIN, META-SHARE and NaLiDa use this component approach and see its standardization as a matter for cooperation with the potential to create a large interoperable domain of joint metadata. Starting with an overview of the component metadata approach together with the related semantic interoperability tools and services, such as the ISOcat data category registry and the relation registry, we explain the standardization plan and efforts for component metadata within ISO TC37/SC4. Finally, we present information about the uptake and planned use of component metadata within the three linguistic and L&T communities mentioned above.
This article discusses questions concerning the creation, annotation and sharing of spoken language corpora. We use the Hamburg Map Task Corpus (HAMATAC), a small corpus in which advanced learners of German were recorded solving a map task, as an example to illustrate our main points. We first give an overview of the corpus creation and annotation process including recording, metadata documentation, transcription and semi-automatic annotation of the data. We then discuss the manual annotation of disfluencies as an example case in which many of the typical and challenging problems for data reuse – in particular the reliability of interpretative annotations – are revealed.
In this paper, we provide an analysis of temporality in Hausa (Chadic, Afro-Asiatic). By testing the hypothesis of covert tense (Matthewson 2006) against empirical data, we show that Hausa is genuinely tenseless in the sense that the grammar does not restrict the relation between reference time and utterance time. Rather, temporal reference is pragmatically inferred from aspectual and contextual information. We also argue that future time reference in Hausa is realized as a combination of a modal operator and a prospective aspect, thus involving the modal meaning components of intention and prediction as well as event time shifting.
The Component Metadata Infrastructure (CMDI) in a project on sustainable linguistic resources
(2012)
The sustainable archiving of research data for predefined time spans has become increasingly important to researchers and is stipulated by funding organizations as an obligation that researchers must observe. An important aspect of such sustainable archiving of language resources is the creation of metadata, which can be used for describing, finding and citing resources. In the present paper, these aspects are dealt with from the perspectives of two projects: the German project for Sustainability of Linguistic Data at the University of Tübingen (NaLiDa, cf. http://www.sfs.uni-tuebingen.de/nalida) and the Dutch-Flemish HLT Agency hosted at the Institute for Dutch Lexicology (TST-Centrale, cf. http://www.inl.nl/tst-centrale). Both projects present their approaches to the creation of components and profiles using the Component Metadata Infrastructure (CMDI) as the underlying metadata schema for resource descriptions, highlighting their experiences as well as the advantages and disadvantages of using CMDI.
The ISOcat registry reloaded
(2012)
The linguistics community is building a metadata-based infrastructure for the description of its research data and tools. At its core is the ISOcat registry, a collaborative platform to hold a (to be standardized) set of data categories (i.e., field descriptors). Descriptors have definitions in natural language and few explicit interrelations. With the registry growing to many hundreds of entries, authored by many contributors, it is becoming increasingly apparent that the rather informal definitions and their glossary-like design make it hard for users to grasp, exploit and manage the registry’s content. In this paper, we take a large subset of the ISOcat term set and reconstruct from it a tree structure, following in the footsteps of schema.org. Our ontological re-engineering yields a representation that gives users a hierarchical view of linguistic, metadata-related terminology. The new representation adds to the precision of all definitions by making explicit information which is only implicitly given in the ISOcat registry. It also helps uncover and address potential inconsistencies in term definitions as well as gaps and redundancies in the overall ISOcat term set. The new representation can serve as a complement to the existing ISOcat model, providing additional support for authors and users in browsing, (re-)using, maintaining, and further extending the community’s terminological metadata repertoire.
The present article describes the first stage of the KorAP project, launched recently at the Institut für Deutsche Sprache (IDS) in Mannheim, Germany. The aim of this project is to develop an innovative corpus analysis platform to tackle the increasing demands of modern linguistic research. The platform will facilitate new linguistic findings by making it possible to manage and analyse primary data and annotations in the petabyte range, while at the same time allowing an undistorted view of the primary linguistic data, and thus fully satisfying the demands of a scientific tool. An additional important aim of the project is to make corpus data as openly accessible as possible in light of unavoidable legal restrictions, for instance through support for distributed virtual corpora, user-defined annotations and adaptable user interfaces, as well as interfaces and sandboxes for user-supplied analysis applications. We discuss our motivation for undertaking this endeavour and the challenges that face it. Next, we outline our software implementation plan and describe development to date.
The TEI has served for many years as a mature annotation format for corpora of different types, including linguistically annotated data. Although it is based on the consensus of a large community, it does not have the legal status of a standard. During the last decade, efforts have been undertaken to develop definitive de jure standards for linguistic data that not only act as a normative basis for the exchange of language corpora but also address recent advancements in technology, such as web-based standards, and the use of large and multiply annotated corpora.
In this article we will provide an overview of the process of international standardization and discuss some of the international standards currently being developed under the auspices of ISO/TC 37, a technical committee called “Terminology and other Language and Content Resources”. We will then discuss the relationship between the TEI Guidelines and these specifications with respect to their formal model, notation format, and annotation model. The conclusion of the paper provides recommendations for dealing with language corpora.
Towards a part-of-speech ontology: encoding morphemic units of two South African Bantu languages
(2012)
This article describes the design of an electronic knowledge base, namely a morpho-syntactic database structured as an ontology of linguistic categories, containing linguistic units of two related languages of the South African Bantu group: Northern Sotho and Zulu. These languages differ significantly in their surface orthographies, but are very similar on the lexical and sub-lexical levels. It is therefore our goal to describe the morphemes of these languages in a single common database in order to outline and interpret commonalities and differences in more detail. Moreover, the relational database which is developed defines the underlying morphemic units (morphs) for both languages. It will be shown that the electronic part-of-speech ontology goes hand in hand with part-of-speech tagsets that label morphemic units. This database is designed as part of a forthcoming system providing lexicographic and linguistic knowledge on the official South African Bantu languages.
In this paper, we examine methods to extract different domain-specific relations from the food domain. We employ different extraction methods ranging from surface patterns to co-occurrence measures applied on different parts of a document. We show that the effectiveness of a particular method depends very much on the relation type considered and that there is no single method that works equally well for every relation type. As we need to process a large amount of unlabeled data, our methods only require a low level of linguistic processing. This also has the advantage that these methods can provide responses in real time.
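One common co-occurrence measure of the kind referred to above is pointwise mutual information (PMI) over item pairs appearing in the same document unit. The following is a generic sketch, not the authors' implementation; the item names are purely illustrative:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_pairs(documents):
    """documents: list of sets of items mentioned in the same document unit.
    Returns PMI scores (log2 of observed vs. expected co-occurrence
    probability) for every pair of items that co-occurs at least once."""
    n_docs = len(documents)
    item_counts = Counter()
    pair_counts = Counter()
    for doc in documents:
        items = set(doc)
        item_counts.update(items)
        pair_counts.update(frozenset(p) for p in combinations(sorted(items), 2))
    scores = {}
    for pair, c in pair_counts.items():
        a, b = tuple(pair)
        p_ab = c / n_docs
        p_a = item_counts[a] / n_docs
        p_b = item_counts[b] / n_docs
        scores[pair] = math.log2(p_ab / (p_a * p_b))
    return scores
```

Pairs that co-occur more often than their individual frequencies predict receive positive scores, which makes PMI a natural candidate for ranking food items that are typically consumed together.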
We present an experimental approach to determining natural dimensions of story comparison. The results show that untrained test subjects generally do not privilege structural information. When asked to justify sameness ratings, they may refer to content, but when asked to state differences, they mostly refer to style, concrete events, details and motifs. We conclude that adequate formal models of narratives must represent such non-structural data.
In this chapter, I will focus on the phenomenon of drop out, i.e., withdrawal from the turn due to overlapping talk, in order to reflect on the link between “unfinished” turns and participation framework. With the help of a sequential and multimodal analysis inspired by the conversation analytical approach, I will show that dropping out from a turn is strongly linked to the availability displayed by potential recipients of a turn-at-talk. Although conversation analysis has described in detail the systematics of overlapping talk, especially of its onset (Jefferson 1973, 1983, 1986) and its resolution (Schegloff 2000; Jefferson 2004), the phenomenon of withdrawal from a turn due to simultaneous talk has not been investigated in detail. While it seems to be difficult to describe this interactional practice by referring exclusively to syntactic features (incompleteness of the turn), I suggest looking at turn withdrawal from a multimodal perspective (e.g. Goodwin 1980, 1981; Mondada 2007a; Schmitt 2005), taking into account visible resources like gaze or gesture. The problem of continuing or stopping a turn-in-progress in overlapping talk can be closely linked to the participation framework (Goodwin and Goodwin 2004), as speakers do visibly take into account their recipient’s availability and coordinate their turn construction with the dynamic changes of the participation framework and the interactional space.
The article discusses the possibilities and challenges of combining conversation analysis and ethnography in the study of everyday family life. We argue that such a combination requires the decision whether to prioritise interaction data or ethnographic (in particular, interview) data in the analysis. We present a conversation analytic case study of how household work is commonly brought up in the interactions of one couple and bring this to bear on a re-analysis of a possible conflict situation originally described in the ethnographic analysis by Klein, Izquierdo, and Bradbury (2007), published in this journal. While the findings of the two analyses converge, they inform us about different dimensions of couple interaction. The ethnographic analysis is focused on participants’ experiences, and the conversation analysis is focused on participants’ practices. We conclude that the methodological decision to prioritise interaction or interview data has consequences for the kind of questions we can ask.
This paper presents an annotation scheme for English modal verbs together with sense-annotated data from the news domain. We describe our annotation scheme and discuss problematic cases for modality annotation based on the inter-annotator agreement during the annotation. Furthermore, we present experiments on automatic sense tagging, showing that our annotations do provide a valuable training resource for NLP systems.