This paper is concerned with relative constructions in non-standard varieties of European languages, which will be analyzed on the basis of three typological parameters (word order, relative element, syntactic role of the relativized item). The validity of claims raised in studies on the areal distribution of relative constructions in Europe will be checked against the results of the analysis, so as to ascertain whether they still hold when non-standard varieties are examined.
Corpora with high-quality linguistic annotations are an essential component in many NLP applications and a valuable resource for linguistic research. For obtaining these annotations, a large amount of manual effort is needed, making the creation of these resources time-consuming and costly. One attempt to speed up the annotation process is to use supervised machine-learning systems to automatically assign (possibly erroneous) labels to the data and ask human annotators to correct them where necessary. However, it is not clear to what extent these automatic pre-annotations are successful in reducing human annotation effort, and what impact they have on the quality of the resulting resource. In this article, we present the results of an experiment in which we assess the usefulness of partial semi-automatic annotation for frame labeling. We investigate the impact of automatic pre-annotation of differing quality on annotation time, consistency and accuracy. While we found no conclusive evidence that it can speed up human annotation, we found that automatic pre-annotation does increase its overall quality.
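The pre-annotation setup described above can be sketched in a few lines: a classifier assigns provisional labels with a confidence score, and only low-confidence items are routed to human annotators. This is a minimal illustration of the general idea, not the article's actual frame-labeling pipeline; the classifier, labels, and threshold are invented.

```python
# Minimal sketch of a pre-annotation workflow (hypothetical data and
# threshold; the article's actual frame-labeling setup is more involved).

def pre_annotate(tokens, classifier):
    """Assign a (label, confidence) pair to every token."""
    return [classifier(tok) for tok in tokens]

def items_for_review(annotations, threshold=0.8):
    """Route low-confidence pre-annotations to human annotators."""
    return [i for i, (_, conf) in enumerate(annotations) if conf < threshold]

# Toy classifier standing in for a trained frame labeler.
def toy_classifier(token):
    lexicon = {"give": ("Giving", 0.95), "run": ("Self_motion", 0.60)}
    return lexicon.get(token, ("Unknown", 0.0))

tokens = ["give", "run", "book"]
annotations = pre_annotate(tokens, toy_classifier)
print(items_for_review(annotations))  # indices needing manual correction → [1, 2]
```

The threshold trades annotation effort against the risk that annotators trust (and thus fail to correct) erroneous high-confidence labels, which is exactly the quality question the experiment addresses.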
Positioning analysis, a variant of discourse analysis, was used to explore the narratives of 40 psychiatric patients (11 females and 29 males; mean age = 40 years) who had manifest difficulties with engagement with statutory mental health services. Positioning analysis is a qualitative method that captures how people linguistically position the roles and identities of themselves and others in their day-to-day lives and narratives. The language of disengagement incorporated the passive positioning of self in relation to their lives and treatment through the use of metaphor, the passive voice and ‘them and us’ attribution, while the discourse of engagement incorporated more active positioning of self, achieved through the use of the personal pronoun we and metaphoric references to balanced relationships. The findings corroborate previous thematic analysis that highlighted the importance of identity and agency in the ‘making or breaking’ of therapeutic relationships (Priebe et al. 2005). Implications are discussed in relation to how positioning analysis may help signal and emphasize important life and therapeutic experiences in spoken narratives as well as clinical consultations.
Traditionally, research on language change has been a post-mortem activity, focused on isolated changes that are complete and often only documented in written texts. In the 1960s the field was advanced considerably by Labovian sociolinguistics and the investigation of “change in progress” adduced through patterns of community-internal linguistic variation correlated with external facts about speakers such as age and class (see Labov 1994 for an overview). However, despite the many benefits of such work on “dynamic synchrony,” we still know relatively little about how language change unfolds over the lifetimes of individual speakers, that is, in real time (cf. Bailey et al. 1991). The logistical challenges of such research are, of course, considerable. Whereas it is straightforward for psycholinguists to observe language development in children over the course of a few years, documenting changes in the verbal behavior of individuals over several decades is by contrast much less feasible. Nevertheless, present theoretical models of language change could be considerably improved by the results of real-time studies.
In the project SemDok (Generic document structures in linearly organised texts), funded by the German Research Foundation (DFG), a discourse parser for a complex text type (exemplified by scientific articles) is being developed. Discourse parsing (henceforth DP) according to Rhetorical Structure Theory (RST) (Mann and Taboada, 2005; Marcu, 2000) deals with automatically assigning a text a tree structure in which discourse segments and rhetorical relations between them, such as Concession, are marked. For identifying the combinable segments, declarative rules are employed, which describe linguistic and structural cues and constraints on possible combinations by referring to different XML annotation layers of the input text, and to external knowledge bases such as a discourse marker lexicon, a lexico-semantic ontology (later to be combined with a domain ontology), and an ontology of rhetorical relations. In our text-technological environment, the obvious choice of formalism to represent such ontologies is OWL (Smith et al., 2004). In this paper, we describe two OWL ontologies and how they are consulted from the discourse parser to solve certain tasks within DP. The first ontology is a taxonomy of rhetorical relations which was developed in the project. The second one is an OWL version of GermaNet, the model of which we designed together with our project partners.
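One way the parser can consult a relation taxonomy is a simple subsumption test: given a specific relation, walk upwards through the hierarchy to check whether it falls under a more general class. The sketch below illustrates this with an invented fragment of a relation hierarchy encoded as a plain mapping; the project's actual ontology is in OWL and considerably richer.

```python
# Hedged sketch: a rhetorical-relation taxonomy queried during discourse
# parsing. The relation names and the child -> parent mapping below are
# illustrative, not the project's actual OWL ontology.

TAXONOMY = {
    "Concession": "Presentational",
    "Evidence": "Presentational",
    "Cause": "SubjectMatter",
    "Presentational": "RhetoricalRelation",
    "SubjectMatter": "RhetoricalRelation",
}

def is_a(relation, ancestor):
    """Walk the taxonomy upwards to test subsumption."""
    while relation is not None:
        if relation == ancestor:
            return True
        relation = TAXONOMY.get(relation)
    return False

print(is_a("Concession", "RhetoricalRelation"))  # True
print(is_a("Cause", "Presentational"))           # False
```

A rule in the parser can then be stated once for a general class (e.g. all presentational relations) instead of being repeated for each specific relation.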
E-VALBU: Advanced SQL/XML processing of dictionary data using an object-relational XML database
(2008)
Contemporary practical lexicography uses a wide range of advanced technological aids, most prominently database systems for the administration of dictionary content. Since XML has become a de facto standard for the coding of lexicographic articles, integrated markup functionality – such as query, update, or transformation of instances – is of particular importance. Even the multi-channel distribution of dictionary data benefits from powerful XML database services. Exemplified by E-VALBU, the most comprehensive electronic dictionary on German verb valency, we outline an integrated approach for advanced XML storing and processing within an object-relational database, and for a public retrieval frontend using Web Services and AJAX technology.
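The kind of query functionality described above can be illustrated on a toy scale: dictionary articles coded as XML, retrieved by an XPath-style lookup. The element names (`entry`, `lemma`, `valency`) and the valency notation are invented for illustration and are not E-VALBU's actual schema, which lives inside an object-relational database rather than a flat file.

```python
# Illustrative sketch only: querying dictionary articles coded in XML.
# Element names and the valency notation are invented, not E-VALBU's schema.
import xml.etree.ElementTree as ET

DICTIONARY = """
<dictionary>
  <entry><lemma>geben</lemma><valency>NP-NP-NP</valency></entry>
  <entry><lemma>laufen</lemma><valency>NP</valency></entry>
</dictionary>
"""

root = ET.fromstring(DICTIONARY)
# Retrieve the valency pattern(s) recorded for one lemma.
pattern = [e.findtext("valency") for e in root.iter("entry")
           if e.findtext("lemma") == "geben"]
print(pattern)  # ['NP-NP-NP']
```

In an SQL/XML setting, the same lookup would be expressed with XQuery embedded in SQL (e.g. via `XMLQuery`), letting the database index and optimize the XML access instead of the application code.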
Freezing in it-clefts
(2013)
Digital Text Collections, Linguistic Research Data, and Mashups: Notes on the Legal Situation
(2008)
Comprehensive data repositories are an essential part of practically all research carried out in the digital humanities nowadays. For example, library science, literary studies, and computational and corpus linguistics strongly depend on online archives that are highly sustainable and that contain not only digitized texts but also audio and video data as well as additional information such as metadata and arbitrary annotations. Current Web technologies, especially those that are related to what is commonly referred to as the Web 2.0, provide a number of novel functions such as multiuser editing or the inclusion of third-party content and applications that are also highly attractive for research applications in the areas mentioned above. Hand in hand with this development goes a high degree of legal uncertainty. The special nature of the data entails that, in quite a few cases, there are multiple holders of personal rights (mostly copyright) to different layers of data that often have different origins. This article discusses the legal problems of multiple authorships in private, commercial, and research environments. We also introduce significant differences between European and U.S. law with regard to the handling of this kind of data for scientific purposes.
An approach to the unification of XML (Extensible Markup Language) documents with identical textual content and concurrent markup in the framework of XML-based multi-layer annotation is introduced. A Prolog program allows the possible relationships between element instances on two annotation layers that share PCDATA to be explored and also the computing of a target node hierarchy for a well-formed, merged XML document. Special attention is paid to identity conflicts between element instances, for which a default solution that takes into account metarelations that hold between element types on the different annotation layers is provided. In addition, rules can be specified by a user to prescribe how identity conflicts should be solved for certain element types.
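The core of the unification step is deciding how element instances from two layers relate over the shared text. A minimal sketch, assuming each layer has been reduced to character spans over the common PCDATA (the spans and layer contents are invented): classify each pair of spans as identical, hierarchically nested, disjoint, or genuinely overlapping. The abstract's Prolog program additionally computes the merged node hierarchy and resolves identity conflicts via metarelations; here we only show the span classification.

```python
# Sketch of the span comparison underlying markup unification, assuming
# half-open character spans (start, end) over the shared PCDATA.

def relate(a, b):
    """Classify the relation between two (start, end) spans."""
    if a == b:
        return "identity"          # the identity-conflict case
    if a[0] <= b[0] and b[1] <= a[1]:
        return "includes"          # a dominates b in the merged tree
    if b[0] <= a[0] and a[1] <= b[1]:
        return "included_in"       # b dominates a
    if a[1] <= b[0] or b[1] <= a[0]:
        return "disjoint"
    return "overlap"               # concurrent, non-hierarchical markup

# Two layers over the same text: a sentence span and a clause span.
print(relate((0, 40), (0, 20)))   # includes
print(relate((0, 20), (10, 30)))  # overlap
```

Only the first four outcomes map directly onto a single well-formed XML tree; the "overlap" case is exactly what makes concurrent markup hard and motivates the merging machinery (and, for "identity", the metarelation-based default) described in the abstract.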
This article introduces the topic of “Multilingual language resources and interoperability”. We start with a taxonomy and parameters for classifying language resources. We then provide examples of interoperability issues and resource architectures that address them. Finally, we discuss aspects of linguistic formalisms and interoperability.
We report on finished work in a project that is concerned with providing methods, tools, best practice guidelines, and solutions for sustainable linguistic resources. The article discusses several general aspects of sustainability and introduces an approach to normalizing corpus data and metadata records. Moreover, the architecture of the sustainability platform implemented by the authors is described.
This article shows that the TEI tag set for feature structures can be adopted to represent a heterogeneous set of linguistic corpora. The majority of corpora are annotated using markup languages that are based on the Annotation Graph framework, the upcoming Linguistic Annotation Format ISO standard, or according to tag sets defined by or based upon the TEI guidelines. A unified representation comprises the separation of conceptually different annotation layers contained in the original corpus data (e.g. syntax, phonology, and semantics) into multiple XML files. These annotation layers are linked to each other implicitly by the identical textual content of all files. A suitable data structure for the representation of these annotations is a multi-rooted tree that again can be represented by the TEI and ISO tag set for feature structures. The mapping process and representational issues are discussed as well as the advantages and drawbacks associated with the use of the TEI tag set for feature structures as a storage and exchange format for linguistically annotated data.
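The implicit link between the layer files can be made concrete in a few lines: whatever markup each layer adds, concatenating its text content must yield the same underlying string. A minimal sketch, with invented element names (the article's actual representation uses the TEI feature-structure tag set, not these tags):

```python
# Minimal sketch, assuming each annotation layer is a separate XML file
# whose concatenated text content must be identical. Element names are
# illustrative, not the TEI feature-structure tag set itself.
import xml.etree.ElementTree as ET

def pcdata(xml_string):
    """Concatenate all text content of one annotation layer."""
    return "".join(ET.fromstring(xml_string).itertext())

syntax = "<s><np>Mary</np> <vp>sings</vp></s>"
phono  = "<utt><syl>Ma</syl><syl>ry</syl> <syl>sings</syl></utt>"

# The implicit link between layers: identical underlying text.
print(pcdata(syntax) == pcdata(phono))  # True
```

Note that the two layers tokenize the text differently (words vs. syllables), so neither hierarchy embeds in the other; this is what makes the multi-rooted tree, rather than a single XML tree, the natural data structure.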
The present paper provides a new approach to the form-function relation in Latin declension. First, inflections are discussed from a functional point of view with special consideration to questions of syncretism. A case hierarchy is justified for Latin that conforms to general observations on case systems. The analysis leads to a markedness scale that provides a ranking of case-number-combinations from unmarked to most marked. Systematic syncretism always applies to contiguous sections of the case-number-scale (‘syncretism fields’). Second, inflections are analysed from a formal point of view taking into account partial identities and differences among noun endings. Theme vowels being factored out, endings are classified on the basis of their make-up, e.g., as sigmatic endings; as containing desinential (non-thematic) vowels; as containing long vowels; and so on. The analysis leads to a view of endings as involving more basic elements or ‘markers’. Endings of the various declensions instantiate a small number of types, and these can be put into a ranked order (a formal scale) that applies transparadigmatically. Third, the relationship between the independently substantiated functional and formal hierarchies is examined. In any declension, the form-function-relationship is established by aligning the relevant formal and functional scales (or ‘sequences’). Some types of endings are in one-to-one correspondence with bundles of morphosyntactic properties as they should be according to a classical morphemic approach, but others are not. Nevertheless, endings can be assigned a uniform role if the form-function-relationship is understood to be based on an alignment of formal and functional sequences. A diagrammatical form-function relationship is revealed that could not be captured in classical or refined morphemic approaches.
In order to explore the influence of context on the phonetic design of talk-in-interaction, we investigated the pitch characteristics of short turns (insertions) that are produced by one speaker between turns from another speaker. We investigated the hypothesis that the speaker of the insertion designs her turn as a pitch match to the prior turn in order to align with the previous speaker’s agenda, whereas non-matching displays that the speaker of the insertion is non-aligning, for example to initiate a new action. Data were taken from the AMI meeting corpus, focusing on the spontaneous talk of first-language English participants. Using sequential analysis, 177 insertions were classified as either aligning or non-aligning in accordance with definitions of these terms in the Conversation Analysis literature. The degree of similarity between the pitch contour of the insertion and that of the prior speaker’s turn was measured, using a new technique that integrates normalized F0 and intensity information. The results showed that aligning insertions were significantly more similar to the immediately preceding turn, in terms of pitch contour, than were non-aligning insertions. This supports the view that choice of pitch contour is managed locally, rather than by reference to an intonational lexicon.
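One plausible ingredient of such a comparison can be sketched as follows: z-score normalisation removes register differences between speakers (an insertion can "match" a prior turn's contour an octave away), after which contour similarity can be measured pointwise. This is an illustrative simplification with invented F0 values; the paper's actual technique additionally integrates intensity information.

```python
# Hedged sketch: z-normalised F0 contours compared by mean absolute
# difference. The F0 values are invented; the study's technique also
# integrates intensity, which is omitted here.
import math

def znorm(contour):
    """Z-score normalise an F0 contour to remove speaker register."""
    mean = sum(contour) / len(contour)
    sd = math.sqrt(sum((x - mean) ** 2 for x in contour) / len(contour))
    return [(x - mean) / sd for x in contour]

def contour_distance(a, b):
    """Mean absolute difference of two equal-length normalised contours."""
    na, nb = znorm(a), znorm(b)
    return sum(abs(x - y) for x, y in zip(na, nb)) / len(na)

# A pitch-matching insertion (same shape, lower register) vs. a
# non-matching, steadily rising one.
prior   = [200, 210, 205, 190]
match   = [120, 126, 123, 114]
nomatch = [120, 130, 140, 150]
print(contour_distance(prior, match) < contour_distance(prior, nomatch))  # True
```

Because the matching contour here is an exact scaled copy of the prior turn's shape, its normalised distance is zero, capturing the intuition that pitch matching is about contour shape relative to the speaker's own range, not absolute frequency.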