Document Type
- Part of a Book (363)
- Conference Proceeding (237)
- Article (189)
- Book (63)
- Other (31)
- Working Paper (22)
- Contribution to a Periodical (7)
- Review (7)
- Doctoral Thesis (6)
- Preprint (5)
Language
- German (516)
- English (416)
- French (5)
- Multiple languages (3)
Keywords
- Korpus &lt;Linguistik&gt; (940)
Publication state
- Veröffentlichungsversion (544)
- Zweitveröffentlichung (203)
- Postprint (51)
- Erstveröffentlichung (2)
- Ahead of Print (1)
- Preprint (1)
Review state
- (Verlags-)Lektorat (416)
- Peer-Review (320)
- Qualifikationsarbeit (Dissertation, Habilitationsschrift) (8)
- Review-Status unbekannt (5)
- Abschlussarbeit (Bachelor, Master, Diplom, Magister) (3)
- Zweitveröffentlichung (2)
Publisher
- de Gruyter (138)
- Institut für Deutsche Sprache (57)
- European Language Resources Association (ELRA) (52)
- Narr (47)
- Leibniz-Institut für Deutsche Sprache (IDS) (46)
- IDS-Verlag (25)
- Narr Francke Attempto (23)
- Association for Computational Linguistics (18)
Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus
(2021)
Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. There are some freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and ParaCrawl corpora, but they have quality issues, especially for low-resource languages, and recreating or updating them is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Moreover, unlike OSCAR, the metadata information is kept at the document level. We release our pipeline under an open-source license and publish the corpus under a research-only license.
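The abstract stays at the architectural level, but the central step in goclassy-style pipelines is per-line language identification with fastText's pretrained model, followed by simple quality filters. A minimal sketch of that idea, assuming the pretrained lid.176.bin model file is available locally; the thresholds are illustrative, not Ungoliant's actual settings:

```python
# Minimal sketch of fastText-based language routing, the central step in
# goclassy-style pipelines. Assumes the pretrained lid.176.bin model file
# (distributed on the fastText website) is available; the thresholds are
# illustrative, not Ungoliant's actual settings.
from collections import defaultdict

import fasttext  # pip install fasttext

model = fasttext.load_model("lid.176.bin")

def route_lines(lines, min_conf=0.8, min_len=100):
    """Group lines by predicted language, dropping short or uncertain ones."""
    buckets = defaultdict(list)
    for line in lines:
        line = line.strip()
        if len(line) < min_len:                 # crude length-based quality filter
            continue
        labels, probs = model.predict(line, k=1)
        lang = labels[0].replace("__label__", "")
        if probs[0] >= min_conf:
            buckets[lang].append(line)
    return buckets

# with open("crawl_dump.txt", encoding="utf-8") as fh:
#     buckets = route_lines(fh)
```

Ungoliant's actual implementation is written in Rust and additionally handles document-level metadata; the sketch only shows the classification step.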
Gehören nun die Männer an den Herd? Anmerkungen zum Wandel der Rollenbilder von Mann und Frau
(2015)
Studenten, StudentInnen, Studierende? Aktuelle Verwendungspräferenzen bei Personenbezeichnungen
(2020)
This paper presents opinions on and attitudes towards gender-fair language. To this end, it examines various options for designating people who study: these are first described, and their frequencies in the German Reference Corpus (Deutsches Referenzkorpus) are evaluated. The paper then turns explicitly to opinions and attitudes, analysing data from the 2008 and 2017 Germany-wide surveys. The more recent survey elicited lay-linguistic usage preferences for personal designations; most respondents prefer the participial form (den Studierenden). These usage preferences correlate above all with respondents' age and political orientation. Overall, however, the topic of gender-fair language plays only a subordinate role for most respondents.
COSMAS. Ein Computersystem für den Zugriff auf Textkorpora. Version R.1.3-1. Benutzerhandbuch
(1994)
Ever since large volumes of data and computing capacity became available to researchers, linguistics too has increasingly worked in a data-driven way. Data-driven research does not start from a hypothesis but searches for statistically salient patterns in the data. In doing so, language is often treated, in a strongly simplified way, as a linear sequence of words. This study shows for the first time how additionally drawing on syntactic annotations helps to capture linguistic structures of German more accurately.
The comparison of the academic languages of linguistics and literary studies serves as a showcase application. The two disciplines are often grouped together as subfields of German studies (Germanistik). Their scholarly practice, however, differs systematically with respect to research data, methods and epistemic interests, and this is reflected in their academic languages as well.
The role of anticipatory documentation of understanding proves, in my view, to be particularly important in the interviews from the Israelkorpus. The starting point is the fact that the informants are people with especially delicate biographical backgrounds. The interviewers must therefore reckon with the strong emotional strain to which the interviewees are exposed while reconstructing their life stories. A very direct question-answer style could be perceived as unpleasant because of this emotional strain. Instead, the use of procedures of anticipatory documentation of understanding clearly shows, in my view, how the interviewers strive for empathy and cooperate with the interviewees in the sense of an intersubjective constitution of interaction. The aim of this paper is to show how such procedures of anticipatory documentation of understanding can be realized through the systematic use of the connectors und, also and dann.
This book fills a gap in research on connectors by examining their use in spoken German. Its research question brings together elements from the traditional grammatical approach and from pragmatically based research on spoken language. Following the methods of Interactional Linguistics, the author analyzes the use of the conjunctions "und" and "aber" and the adverbial connectors "also" and "dann" in two corpora of autobiographical interviews. The study shows how connectors can be deployed to handle a variety of communicative tasks, to establish intersubjectivity and to organize conversation.
This paper presents the results of a study on the intonation of questioning activities in everyday German conversation. Our investigation explores the extent to which intonation contributes to contextualizing conversational questions. The analysis draws on Peters' autosegmental-metrical model and on Selting's taxonomic model from interactional prosody research. These models describe, respectively, the phonological and the pragmatic aspects of question intonation, two dimensions that cannot provide a complete description on their own. On the basis of authentic conversational data from the FOLK corpus, we argue that Peters' autosegmental-metrical model and Selting's taxonomic model of question intonation are compatible. The features of the two models can be combined into bundles that make it possible to capture the intonation of questions.
In this paper, we describe a data processing pipeline used for the annotated spoken corpora of Uralic languages created in the INEL (Indigenous Northern Eurasian Languages) project. With this processing pipeline we convert the data into a lossless standard format (ISO/TEI) for long-term preservation while simultaneously enabling a powerful search over this version of the data. For each corpus, the input we are working with is a set of files in the EXMARaLDA XML format, which contain transcriptions, multimedia alignment, morpheme segmentation and other kinds of annotation. The first step of processing is the conversion of the data into a certain subset of TEI following the ISO standard 'Transcription of spoken language', with the help of an XSL transformation. The primary purpose of this step is to obtain a representation of our data in a standard format, which will ensure its long-term accessibility. The second step is the conversion of the ISO/TEI files to a JSON format used by the Tsakorpus search platform. This step allows us to make the corpora available through a web-based search interface. In addition, the existence of such a converter allows other spoken corpora with ISO/TEI annotation to be made accessible online in the future.
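As a rough illustration of the two conversion steps described above, the following sketch chains an XSL transformation (step 1) with a minimal JSON export (step 2). The stylesheet name and the JSON field names are placeholders, not the project's actual ones:

```python
# A rough illustration of the two-step conversion, assuming a stylesheet
# file exmaralda2isotei.xsl (the project's real stylesheet name may differ)
# and inventing the JSON field names, which are Tsakorpus-like placeholders.
import json
from lxml import etree

def exmaralda_to_isotei(exb_path, xsl_path="exmaralda2isotei.xsl"):
    """Step 1: apply the XSL transformation EXMARaLDA XML -> ISO/TEI."""
    transform = etree.XSLT(etree.parse(xsl_path))
    return transform(etree.parse(exb_path))

def isotei_to_json(tei_doc):
    """Step 2: flatten the ISO/TEI word tokens into a search-friendly JSON doc."""
    tei_w = "{http://www.tei-c.org/ns/1.0}w"
    words = [w.text for w in tei_doc.iter(tei_w)]
    return json.dumps({"sentences": [{"words": words}]}, ensure_ascii=False)

# tei = exmaralda_to_isotei("recording.exb")
# print(isotei_to_json(tei))
```

A real converter would also carry over timing, morpheme segmentation and the other annotation tiers mentioned in the abstract.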
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality and curation criteria for annotated audiovisual language data. Building on existing resources developed earlier by the participating institutions, QUEST develops tools that can be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible to researchers without a substantial technical background and helping them produce high-quality data. The main tools we intend to provide are the depositors' questionnaire and automatic quality assurance, both developed as web applications. They are accompanied by a knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we split linguistic data into three resource classes (data deposits, collections and corpora). The class of a resource defines the strictness of the quality assurance it should undergo. This division is introduced so that overly strict quality criteria do not deter researchers from depositing their data.
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality and curation criteria for annotated audiovisual language data. Building on existing resources developed earlier by the participating institutions, QUEST also develops tools that can be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible to researchers without a substantial technical background and helping them produce high-quality data. The main tools we intend to provide are a questionnaire and automatic quality assurance for depositors of language resources, both developed as web applications. They are accompanied by a knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we consider three main data maturity levels in order to decide on a suitable level of strictness for the quality assurance. This division has been introduced to prevent a set of ideal quality criteria from discouraging researchers from depositing, or even assessing, their (legacy) data. The tools described in the paper are work in progress and are expected to be released by the end of the QUEST project in 2022.
The CMDI Explorer
(2020)
We present the CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. The CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
CMDI Explorer
(2021)
We present CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
This paper addresses long-term archival for large corpora. The focus is on three aspects specific to language resources: (1) the removal of resources for legal reasons, (2) the versioning of (unchanged) objects in constantly growing resources, especially where objects can be part of multiple releases but also part of different collections, and (3) the conversion of data to new formats for digital preservation. We motivate why language resources may have to be changed and why formats may need to be converted. As a solution, the use of an intermediate proxy object called a signpost is suggested. The approach is exemplified with the corpora of the Leibniz Institute for the German Language in Mannheim, namely the German Reference Corpus (DeReKo) and the Archive for Spoken German (AGD).
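To make the signpost idea concrete, here is a toy rendering of such a proxy object: a persistent identifier resolves to the signpost, which in turn points at concrete versions and formats of the resource, and which survives removals and format migrations. All field names are invented for illustration:

```python
# Toy rendering of the "signpost" proxy object described above. A PID
# resolves to the signpost; the signpost points at concrete versions and
# formats. All field names are invented for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Version:
    version_id: str
    format: str                    # e.g. "I5-XML", or a successor format after migration
    location: Optional[str]        # None once the object was removed for legal reasons

@dataclass
class Signpost:
    pid: str                                   # the persistent identifier users cite
    versions: list = field(default_factory=list)
    withdrawn: bool = False                    # legal removal hides content, not the record

    def resolve(self, version_id: Optional[str] = None) -> Optional[Version]:
        """Return the requested (or latest) still-available version."""
        if self.withdrawn:
            return None
        available = [v for v in self.versions if v.location is not None]
        if version_id is not None:
            available = [v for v in available if v.version_id == version_id]
        return available[-1] if available else None

# sp = Signpost("hdl:20.500/example",
#               [Version("1.0", "I5-XML", "tape://a1"),
#                Version("1.1", "TEI P5", "tape://a2")])
# sp.resolve()  # -> the 1.1 version
```

The point of the indirection is that citations and collection memberships reference the signpost, not the mutable object behind it.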
Signposts for CLARIN
(2020)
An implementation of CMDI-based signposts and its use is presented in this paper. Arnold et al. (2020) present signposts as a solution to challenges in the long-term preservation of corpora, especially corpora that are continuously extended and subject to modification, e.g. due to legal injunctions, but that may also overlap with respect to constituents and be subject to migrations to new data formats. We describe the contribution signposts can make to the CLARIN infrastructure and document the design of the CMDI profile.
Signposts for CLARIN
(2021)
An implementation of CMDI-based signposts and its use is presented in this paper. Arnold, Fisseni et al. (2020) present signposts as a solution to challenges in the long-term preservation of corpora. Though applicable to digital resources in general, we focus on corpora, especially those that are continuously extended or subject to modification, e.g. due to legal injunctions, but that may also overlap with respect to constituents and be subject to migrations to new data formats. We describe the contribution signposts can make to the CLARIN infrastructure, notably virtual collections, and document the design of the CMDI profile.
This paper presents a corpus containing 35 dialogues of spontaneously spoken southern German, including half an hour of articulography for 13 of the speakers. Speakers were seated in separate recording chambers, mimicking a telephone call, and recorded on individual audio channels. The corpus provides manually corrected word boundaries and automatically aligned segment boundaries. Annotations are provided in the Praat format. In addition to the audio recordings, speakers filled out a detailed questionnaire assessing, among other things, their audio-visual media consumption habits.
This paper investigates how corpus-linguistic methods can yield insights into the structure of meaning paraphrases in dictionaries. These insights are to be used to describe the structure of meaning paraphrases in dictionaries comprehensively and systematically, for example with a view to optimizing meaning paraphrases for so-called electronic dictionaries or to extracting lexical-semantic information for NLP purposes.
Unlike in Sunday sermons, the Catholic and Protestant authors of Kirche in 1live have only 90 seconds to convey their Christian message. This paper examines how they do so. Which content do they consider relevant? What linguistic design do they choose? Do Catholic and Protestant authors reach for the same content and linguistic means, or do denominational preferences and differences emerge? These questions are pursued on a corpus of Kirche in 1live radio sermons from 2012 to 2021 (2,755 texts with a total of 726,570 tokens) using a mix of quantitative and qualitative methods. The study is carried out within the DFG project "Sprache und Konfession 500 Jahre nach der Reformation" at the German department of the Westfälische Wilhelms-Universität Münster.
We present an approach to one aspect of managing complex access scenarios for large and heterogeneous corpora: handling user queries that, intentionally or due to the complexity of the queried resource, target texts or annotations outside the given user's permissions. We first outline the overall architecture of the corpus analysis platform KorAP, devoting some attention to the way it handles multiple query languages by implementing ISO CQLF (Corpus Query Lingua Franca), a component crucial for the functionality discussed here. Next, we look at query rewriting as used by KorAP and zoom in on one kind of this procedure, namely the rewriting of queries forced by data access restrictions.
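The following sketch illustrates the general shape of such access-driven query rewriting: before execution, a corpus restriction is injected into the user's query so that it can only match permitted documents, and the rewrite is recorded. The JSON structure is loosely modelled on a serialized corpus query but is invented here and is not KorAP's actual internal representation:

```python
# Loose sketch of access-driven query rewriting: before execution, a corpus
# restriction is injected into the query object so it can only match
# documents the user is allowed to see, and the rewrite is recorded. The
# JSON shape is invented for illustration, not KorAP's internal format.

def rewrite_for_access(query, allowed_availability):
    restriction = {
        "@type": "doc",
        "key": "availability",
        "match": "eq",
        "value": allowed_availability,          # e.g. "CC-BY-SA"
    }
    original = query.get("collection")
    query["collection"] = (
        {"@type": "docGroup", "operation": "and",
         "operands": [original, restriction]}
        if original else restriction
    )
    # leave an audit trail so the client can see the query was modified
    query.setdefault("rewrites", []).append(
        {"operation": "injection", "scope": "collection"})
    return query

query = {"query": {"@type": "token", "key": "Haus"}}
print(rewrite_for_access(query, "CC-BY-SA"))
```

Recording the rewrite is the part that keeps the procedure transparent: the user sees both the results and the fact that the query was narrowed.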
As the Web ought to be considered a series of sources rather than a source in itself, a problem facing corpus construction resides in meta-information and categorization. In addition, we need focused data to shed light on particular subfields of the digital public sphere. Blogs are relevant to that end, especially if the resulting web texts can be extracted along with metadata and made available in coherent and clearly describable collections.
The core task of the DWDS project group is to describe the vocabulary contained in the corpora lexicographically and on a corpus basis. In modern lexicography, statements about the linguistic aspects and properties of the words described, and about peculiarities of their use, are supported by corpus evidence. Empirically, huge text collections can substantiate hypotheses more precisely or in greater detail, and they make apparent how diversely language is actually realized in use. To this end, the DWDS platform offers, alongside the core corpora (balanced with respect to time and text type) and the newspaper corpora, a series of special corpora that deviate from the former in their subject matter or their linguistic characteristics. The web corpora form an essential part of these special corpora.
We present a method to identify and document a phenomenon on which there is very little empirical data: German phrasal compounds occurring as a single token (without punctuation between their components). Relying on linguistic criteria, our approach requires an operational notion of compounds that can be applied systematically, as well as (web) corpora that are large and diverse enough to contain rarely seen phenomena. The method is based on word segmentation and morphological analysis and takes advantage of a data-driven learning process. Our results show that coarse-grained identification of phrasal compounds is best performed with empirical data, whereas fine-grained detection could be improved with a combination of rule-based and frequency-based word lists. Along with the characteristics of web texts, the orthographic realizations seem to be linked to the degree of expressivity.
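A toy version of the detection idea: a token counts as a phrasal-compound candidate if it can be exhaustively segmented into several known word forms. The tiny lexicon and the greedy longest-match segmentation below stand in for the paper's data-driven segmentation and morphological analysis:

```python
# Toy candidate detection for one-token phrasal compounds, e.g. German
# "Wirschaffendas" -> "wir schaffen das". The tiny lexicon and the greedy
# segmentation stand in for the paper's data-driven models.

LEXICON = {"wir", "schaffen", "das", "ich", "bin", "so", "frei"}

def segment(token, lexicon=LEXICON):
    """Greedy longest-match segmentation; returns None if it fails."""
    token = token.lower()
    parts = []
    while token:
        for cut in range(len(token), 0, -1):
            if token[:cut] in lexicon:
                parts.append(token[:cut])
                token = token[cut:]
                break
        else:
            return None
    return parts

def is_candidate(token, min_parts=3):
    parts = segment(token)
    return parts is not None and len(parts) >= min_parts

print(is_candidate("Wirschaffendas"))   # True
print(is_candidate("Haus"))             # False
```

The min_parts threshold reflects the coarse-grained/fine-grained split in the abstract: exhaustive segmentation finds candidates cheaply, while separating true phrasal compounds from ordinary compounds needs the rule- and frequency-based word lists the authors mention.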
We present a collection of (currently) about 5,500 commands directed to voice-controlled virtual assistants (VAs) by sixteen initial users of a VA system in their homes. The collection comprises recordings captured by the VA itself and by a conditional voice recorder (CVR) selectively capturing the VA-directed commands plus some surrounding context. Alongside a description of the collection, we present initial findings on the patterns of use of the VA systems during the first weeks after installation, including usage timing, the development of usage frequency, distributions of sentence structures across commands, and (the development of) command success rates. We discuss the advantages and disadvantages of the collection-specific recording approach and describe potential research questions that can be investigated on the basis of the collection in the future, as well as the merit of combining quantitative corpus-linguistic approaches with qualitative in-depth analyses of single cases.
In this paper we present the results of an automatic classification of Russian texts into three levels of difficulty. Our aim is to build a study corpus of Russian in which an L2 student is able to select texts of a desired complexity. We build on a pilot study in which we classified Russian texts into two levels of difficulty. In the current paper, we apply the classification to an extended corpus of 577 labelled texts. The best-performing combination of features achieves an accuracy of 0.74 within at most one level of difference.
In this paper, we present first results of training a classifier to discriminate Russian texts by level of difficulty. For the classification we considered both surface-oriented features adopted from readability assessment and more linguistically informed, positional features, classifying texts into two levels of difficulty. This text classification is the main focus of our Levelled Study Corpus of Russian (LeStCoR), in which we aim to build a corpus adapted for language-learning purposes, selecting simpler texts for beginner second-language learners and more complex texts for advanced learners. The most discriminative feature in our pilot study was a lexical feature that approximates the accessibility of the vocabulary to the second-language learner in terms of the proportion of familiar words in the texts. The best feature setting achieved an accuracy of 0.91 on a pilot corpus of 209 texts.
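Both readability studies above describe feature-based classification; the sketch below shows the general setup with a standard classifier, a few surface features and a stand-in "familiar vocabulary" ratio. The features, word list and toy data are placeholders, not the papers' actual feature set:

```python
# Hedged sketch of the feature-based setup the two studies describe: a few
# surface features plus a "familiar vocabulary" ratio feed a standard
# classifier. Features, word list and data are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

FAMILIAR = {"я", "ты", "дом", "мама", "школа"}   # stand-in beginner lexicon

def features(text):
    words = text.split()
    sents = max(text.count("."), 1)
    return [
        len(words) / sents,                                       # mean sentence length
        float(np.mean([len(w) for w in words])),                  # mean word length
        sum(w.lower() in FAMILIAR for w in words) / len(words),   # familiar-word ratio
    ]

texts = ["мама дом . я школа .",
         "конституционность законодательства . процедура ратификации ."]
labels = [0, 1]                        # 0 = easier, 1 = harder (toy data)
X = np.array([features(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))                  # a real evaluation would cross-validate
```

The familiar-word ratio is the analogue of the pilot study's most discriminative lexical feature; on a real corpus one would cross-validate rather than evaluate on the training texts.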
We present a method for detecting and reconstructing separated particle verbs in a corpus of spoken German by following an approach suggested for written language. Our study shows that the method can be applied successfully to spoken language, compares different ways of dealing with structures that are specific to spoken language corpora, analyses some remaining problems, and discusses ways of optimising precision or recall for the method. The outlook sketches some possibilities for further work in related areas.
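The basic move behind such a method can be illustrated as follows: find a clause-final verb particle and glue it back onto the (naively lemmatized) finite verb. A real system works on POS-tagged corpus data; the particle list and the one-rule lemmatizer here are deliberately crude stand-ins:

```python
# Toy reconstruction of separated particle verbs ("sie ruft ... an" ->
# "anrufen"). A real system works on POS-tagged corpus data; the particle
# list and the one-rule lemmatizer below are deliberately crude stand-ins.

PARTICLES = {"an", "auf", "aus", "ein", "mit", "nach", "vor", "zu"}

def naive_lemma(verb):
    """Very naive delemmatization of a 3rd-person-singular form: ruft -> rufen."""
    return verb[:-1] + "en" if verb.endswith("t") else verb

def reconstruct(tokens, finite_verb_index):
    """If the clause ends in a verb particle, glue it onto the finite verb."""
    last = tokens[-1].lower().rstrip(".!?")
    if last in PARTICLES:
        return last + naive_lemma(tokens[finite_verb_index].lower())
    return None

print(reconstruct("sie ruft ihre Mutter an".split(), 1))  # -> "anrufen"
```

Spoken-language corpora complicate exactly the two inputs this sketch takes for granted: clause boundaries and the position of the finite verb, which is why the paper compares different ways of handling such structures.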
In this paper, we present an overview of freely available web applications providing online access to spoken language corpora. We explore and discuss various solutions with which the corpus providers and corpus platform developers address the needs of researchers who are working with spoken language. The paper aims to contribute to the long-overdue exchange and discussion of methods and best practices in the design of online access to spoken language corpora.
Lexical explorer
(2018)
The Lexical Explorer tool makes it possible to search and query the corpus frequency data of FOLK (Forschungs- und Lehrkorpus Gesprochenes Deutsch; Schmidt 2014) and GeWiss (Gesprochene Wissenschaftssprache; Fandrych, Meißner & Wallner 2017). The tool consists of tables that were developed for the purposes of the LeGeDe project (Möhrs et al. 2017). The figures are based on DGD release 2.10 (23 May 2018). For the comparison between spoken-language corpora and DeReKo, DeReKo version 2016-II (30 September 2016) is used, without the Wikipedia subcorpora (articles, talk pages) and without 'Sprachliche Umbrüche' (45/68) (cf. Kupietz & Keibel 2009). The tables are presented using DataTables (a plug-in for jQuery), with Ajax requests used to fetch the tables asynchronously from the database. Using the tool presupposes familiarity with the annotation of the corpora in the DGD.
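On the server side, a setup like the one described above typically exposes an endpoint that answers DataTables' Ajax paging requests with JSON rows. The sketch below shows one plausible shape of such an endpoint; the route, database schema and field names are invented for illustration:

```python
# One plausible server-side shape for the setup described above: DataTables
# on the client sends Ajax paging requests ("start"/"length"), and the
# endpoint answers with table rows as JSON. Route, database schema and
# field names are invented for illustration.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/frequencies")
def frequencies():
    start = int(request.args.get("start", 0))      # paging offset sent by DataTables
    length = int(request.args.get("length", 25))   # page size sent by DataTables
    con = sqlite3.connect("legede.db")
    rows = con.execute(
        "SELECT lemma, folk_freq, dereko_freq FROM frequencies LIMIT ? OFFSET ?",
        (length, start),
    ).fetchall()
    con.close()
    return jsonify({"data": rows})

# app.run()  # the DataTables "ajax" option would then point at /api/frequencies
```

Serving pages on demand like this is what makes the asynchronous table display scale to large frequency lists.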
We present the annotation of information structure in the MULI project. To learn more about the means of information structuring in prosody, syntax and discourse, theory-independent features were defined for each level. We describe the features and illustrate them with an example sentence. To investigate the interplay of features, the representation has to allow all three layers to be inspected at the same time. This is realised by a stand-off XML mark-up with the word as the basic unit. The theory-neutral XML stand-off annotation allows this resource to be integrated with other linguistic resources such as the Tiger Treebank for German or the Penn Treebank for English.
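The following toy example illustrates word-level stand-off annotation in the spirit of that description: the primary text is tokenized once, and each annotation layer references word IDs instead of embedding markup in the text. The XML vocabulary is invented; only the three layer names mirror the paper's annotation levels:

```python
# Tiny illustration of word-level stand-off annotation: the primary text is
# tokenized once, and each layer references word IDs rather than embedding
# markup in the text. The XML vocabulary is invented; only the three layer
# names mirror the paper's annotation levels.
from lxml import etree

words = {"w1": "Peter", "w2": "liest", "w3": "das", "w4": "Buch"}

layers = {
    "prosody":   [("w2", "accent", "nuclear")],
    "syntax":    [("w1", "function", "subject")],
    "discourse": [("w4", "givenness", "new")],
}

root = etree.Element("muli")
tokens = etree.SubElement(root, "tokens")          # the base layer: words with IDs
for wid, form in words.items():
    etree.SubElement(tokens, "w", id=wid).text = form
for layer, annos in layers.items():                # stand-off layers point at IDs
    node = etree.SubElement(root, "layer", name=layer)
    for wid, feature, value in annos:
        etree.SubElement(node, "anno", ref=wid, feature=feature, value=value)

print(etree.tostring(root, pretty_print=True).decode())
```

Because every layer shares the word IDs, all three levels can be inspected together, which is exactly what the interplay analysis requires.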
The KorAP project ("Korpusanalyseplattform der nächsten Generation", "corpus analysis platform of the next generation"), carried out at the Institut für Deutsche Sprache (IDS) in Mannheim, Germany, has as its goal the development of a modern, state-of-the-art corpus analysis platform, capable of handling very large corpora and opening perspectives for innovative linguistic research. The platform will facilitate new linguistic findings by making it possible to manage and analyse extremely large amounts of primary data and annotations, while at the same time allowing an undistorted view of the primary un-annotated text, and thus fully satisfying the expectations associated with a scientific tool. The project started in July 2011 and is funded until June 2014. The demo presentation in December will be the first version following a preliminary feature freeze and will open the alpha-testing phase of the project.
It is well known that the distribution of lexical and grammatical patterns is size- and register-sensitive (Biber 1986, and later publications). This fact alone presents a challenge to many corpus-oriented linguistic studies focusing on a single language. When it comes to cross-linguistic studies using corpora, the challenge becomes even greater due to the lack of high-quality multilingual corpora that are comparable with respect to size and register (Kupietz et al. 2020; Kupietz/Trawiński 2022). This was the motivation for the creation of the European Reference Corpus EuReCo, an initiative started in 2013 at the Leibniz Institute for the German Language (IDS) together with several European partners (Kupietz et al. 2020). EuReCo is an emerging federated corpus comprising large virtual comparable corpora across various languages, with an infrastructure supporting contrastive research. The core of the infrastructure is KorAP (Diewald et al. 2016), a scalable open-source platform supporting the analysis and visualisation of properties of texts annotated by multiple and potentially conflicting information layers, as well as several corpus query languages. Until recently, EuReCo consisted of three monolingual subparts: the German Reference Corpus DeReKo (Kupietz et al. 2018), the Reference Corpus of Contemporary Romanian Language (Barbu Mititelu/Tufiş/Irimia 2018), and the Hungarian National Corpus (Váradi 2002). The goal of the present submission is twofold. On the one hand, it reports on the new component of EuReCo: a sample of the National Corpus of Polish (Przepiórkowski et al. 2010). On the other hand, it presents the results of a new pilot study using the newly extended EuReCo. This pilot study investigates selected Polish collocations involving light verbs and their prepositional/nominal complements (Fig. 1) and extends the collocation analyses of German, Romanian and Hungarian (Fig. 2) discussed in Kupietz/Trawiński (2022).
The present article describes the first stage of the KorAP project, launched recently at the Institut für Deutsche Sprache (IDS) in Mannheim, Germany. The aim of this project is to develop an innovative corpus analysis platform to tackle the increasing demands of modern linguistic research. The platform will facilitate new linguistic findings by making it possible to manage and analyse primary data and annotations in the petabyte range, while at the same time allowing an undistorted view of the primary linguistic data, and thus fully satisfying the demands of a scientific tool. An additional important aim of the project is to make corpus data as openly accessible as possible in light of unavoidable legal restrictions, for instance through support for distributed virtual corpora, user-defined annotations and adaptable user interfaces, as well as interfaces and sandboxes for user-supplied analysis applications. We discuss our motivation for undertaking this endeavour and the challenges that face it. Next, we outline our software implementation plan and describe development to date.
The present paper describes the Corpus Query Lingua Franca (ISO CQLF), a specification designed at ISO Technical Committee 37, Subcommittee 4 "Language resource management" for the purpose of facilitating the comparison of properties of corpus query languages. We outline the motivation for this endeavour and present its aims and general architecture. CQLF is intended as a multi-part specification; here, we concentrate on the basic metamodel that provides a frame into which the other parts fit.
CoMParS is a resource under construction in the context of the long-term project German Grammar in European Comparison (GDE) at the IDS Mannheim. The principal goal of GDE is to create a novel contrastive grammar of German against the background of other European languages. Alongside German, which is the central focus, the core languages for comparison are English, French, Hungarian and Polish, representing different typological classes. Unlike traditional contrastive grammars of German, which usually cover language pairs and are based on formal grammatical categories, the new GDE grammar is developed in the spirit of functionalist typology. This implies that, instead of formal criteria, cognitively motivated functional domains in the sense of Givón (1984) are used as tertia comparationis. The purpose of CoMParS is to document the empirical basis of the theoretical assumptions of GDE-V and to illustrate the otherwise rather abstract content of grammar books with as many naturally occurring, adequately presented multilingual examples as possible, including information on their use in specific contexts and registers. These examples come from existing parallel corpora, and our presentation focuses on the legal aspects and consequences of this choice of language data.
In this paper we examine moralizing speech acts, by which we understand discourse-strategic procedures in which the description of contested issues and required actions is tightly coupled to moral concepts. Vocabulary invoking moral values (such as 'Freiheit' [freedom], 'Sicherheit' [security] or 'Glaubwürdigkeit' [credibility]) is used to push through a demand that thereby appears incontestable and requires no further justification. Our focus is accordingly on a phenomenon that is striking from a pragma-linguistic point of view: a specific discursive practice of ultimate justification, or incontestability, which we conceive of and describe as a pragmeme. To this end, we first sketch an approach to practices of moralization situated in linguistic pragmatics, examine linguistic forms of moralizing and their structural embedding in sentence and text (i.e. co-textual and pragma-syntactic structural embeddings), and then formulate hypotheses about their contextual effects. On this basis, we finally derive structural patterns of moralizing from exemplary corpus evidence, condense them into the philosophical-linguistic technical term 'pragmeme', and operationalize them by means of qualitative and quantitative analyses.
In this paper we examine moralizing speech acts, by which we understand discourse-strategic procedures in which the description of contested issues and required actions is tightly coupled to moral concepts. Vocabulary invoking moral values (such as 'Freiheit' [freedom], 'Sicherheit' [security] or 'Glaubwürdigkeit' [credibility]) is used to push through a demand that thereby appears incontestable and requires no further justification. Our focus is accordingly on a phenomenon that is striking from a pragma-linguistic point of view: a specific discursive practice of ultimate justification, or incontestability, which we conceive of and describe as a pragmeme. To this end, we first sketch an approach to practices of moralization situated in linguistic pragmatics, examine linguistic forms of moralizing and their co-textual and, in particular, pragma-syntactic structural embeddings, and then formulate hypotheses about their contextual effects. On this basis, we finally derive structural patterns of moralizing from exemplary corpus evidence, condense them into the term 'pragmeme', and operationalize them by means of qualitative and quantitative analyses.
Tagset und Richtlinie für das PoS-Tagging von Sprachdaten aus Genres internetbasierter Kommunikation
(2015)
The paper presents best practices and results from projects in four countries dedicated to the creation of corpora of computer-mediated communication and social media interactions (CMC). Even though there are still many open issues related to building and annotating corpora of that type, there already exists a range of accessible solutions which have been tested in projects and which may serve as a starting point for a more precise discussion of how future standards for CMC corpora may (and should) be shaped.
The paper presents best practices and results from projects dedicated to the creation of corpora of computer-mediated communication and social media interactions (CMC) from four different countries. Even though there are still many open issues related to building and annotating corpora of this type, there already exists a range of tested solutions which may serve as a starting point for a comprehensive discussion of how future standards for CMC corpora could (and should) be shaped.
Converting and Representing Social Media Corpora into TEI: Schema and best practices from CLARIN-D
(2016)
The paper presents results from a curation project within CLARIN-D, in which an existing 1M-word corpus of German chat communication has been integrated into the DeReKo and DWDS corpus infrastructures of the CLARIN-D centres at the Institute for the German Language (IDS, Mannheim) and at the Berlin-Brandenburg Academy of Sciences (BBAW, Berlin). The focus is on the solutions developed for converting and representing the corpus in a TEI format.
The paper reports the results of the curation project ChatCorpus2CLARIN. The goal of the project was to develop a workflow and resources for the integration of an existing chat corpus into the CLARIN-D research infrastructure for language resources and tools in the Humanities and the Social Sciences (http://clarin-d.de). The paper presents an overview of the resources and practices developed in the project, describes the added value of the resource after its integration and discusses, as an outlook, to what extent these practices can be considered best practices which may be useful for the annotation and representation of other CMC and social media corpora.
MoCoDa 2 (https://db.mocoda2.de) is a web-based infrastructure for the collection, preparation, provision and querying of language data from private messenger communication (WhatsApp and similar applications). Its central components are (1) a database set up to manage WhatsApp sequences that have been donated by users and prepared for linguistic search and analysis, (2) a web frontend that supports donors in enriching donated sequences with analysis-relevant metadata and in pseudonymizing them, and (3) a web frontend through which the data can be queried for research and teaching purposes. The development of the MoCoDa 2 infrastructure was funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia within the program "Infrastrukturelle Förderung für die Geistes- und Gesellschaftswissenschaften". The goal of the project is to provide a curated corpus of language and interaction in German-language messenger communication that forms a valuable basis especially for qualitative studies.