Dictionaries have been part and parcel of literate societies for many centuries. They assist in communication, particularly across languages, by helping users understand, create, and translate texts. Communication problems arise whenever a native speaker of one language comes into contact with a speaker of another. At the same time, English has established itself as a lingua franca of international communication. This development gives the lexicography of English particular significance, as English dictionaries are used intensively and extensively by huge numbers of people worldwide.
When comparing different tools in the field of natural language processing (NLP), the quality of their results usually has first priority. This is also true for tokenization. In the context of large and diverse corpora for linguistic research purposes, however, other criteria also play a role – not least sufficient speed to process the data in an acceptable amount of time. In this paper we evaluate several state-of-the-art tokenization tools for German – including our own – with regard to these criteria. We conclude that while not all tools are applicable in this setting, no compromises regarding quality need to be made.
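Speed on large corpora is, at its core, a throughput question. The sketch below shows one simple way to measure it; the timing harness, the `measure_throughput` name, and the corpus file are illustrative assumptions, not the evaluation setup used in the paper.

```python
# Illustrative throughput measurement for a tokenizer; the tools and
# corpora evaluated in the paper are, of course, different.
import time

def measure_throughput(tokenize, text: str) -> float:
    """Return throughput in megabytes per second for one tokenizer run."""
    start = time.perf_counter()
    n_tokens = sum(1 for _ in tokenize(text))  # also consumes lazy token streams
    elapsed = time.perf_counter() - start
    mb = len(text.encode("utf-8")) / 1e6
    print(f"{n_tokens} tokens, {mb:.2f} MB, {mb / elapsed:.1f} MB/s")
    return mb / elapsed

# Usage with any callable that returns or yields tokens, e.g.:
# measure_throughput(str.split, open("corpus.txt", encoding="utf-8").read())
```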
This paper presents an algorithm and an implementation for efficient tokenization of texts of space-delimited languages based on a deterministic finite state automaton. Two representations of the underlying data structure are presented and a model implementation for German is compared with state-of-the-art approaches. The presented solution is faster than other tools while maintaining comparable quality.
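As a rough illustration of the approach (not the paper's implementation), the sketch below runs a two-state deterministic automaton once over the input; the state names, character classes, and function names are invented for this example. A real tokenizer would also distinguish punctuation, digits, abbreviations, and so on.

```python
# Minimal sketch of a DFA-based tokenizer for space-delimited text.
# This shows the general technique only; the paper's actual automaton
# and character classes are more elaborate.

def char_class(ch: str) -> str:
    """Map a character to a coarse class."""
    return "space" if ch.isspace() else "char"

# Transition table: (state, character class) -> next state.
TRANSITIONS = {
    ("between", "char"): "inside",    # start a new token
    ("between", "space"): "between",  # skip consecutive whitespace
    ("inside", "char"): "inside",     # extend the current token
    ("inside", "space"): "between",   # token boundary reached
}

def tokenize(text: str):
    """Yield tokens by running the automaton once over the input."""
    state, start = "between", 0
    for i, ch in enumerate(text):
        new_state = TRANSITIONS[(state, char_class(ch))]
        if state == "between" and new_state == "inside":
            start = i                   # token begins here
        elif state == "inside" and new_state == "between":
            yield text[start:i]         # token ends before the space
        state = new_state
    if state == "inside":               # flush a trailing token
        yield text[start:]

print(list(tokenize("Ein kleiner Test .")))  # ['Ein', 'kleiner', 'Test', '.']
```

Because each character triggers exactly one table lookup, the automaton makes a single pass over the input, which is what makes this style of tokenizer fast on large corpora.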
In this article, we describe a user support solution for the digital humanities. As a case study, we trace the development of the CLARIN-D Helpdesk from 2013 into the current support solution, which has since been extended to several other CLARIN-related software packages and projects as well as to the DARIAH ERIC. Furthermore, we describe the path towards a common support platform for CLARIAH-DE, which is currently in its final phase. We hope to expand the helpdesk further in the coming years so that it can act as a hub for user support and a central knowledge resource for the digital humanities, not only in Germany but also across Europe and perhaps, at some point, worldwide.
The teaching of languages for specific purposes is becoming ever more relevant in today's European society, which is characterised by 'movements' of various kinds. At the same time, learner groups are becoming increasingly differentiated, and teachers, who are usually not experts in the subject area, struggle to design learner-appropriate courses, since opportunities for training or continuing education are rare. Many questions remain open or have only been partially answered, and a uniform answer is not always possible; nevertheless, instead of presenting problem cases, we attempt to present experiments and solutions. We want to show how, and with which means and tools, languages for specific purposes can be described, and what effects this can have in the classroom. After an overview of the various ways of defining 'language for specific purposes' (Fachsprache), we show what consequences the different emphases can have for teaching. Finally, we present a small corpus-linguistic experiment (a corpus of the articles on the thematic focus 'Fachsprache', ZIF 2019-1) in order to offer suggestions for the use of corpora, since corpora can have a positive effect in all phases of teaching (before, during, and after class), for teachers and learners alike.
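As a hedged illustration of the kind of corpus exploration such an experiment involves, the sketch below compares word-form frequencies in a small specialised corpus against a reference corpus to surface candidate technical terms. The file names and the crude keyness ratio are assumptions made for this example; a real study would use lemmatisation, proper tokenization, and an established keyness measure.

```python
# Toy frequency-based look at a small specialised corpus. File names
# are hypothetical placeholders for a specialised and a reference corpus.
from collections import Counter
import re

def word_frequencies(text: str) -> Counter:
    """Lowercased word-form frequencies (very rough tokenization)."""
    return Counter(re.findall(r"\w+", text.lower()))

specialised = word_frequencies(open("zif_2019_1.txt", encoding="utf-8").read())
reference   = word_frequencies(open("reference.txt", encoding="utf-8").read())

# Words markedly more frequent in the specialised corpus are candidate
# technical terms; the ratio below is a crude keyness score (+1 smoothing).
total_s, total_r = sum(specialised.values()), sum(reference.values())
keyness = {
    w: (specialised[w] / total_s) / ((reference[w] + 1) / total_r)
    for w in specialised
}
for word, score in sorted(keyness.items(), key=lambda kv: -kv[1])[:20]:
    print(f"{word}\t{score:.1f}")
```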
The possibility of electronically searching very large text corpora has opened up ways in which we can truly evaluate the rules through which grammarians have tried, and continue to try, to model natural languages. However, the possibility of handling such large amounts of text may lead to problems in assessing certain phenomena that are hardly ever represented in those corpora and yet have always been regarded as grammatically correct elements of a given language. In German, typical phenomena of this kind are forms like betrögest or erwögest, i.e. second person singular forms of the so-called strong verbs in the subjunctive mood. Should we see them merely as grammarians' inventions? Before doing so, we should reconsider the nature of these phenomena. They may appear to be isolated word forms but are, in fact, compact realizations of syntactic constructions, and it is the frequency of these constructions that should be evaluated, not the frequency of their specific realizations.
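The contrast can be made concrete with a toy count: a specific realization such as betrögest may be vanishingly rare, while the construction it realizes (second person singular Konjunktiv II, including the common periphrasis with würdest) is not. The corpus string and the regular expressions below are deliberately simplistic and serve only to illustrate the distinction.

```python
# Toy illustration: the specific word form is rare, but the construction
# it realizes is not. The patterns are deliberately simplistic.
import re

corpus = (
    "Wenn du mich betrögest, wäre ich enttäuscht. "
    "Wenn du mich betrügen würdest, wäre ich enttäuscht. "
    "Würdest du das wirklich erwägen?"
)

realization = len(re.findall(r"\bbetrögest\b", corpus))
# Crude proxy for the 2sg Konjunktiv II construction: a synthetic
# -ögest subjunctive form or the periphrastic 'würdest'.
construction = len(re.findall(r"\b\w+ögest\b|\bwürdest\b", corpus, re.IGNORECASE))

print(f"specific realization 'betrögest': {realization}")   # 1
print(f"construction (incl. periphrasis): {construction}")  # 3
```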
The paper reports on the results of a scientific colloquium dedicated to the creation of standards and best practices needed to facilitate the integration of CMC language resources of different origins and the linguistic analysis of CMC phenomena across languages and genres. The key issue to be solved is interoperability – with respect to the structural representation of CMC genres, linguistic annotations, metadata, and anonymization/pseudonymization schemas. The objective of the paper is to convince more projects to take part in a discussion about standards for CMC corpora and for the creation of a CMC corpus infrastructure across languages and genres. In view of the broad range of corpus projects currently underway all over Europe, there is a great window of opportunity for the creation of such standards in a bottom-up approach.
As a consequence of a recent curation project, the Dortmund Chat Corpus is available in CLARIN-D research infrastructures for download and querying. A legal expert opinion had recommended that standard anonymisation measures be applied to the corpus before its republication. This paper reports on the anonymisation campaign that was conducted for the corpus. Anonymisation was realised as categorisation: the taxonomy of anonymisation categories applied is introduced, and the method of applying it to the TEI files is demonstrated. The results of the anonymisation campaign as well as issues of quality assessment are discussed. Finally, pseudonymisation is discussed as an alternative to categorisation for the anonymisation of CMC data, along with possibilities for automating the process.
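As a minimal sketch of anonymisation as categorisation, the snippet below replaces entities marked up in a TEI-like fragment with category placeholders. The element structure, the type attribute values, and the placeholder format are invented for this illustration and do not reflect the corpus's actual taxonomy or TEI schema.

```python
# Minimal sketch: replace marked named entities in a TEI-like snippet
# with category placeholders. Tag and category names are invented.
import xml.etree.ElementTree as ET

SNIPPET = """<post xmlns="http://www.tei-c.org/ns/1.0">
  <p>Hallo <name type="person">Maxi</name>, bist du aus
     <name type="place">Dortmund</name>?</p>
</post>"""

TEI = "{http://www.tei-c.org/ns/1.0}"
root = ET.fromstring(SNIPPET)

for name in root.iter(f"{TEI}name"):
    category = name.get("type", "entity").upper()
    name.text = f"[{category}]"      # replace the content, keep the markup

print(ET.tostring(root, encoding="unicode"))
```

Keeping the markup while overwriting only the element content is what makes the result a categorisation rather than a deletion: the corpus still records that a person or place was mentioned, just not which one.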
The American English and German diphthongs /ai/ and /au/, observed in cognates such as Wein/wine and Haus/house, are usually treated on a par and represented with the same initial vowel (cf. [ai], [au] for Am. Engl. and German [1]). Yet acoustic measurements indicate differences, as the relevant trajectories characteristically cross in Am. Engl. but not in German. These data may be consistent with the same initial target for these diphthongs in German, supporting the choice of the same symbol /a/ in the phonemic representation, as opposed to distinct targets (and distinct initial phonemes) in American English.
Reading corpora are text collections that are enriched with processing data. From a corpus linguist's perspective, they can be seen as an extension of classical linguistic corpora with human language processing behavior. From a psycholinguist's perspective, reading corpora make it possible to test psycholinguistic hypotheses on language and language processing as it occurs 'in the wild' – in contrast to the strictly controlled material of isolated sentences used in most psycholinguistic experiments. In this paper, we investigate a relevance-based account of language processing, which states that linguistic structures that are more deeply embedded syntactically are read faster because readers allocate less attention to them.
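At its core, the hypothesis is a regression question: does syntactic embedding depth predict shorter reading times? The sketch below fits a simple least-squares line to made-up numbers purely to show the shape of the test; a real reading-corpus analysis would control for word length and frequency and would typically use mixed-effects models.

```python
# Minimal sketch of the regression behind the hypothesis: does deeper
# syntactic embedding predict shorter reading times? Numbers are made up.
import numpy as np

embedding_depth = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)
reading_time_ms = np.array([310, 295, 280, 275, 255, 260, 240, 245], dtype=float)

# Ordinary least squares: reading_time = a + b * depth.
X = np.column_stack([np.ones_like(embedding_depth), embedding_depth])
(a, b), *_ = np.linalg.lstsq(X, reading_time_ms, rcond=None)

print(f"intercept = {a:.1f} ms, slope = {b:.1f} ms per embedding level")
# A negative slope is consistent with the relevance-based account.
```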