This paper presents the prototype of a lexicographic resource for spoken German in interaction, which was conceived within the framework of the LeGeDe project (LeGeDe = Lexik des gesprochenen Deutsch). First of all, it summarizes the theoretical and methodological approaches that were used for the initial planning of the resource. The headword candidates were selected by analyzing corpus-based data; to this end, data from two corpora (written and spoken German) were compared using quantitative methods. The information gathered on the selected headword candidates can be assigned to two different sections: meanings and functions in interaction.
Additionally, two studies on the expectations of future users towards the resource were carried out. The results of these two studies were also taken into account in the development of the prototype. Focusing on the presentation of the resource’s content, the paper shows both the different lexicographical information in selected dictionary entries, and the information offered by the provided hyperlinks and external texts. As a conclusion, it summarizes the most important innovative aspects that were specifically developed for the implementation of such a resource.
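Comparisons of word frequencies between a written and a spoken corpus are commonly operationalized with a keyness statistic. The abstract does not name the statistic used, so purely as an illustration, here is a minimal log-likelihood (G²) sketch; all frequency counts are invented:

```python
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Log-likelihood ratio (G2) for a word's frequency in two corpora.

    freq_a/freq_b are the word's counts, size_a/size_b the corpus sizes.
    """
    total = size_a + size_b
    # Expected frequencies under the null hypothesis that the word has
    # the same relative frequency in both corpora
    expected_a = size_a * (freq_a + freq_b) / total
    expected_b = size_b * (freq_a + freq_b) / total
    g2 = 0.0
    for observed, expected in ((freq_a, expected_a), (freq_b, expected_b)):
        if observed > 0:
            g2 += observed * math.log(observed / expected)
    return 2 * g2

# A word that is proportionally much more frequent in the spoken corpus
g2 = log_likelihood(500, 1_000_000, 100, 2_000_000)
```

A high G² value marks a headword candidate whose frequency differs markedly between the two corpora; proportionally equal frequencies yield a value near zero.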
Ph@ttSessionz and Deutsch heute are two large German speech databases. They were created for different purposes: Ph@ttSessionz to test Internet-based recordings and to adapt speech recognizers to the voices of adolescent speakers, Deutsch heute to document regional variation of German. The databases differ in their recording technique, the selection of recording locations and speakers, elicitation mode, and data processing.
In this paper, we outline how the recordings were performed, how the data was processed and annotated, and how the two databases were imported into a single relational database system. We present acoustical measurements on the digit items of both databases. Our results confirm that the elicitation technique affects the speech produced, that f0 is quite comparable despite different recording procedures, and that large speech technology databases with suitable metadata may well be used for the analysis of regional variation of speech.
Previously, there have been several attempts to annotate the communicative functions of verbal feedback utterances in English. Here, we suggest an annotation scheme for verbal and non-verbal feedback utterances in French including the categories base, attitude, previous and visual. The data comprises conversations, maptasks and negotiations from which we extracted ca. 13,000 candidate feedback utterances and gestures. 12 students were recruited for the annotation campaign of ca. 9,500 instances. Each instance was annotated by between 2 and 7 raters. The evaluation of the annotation agreement resulted in an average best-pair kappa of 0.6. While the base category, with the values acknowledgement, evaluation, answer, elicit and other, achieves good agreement, this is not the case for the other main categories. The data sets, which also include automatic extractions of lexical, positional and acoustic features, are freely available and will further be used for machine learning classification experiments to analyse the form-function relationship of feedback.
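Pairwise annotation agreement of the kind reported above is typically chance-corrected. The study reports a best-pair kappa over multiple raters; as a simpler illustration of the underlying measure, here is a minimal Cohen's kappa for exactly two raters, with invented labels and counts:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labelling the same instances."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each rater's marginal label distribution
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[l] * counts_b[l] for l in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

rater1 = ["ack", "ack", "eval", "answer", "ack", "eval"]
rater2 = ["ack", "eval", "eval", "answer", "ack", "ack"]
kappa = cohens_kappa(rater1, rater2)
```

Kappa of 1.0 means perfect agreement, 0 means agreement no better than chance; values around 0.6, as reported for the base category, are conventionally read as good agreement.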
The main objective of this article is to describe the current activities at the Mannheim Institute for German Language regarding the implementation of a domain-specific ontology for German grammar. We differentiate ontology bases from ontology management systems, point out the benefits of database-driven solutions, and go step by step through all phases of the ontology lifecycle. In order to demonstrate the practical use of our approach, we outline the interface between our ontology and the grammis web information system, and compare the ontology-based retrieval mechanism with traditional full text search.
We present a descriptive analysis of the two datasets from the shared task on Source, Subjective Expression and Target Extraction from Political Speeches (STEPS), the only existing German dataset of its size for opinion role extraction. Our analysis discusses the individual properties of the three components (subjective expressions, sources and targets) and their relations to each other. Our observations should help practitioners and researchers when building a system to extract opinion roles from German data.
The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is the representation of the blog discourse structure and the relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information which are directly or indirectly provided in the blog or can be derived by means of statistical analysis or computational tools. At this point, only directly available information (e.g. title of the blog post, name of the blogger, etc.) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure and related research questions as well as for the development of NLP methods and techniques (e.g. for authorship detection).
We present an implemented XML data model and a new, simplified query language for multi-level annotated corpora. The new query language involves automatic conversion of queries into the underlying, more complicated MMAXQL query language. It supports queries for sequential and hierarchical, but also associative (e.g. coreferential) relations. The simplified query language has been designed with non-expert users in mind.
Linguistic query systems are special purpose IR applications. We present a novel state-of-the-art approach for the efficient exploitation of very large linguistic corpora, combining the advantages of relational database management systems (RDBMS) with the functional MapReduce programming model. Our implementation uses the German DEREKO reference corpus with multi-layer linguistic annotations and several types of text-specific metadata, but the proposed strategy is language-independent and adaptable to large-scale multilingual corpora.
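The MapReduce programming model combined here with an RDBMS can be shown in miniature. The following sketch illustrates only the generic map/shuffle/reduce pattern on a toy word count in plain Python; it is not the paper's actual DEREKO implementation, and the helper names are invented:

```python
from collections import defaultdict
from functools import reduce

def map_phase(document):
    """Emit (token, 1) pairs for every token in a document."""
    return [(token.lower(), 1) for token in document.split()]

def shuffle(pairs):
    """Group intermediate values by key, as the MapReduce runtime would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the grouped counts for each key."""
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

docs = ["der Hund", "der Hund und die Katze"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
```

In a real deployment the map and reduce phases run in parallel over corpus partitions, which is what makes the model attractive for very large corpora.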
So far, there have been few descriptions of structures capable of storing lexicographic data, ISO 24613:2008 being one of the latest. Another one is by Spohr (2012), who designs a multifunctional lexical resource that is able to store data of different types of dictionaries in a user-oriented way. Technically, his design is based on the principle of a hierarchical XML/OWL (eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route, describing a model based on entities and the relations between them; MySQL, a relational database management system built on SQL (Structured Query Language), stores the data in tables together with definitions of the relations between them. The model was developed in the context of the project "Scientific eLexicography for Africa", and the resulting lexicographic database will be implemented with MySQL. The principles of the ISO model and of Spohr's model are adhered to, with one major difference in the implementation strategy: we do not place the lemma in the centre of attention, but the sense description; all other elements, including the lemma, depend on the sense description. This article also describes the lexicographic data sets contained in the database and how they have been collected from different sources. As our aim is to compile several prototypical internet dictionaries (a monolingual Northern Sotho dictionary, a bilingual learners' Xhosa–English dictionary and a bilingual Zulu–English dictionary), we describe the necessary microstructural elements for each of them and the principles we adhere to when designing different ways of accessing them. We plan to make the model and the (empty) database, with all graphical user interfaces that have been developed, freely available by mid-2015.
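A sense-centred entity-relation design of the kind described can be sketched as a relational schema. The following minimal illustration uses SQLite (for self-containment) rather than the project's MySQL, and the table and column names are invented, not taken from the actual database:

```python
import sqlite3

# In-memory database; in the project itself this would be a MySQL server.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- The sense description is the central entity
    CREATE TABLE sense (
        sense_id    INTEGER PRIMARY KEY,
        description TEXT NOT NULL
    );
    -- Lemmas depend on senses, not the other way around
    CREATE TABLE lemma (
        lemma_id INTEGER PRIMARY KEY,
        form     TEXT NOT NULL,
        language TEXT NOT NULL,
        sense_id INTEGER NOT NULL REFERENCES sense(sense_id)
    );
""")
conn.execute("INSERT INTO sense VALUES (1, 'domesticated canine')")
conn.execute("INSERT INTO lemma VALUES (1, 'inja', 'zu', 1)")
conn.execute("INSERT INTO lemma VALUES (2, 'dog', 'en', 1)")
# All lemmas attached to one sense, across languages
rows = conn.execute(
    "SELECT form FROM lemma WHERE sense_id = 1 ORDER BY form"
).fetchall()
```

Because the foreign key points from lemma to sense, several lemmas in different languages can share one sense description, which is the inversion of the lemma-centred design the article argues against.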
We present a gold standard for semantic relation extraction in the food domain for German. The relation types that we address are motivated by scenarios for which IT applications present a commercial potential, such as virtual customer advice in which a virtual agent assists a customer in a supermarket in finding those products that satisfy their needs best. Moreover, we focus on those relation types that can be extracted from natural language text corpora, ideally content from the internet, such as web forums, that are easy to retrieve. A typical relation type that meets these requirements are pairs of food items that are usually consumed together. Such a relation type could be used by a virtual agent to suggest additional products available in a shop that would potentially complement the items a customer has already in their shopping cart. Our gold standard comprises structural data, i.e. relation tables, which encode relation instances. These tables are vital in order to evaluate natural language processing systems that extract those relations.
We present a testsuite for POS tagging German web data. Our testsuite provides the original raw text as well as the gold tokenisations and is annotated for parts-of-speech. The testsuite includes a new dataset for German tweets, with a current size of 3,940 tokens. To increase the size of the data, we harmonised the annotations in already existing web corpora, based on the Stuttgart-Tübingen Tag Set. The current version of the corpus has an overall size of 48,344 tokens of web data, around half of it from Twitter. We also present experiments, showing how different experimental setups (training set size, additional out-of-domain training data, self-training) influence the accuracy of the taggers. All resources and models will be made publicly available to the research community.
One of the fundamental questions about human language is whether all languages are equally complex. Here, we approach this question from an information-theoretic perspective. We present a large scale quantitative cross-linguistic analysis of written language by training a language model on more than 6500 different documents as represented in 41 multilingual text collections consisting of ~ 3.5 billion words or ~ 9.0 billion characters and covering 2069 different languages that are spoken as a native language by more than 90% of the world population. We statistically infer the entropy of each language model as an index of what we call average prediction complexity. We compare complexity rankings across corpora and show that a language that tends to be more complex than another language in one corpus also tends to be more complex in another corpus. In addition, we show that speaker population size predicts entropy. We argue that both results constitute evidence against the equi-complexity hypothesis from an information-theoretic perspective.
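Entropy as an index of prediction complexity can be made concrete with a toy example. The study infers entropy from trained language models over billions of characters; the following is only a character-level unigram illustration of the quantity itself, not the paper's estimator:

```python
import math
from collections import Counter

def unigram_entropy(text):
    """Shannon entropy (bits per character) of the character distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive string is more predictable (lower entropy) than a varied one
low = unigram_entropy("aaaaaaaa")
high = unigram_entropy("abcdefgh")
```

Under this reading, a "more complex" language is one whose text is harder to predict character by character, i.e. one with higher per-character entropy under a fitted model.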
We apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NP- and non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron’s (2002) manually tuned system.
Creating and maintaining metadata for various kinds of resources requires appropriate tools to assist the user. The paper presents the metadata editor ProFormA for the creation and editing of CMDI (Component Metadata Infrastructure) metadata in web forms. This editor supports a number of CMDI profiles currently being provided for different types of resources. Since the editor is based on XForms and server-side processing, users can create and modify CMDI files in their standard browser without the need for further processing. Large parts of ProFormA are implemented as web services in order to reuse them in other contexts and programs.
In this paper we present a new approach to lexicographical design for the description of German speech act verbs. This approach is based on an action-theoretical semantic conception. The several conditions for linguistic action provide the basis for the elaboration of the central semantic features. The systematic relationship of these features is reflected in the organization of a lexical database which allows various possibilities of access to different types of lexical information.
In the following paper we shall give an outline of the semantic framework for describing speech act verbs, i.e. verbs of communication, with the practical goal of a semantic database for a (dictionary of) synonymy of German speech act verbs which enables the user not only to find a list of synonymous verbs but also to gain an insight into the semantic relations between the words.
The semantic framework is based on
(i) a set of conditions for performing speech acts as the relevant domain of reference
(ii) the introduction of a notion of situation, or better, of a type of situation
The performative as well as the descriptive use of the verbs can be reduced to their fundamental dependency on the situations in which they are used: on the one hand with regard to the possibility of the action itself, and on the other hand with regard to the possibility of their designation. For both ways of use the relevant aspects of the situation constitute the necessary conditions.
One of the most popular techniques used in HPSG-based studies to describe linguistic phenomena is the raising mechanism. Besides ordinary raising verbs or adjectives, this tool has been applied for handling verbal complexes and discontinuous constituents, among other phenomena. In this paper, a new application for raising within the HPSG paradigm will be discussed, thereby investigating data from the prepositional domain. We will analyze linguistic properties of word combinations in German consisting of a preposition, a noun, and another preposition (such as auf Grund von (‘by virtue of’)), thus arguing that raising is the most appropriate method for satisfactorily describing the crucial syntactic features which are typical for those expressions. The objective of this paper is thus to demonstrate the efficiency of the raising mechanism as used in HPSG, and therefore, to emphasize the importance of designing a satisfactory uniform theory of raising within this grammar framework.
Classical null hypothesis significance tests are not appropriate in corpus linguistics, because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios where it would be beneficial to have some kind of test in order to judge the relevance of a result (e.g. a difference between two corpora) by answering the question whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial and not due to chance. In this paper, I outline such a test.
The understanding of story variation, whether motivated by cultural currents or other factors, is important for applications of formal models of narrative such as story generation or story retrieval. We present the first stage of an experiment to elicit natural narrative variation data suitable for evaluation with respect to story similarity, for qualitative and quantitative analysis of story variation, and also for data processing. We also present a few preliminary results from the first stage of the experiment, using Red Riding Hood and Romeo and Juliet as base texts.
XML has been designed for creating structured documents, but the information that is encoded in these structures is, by definition, out of scope for XML. Additional sources that are normally not easily interpretable by computers, such as documentation, are needed to determine the intention of specific tags in a tag set. The Component Metadata Infrastructure (CMDI) takes a rather pragmatic approach to fostering interoperability between XML instances in the domain of metadata descriptions for language resources. This paper gives an overview of this approach.
This paper presents the current results of an ongoing research project on corpus distribution of prepositions and pronouns within Polish preposition-pronoun contractions. The goal of the project is to provide a quantitative description of Polish preposition-pronoun contractions taking into consideration morphosyntactic properties of their components. It is expected that the results will provide a basis for a revision of the traditionally assumed inflectional paradigms of Polish pronouns and, thus, for a possible remodeling of these paradigms. The results of corpus-based investigations of the distribution of prepositions within preposition-pronoun contractions can be used for grammar-theoretical and lexicographic purposes.
The present paper examines the relationship between pragmatics, semantics and grammar as subdisciplines of linguistics from three different perspectives. The first section gives a historical survey of their development during the 20th century and classifies linguistic schools according to their interest in different fields of research. The second part presents a systematic model of the field of objects to be investigated by linguistics, aiming at a more precise delimitation of its subdisciplines. Finally, in the third section, the division of labour between pragmatics, semantics and grammar is discussed in the light of the concrete example of verb valence.
This paper presents the system architecture as well as the underlying workflow of the Extensible Repository System of Digital Objects (ERDO), which has been developed for the sustainable archiving of language resources within the Tübingen CLARIN-D project. In contrast to other approaches focusing on archiving experts, the described workflow can be used by researchers without prior knowledge in the field of long-term storage for transferring data from their local file systems into a persistent repository.
This paper describes the lexical database tool LOLA (Linguistic-Oriented Lexical database Approach), which has been developed for the construction and maintenance of lexicons for the machine translation system LMT. First, the requirements such a tool should meet are discussed; then LMT, the lexical information it requires, and some issues concerning vocabulary acquisition are presented. Afterwards, the architecture and the components of the LOLA system are described, and it is shown how we tried to meet the requirements worked out earlier. Although LOLA was originally designed and implemented for the German-English LMT prototype, it has aimed from the beginning at a representation of lexical data that can be reused for other LMT or MT prototypes or even other NLP applications. A special point of discussion is therefore the adaptability of the tool and its components as well as the reusability of the lexical data stored in the database for lexicon development for LMT or for other applications.
Connectives are conjunctions, prepositions, adverbs and other particles which share the function of encoding semantic relations between sentences, or rather, between semantic objects some of which can be meanings of sentences. The relata linked by any such relation will fall into one of four distinct categories: they will be physical objects, states of affairs, propositions, or pragmatic options (the atoms of human interaction). Physical objects constitute the conceptual domain of space, states of affairs the domain of time, propositions the epistemic domain, and pragmatic options the deontic domain. The relations encodable in any of these domains can be divided into four basic types: similarity relations, situating relations, conditional relations, and causal relations. Conceptual domains and types of relations define the universe of possible connections between semantic objects.
Connectives differ as to the interpretations they permit in terms of conceptual domains and types of relations. Very few connectives are specialized on relata of one certain category and relations of one certain type. Possible examples in German are später (‘later on’) and zwischenzeitlich (‘in the meantime’), which encode situating relations between states of affairs. Other connectives are specialized on relata of one certain category, but are underspecified with respect to the type of relation. An example is German sobald (‘as soon as’), which can only connect states of affairs, but accepts situating, conditional and causal readings. Connectives of a third group are specialized on relations of a certain type, but are underspecified with respect to the category of the relata. Examples of this kind are German weil (‘because’) and trotzdem (‘nevertheless’), which encode causal relations, but accept states of affairs, propositions and pragmatic options as their relata. Connectives of a fourth group are underspecified both for the category of relata and the type of relation. An example is German da (‘there’), which accepts relata of any category and allows for situating, conditional and causal readings. Connectives like und (‘and’) and oder (‘or’) exhibit an even higher degree of underspecification, in that they allow for all kinds of relations and relata.
Feedback utterances are among the most frequent in dialogue. Feedback is also a crucial aspect of linguistic theories that take social interaction, involving language, into account. This paper introduces the corpora and datasets of a project scrutinizing this kind of feedback utterances in French. We present the genesis of the corpora (for a total of about 16 hours of transcribed and phone force-aligned speech) involved in the project. We introduce the resulting datasets and discuss how they are being used in on-going work with focus on the form-function relationship of conversational feedback. All the corpora created and the datasets produced in the framework of this project will be made available for research purposes.
A supervised learning approach for the extraction of opinion sources and targets from German text
(2019)
We present the first systematic supervised learning approach for the extraction of opinion sources and targets on German language data. A wide choice of different features is presented, particularly syntactic features and generalization features. We point out specific differences between opinion sources and targets. Moreover, we explain why implicit sources can be extracted even with fairly generic features. In order to ensure comparability, our classifier is trained and tested on the dataset of the STEPS shared task.
This paper presents a survey on hate speech detection. Given the steadily growing body of social media content, the amount of online hate speech is also increasing. Due to the massive scale of the web, methods that automatically detect hate speech are required. Our survey describes key areas that have been explored to automatically recognize these types of utterances using natural language processing. We also discuss limits of those approaches.
This paper presents a survey on the role of negation in sentiment analysis. Negation is a very common linguistic construction that affects polarity and, therefore, needs to be taken into consideration in sentiment analysis.
We will present various computational approaches modeling negation in sentiment analysis. We will, in particular, focus on aspects such as level of representation used for sentiment analysis, negation word detection and scope of negation. We will also discuss limits and challenges of negation modeling on that task.
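The interaction of negation detection and scope that the survey discusses can be sketched with a simple window-based scope model. This is only an illustration under strong simplifying assumptions (a fixed token window as scope, a tiny invented polarity lexicon); the surveyed systems use more elaborate, often syntactic, scope detection:

```python
NEGATORS = {"not", "never", "no"}
LEXICON = {"good": 1, "great": 1, "bad": -1, "terrible": -1}

def sentence_polarity(tokens, scope=3):
    """Sum token polarities, flipping any polar word that falls within
    `scope` tokens after a negation word."""
    score = 0
    negated_until = -1
    for i, tok in enumerate(tokens):
        if tok in NEGATORS:
            negated_until = i + scope
            continue
        polarity = LEXICON.get(tok, 0)
        if polarity and i <= negated_until:
            polarity = -polarity  # inside negation scope: flip
        score += polarity
    return score

plain = sentence_polarity("the movie was good".split())
negated = sentence_polarity("the movie was not good".split())
```

Even this toy model shows why negation cannot be ignored: the single word "not" inverts the sentence-level polarity.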
A syntax-based scheme for the annotation and segmentation of German spoken language interactions
(2018)
Unlike corpora of written language, where segmentation can largely be derived from orthographic punctuation marks, the basis for segmenting spoken language corpora is not predetermined by the primary data but has to be established by the corpus compilers. This impedes consistent querying and visualization of such data. Several ways of segmenting have been proposed, some of which are based on syntax. In this study, we developed and evaluated annotation and segmentation guidelines based on the topological field model for German and can show that these guidelines are applied consistently across annotators. We also investigated the influence of various interactional settings with a rather simple measure: the word count per segment and unit type. We observed that the word count and the distribution of each unit type differ across interactional settings. In conclusion, our syntax-based segmentations reflect interactional properties intrinsic to the social interactions in which participants are involved. This can be used for further analysis of social interaction and opens the possibility of automatic segmentation of transcripts.
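The word-count measure mentioned above is simple to compute once transcripts are segmented. A minimal sketch over segmented transcript data (the segment texts and unit-type labels below are invented for illustration, not taken from the study):

```python
from collections import defaultdict

def mean_words_per_segment(segments):
    """Average word count per segment, grouped by unit type.

    `segments` is an iterable of (unit_type, segment_text) pairs.
    """
    totals = defaultdict(lambda: [0, 0])  # unit_type -> [word_sum, n_segments]
    for unit_type, text in segments:
        totals[unit_type][0] += len(text.split())
        totals[unit_type][1] += 1
    return {t: words / count for t, (words, count) in totals.items()}

segments = [
    ("clausal", "ich glaube das stimmt so nicht"),
    ("clausal", "dann machen wir das morgen"),
    ("non-clausal", "ja genau"),
]
means = mean_words_per_segment(segments)
```

Comparing these per-type means across interactional settings is the kind of contrast the study uses to show that segmentation reflects properties of the interaction.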
A tale of many stories: explaining policy diffusion between European higher education systems
(2013)
The thesis "A Tale of Many Stories: Explaining Policy Diffusion between European Higher Education Systems" systematically examines diffusion processes and their effects with regard to a rather neglected policy area: European higher education policy. The thesis contributes to the slowly growing number of comparative and mechanism-based studies on policy diffusion and represents the first study on the diffusion of policies between European higher education systems. The main aim is to contrast and compare testable and coherent explanatory models of the functioning of different diffusion mechanisms. Three sets of explanatory models on the relationship between the variables triggering and conditioning diffusion mechanisms and their impact on policy adoption are drawn from mechanism-based thinking on policy diffusion: learning, socialization, and externalities. These approaches conceptualize the policy process in terms of interdependencies between international and national actors. Explanatory models based on assumptions about domestic policies and the common responses of countries to similar policy problems extend this theoretical framework. The thesis is based on event history modelling of policy change and adoption in the higher education systems of 16 West European countries between the years 1980 and 1998. In total, 14 policy items describing performance-oriented reforms for public universities, ranging from the adoption of external quality assurance systems to tuition fees, are examined. Empirically, the main research question is which international, national and policy-specific factors cause and condition diffusion processes and the adoption of public policies. Evidence can be found for and against all four of the theoretical approaches tested. In comparison, many of the assumptions related to interdependencies lack robustness, whereas the common response model is the most stable one.
This does not mean that explanatory models based on interdependent decision-making are unsuitable for analysing policy diffusion in higher education. Rather, interdependency is a multi-dimensional concept that requires a comparative assessment of diffusion mechanisms. Some of the explanatory factors based on interdependent decision-making are still supported by the empirical analysis. From this point of view, the recommendation for analysing diffusion is to start with a model based on domestic politics that is successively extended by explanatory factors dealing with interdependencies between international and national actors. Diffusion variables matter, but they are only one side of the tale of policy diffusion.
Travel guides and travel reports constitute an important source for the generation and spread of popular geopolitical epistemes and assumptions. With regard to colonial attitudes and their possible perpetuation, it is therefore of great interest what kind of information such texts convey regarding (post)colonial places, and how they contextualize it. The paper compares descriptions of Qingdao (Tsingtau), a German colonized territory between 1897 and 1914, in travel guides and related material from colonial and postcolonial times and in different European languages. It investigates what differences can be found between these descriptions in relation to time, language, and medium (print or online) of publication. Of particular interest is the question whether, and in what ways, colonial perspectives are perpetuated in present-day (especially German) travel literature.
The Lehnwortportal Deutsch (2012 seqq.) serves as an integrated online information system on German lexical borrowings into other languages, synthesizing an increasing number of lexicographical dictionaries and providing basic cross-resource search options. The paper discusses the far-reaching revision of the system’s conceptual, lexicographical and technological underpinnings currently under way, focussing on their relevance for multilingual loanword lexicography.
We present SPLICR, the Web-based Sustainability Platform for Linguistic Corpora and Resources. The system is aimed at people who work in linguistics or computational linguistics: a comprehensive database of metadata records can be explored in order to find language resources that could be appropriate for one's specific research needs. SPLICR also provides a graphical interface that enables users to query and to visualise corpora. The project in which the system is developed aims at sustainably archiving the ca. 60 language resources that have been constructed in three collaborative research centres. Our project has two primary goals: (a) to process and to archive the resources sustainably so that they are still available to the research community in five, ten, or even twenty years' time; (b) to enable researchers to query the resources both on the level of their metadata and on the level of linguistic annotations. In more general terms, our goal is to enable solutions that leverage the interoperability, reusability, and sustainability of heterogeneous collections of language resources.
In this paper we present an experimental semantic search function, based on word embeddings, for an integrated online information system on German lexical borrowings into other languages, the Lehnwortportal Deutsch (LWPD). The LWPD synthesizes an increasing number of lexicographical resources and provides basic cross-resource search options. Onomasiological access to the lexical units of the portal is a highly desirable feature for many research questions, such as the likelihood of borrowing lexical units with a given meaning (Haspelmath & Tadmor, 2009; Zeller, 2015). The search technology is based on multilingual pre-trained word embeddings, and individual word senses in the portal are associated with word vectors. Users may select one or more among a very large number of search terms, and the database returns lexical items with word sense vectors similar to these terms. We give a preliminary assessment of the feasibility, usability and efficacy of our approach, in particular in comparison to search options based on semantic domains or fields.
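The onomasiological search described above can be illustrated with a minimal sketch: each word sense is associated with an embedding vector, and a query returns the senses whose vectors are most similar (by cosine similarity) to the search term. The sense labels and vectors below are purely illustrative assumptions, not the LWPD's actual data or API.

```python
import numpy as np

# Hypothetical word-sense vectors; in the LWPD these would come from
# multilingual pre-trained word embeddings. All entries here are made up.
senses = {
    "kartoffel (potato)": np.array([0.9, 0.1, 0.0]),
    "arbeit (work)":      np.array([0.0, 0.8, 0.2]),
    "erdapfel (potato)":  np.array([0.85, 0.15, 0.05]),
}

def search(query_vec, top_k=2):
    """Rank word senses by cosine similarity to a query vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(senses, key=lambda s: cos(query_vec, senses[s]), reverse=True)
    return ranked[:top_k]

# A query vector close to the "potato" region retrieves both near-synonyms,
# which is the point of onomasiological access across resources.
print(search(np.array([1.0, 0.0, 0.0])))
```

A real system would replace the dictionary with an approximate-nearest-neighbour index, but the ranking principle is the same.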
In September 1522, “Das newe Testament Deutzsch” appeared in Wittenberg in a print run of over 3,000 copies and sold out within a week. Martin Luther, who at his own request is not named on the title page, had produced the translation at the Wartburg in only eleven weeks and, shortly afterwards, spent five weeks revising it with his colleague and friend, the Greek scholar Philipp Melanchthon, particularly with regard to the Greek original. The history of the revisions of the Luther Bible begins in that same year: already for the December reprint, Luther revised this so-called “Septemberevangelium” in many places. His translation of the Old Testament subsequently appeared in parts, and the complete translation of the Bible in 1534. Luther continued to correct the biblical text unceasingly up to the edition of 1545, the Luther Bible of “last hand” (letzter Hand).
Abertura/Opening
(2010)
Qualification programmes such as “Perspektive für Flüchtlinge Plus” (PerFPlus) can be regarded as important components of the new welcome culture in Germany. As a country of immigration, Germany can use such initiatives to recruit specifically for fields of work and occupational groups that lack new talent. For newly arrived immigrants, they offer a chance to find their bearings in the local working world and to explore occupational fields that were previously unknown to them, or known only in a different form. On the other hand, such programmes also carry a risk: if they miss their target and generate frustration on both sides, the consequences are long holding patterns, unemployment, and possibly political polarization and radicalization. Rapid intervention to improve such programmes is therefore essential. This report is intended to support the conceptual teams at the Federal Employment Agency (Bundesagentur für Arbeit, BA) and at education providers cooperating with the BA in their important tasks. All partners remain anonymous in the report.
In the coming years, Germany faces enormous challenges. With the refugee migration of almost 1.5 million people between 2014 and 2017 alone, major integration tasks lie ahead in nearly every area of society, particularly in the sectors of education and work. Steven Vertovec, director of the Max Planck Institute for the Study of Religious and Ethnic Diversity, therefore describes the refugee migration of 2015 as the “second turning point” (Vertovec 2015) for Germany, one that will change the country lastingly. In his assessment, the social transformations will be so profound that the phrase “since the refugee crisis” will become as common an expression as the phrase “since the Wende”.
In this context, vocational qualification programmes such as “GASTRO” in the Rhine-Neckar region are very important efforts towards the structural integration of refugee migrants. Within society as a whole, they are indispensable components of the new welcome culture that Germany has been trying to establish since the 2010s. As a country of immigration, Germany can use such initiatives to recruit specifically for fields of work and occupational groups that lack new talent. For newly arrived immigrants, they offer the chance to find their bearings in the local working world and possibly to explore occupational fields that were previously unknown to them, or known only in a different form.
The working group was constituted in the context of the workshop “Cross-References between Knowledge Engineering and Methods of Software Engineering and the Development of Information Systems” at the 2nd German Conference on Expert Systems [AnS93]. Initially, ten different groups and individuals participated. To focus its work, the working group decided to concentrate primarily on the topics of process models and methods. A process model was understood as the “specification of the work steps to be carried out in the development of a system, ... relationships between the work steps are to be specified, as are requirements on the results to be produced” [AL0+93]. A method was understood as a “systematic procedure for solving tasks of a certain kind” [AL0+93]. Accordingly, the working group used the term methodology in the sense of a collection of methods. The group also agreed to carry out its work on the basis of a comparative case study. As a modification of the frequently used IFIP example [0SV82], the development of a (knowledge-based) system for conference administration was chosen as the task for the case study. In the course of its work, the working group organized a further workshop, “Process Models and Methods for the Development of Complex Software Systems”, held at the 18th German Annual Conference on Artificial Intelligence [KuS94]. Unfortunately, the group’s ongoing work showed that it is very difficult, especially for members from industry, to participate actively in such a working group over a longer period of time. Thus only four groups remained for the final phase of the working group; they are also represented in this final report.
It should therefore be clear that this final report cannot be an analysis covering all aspects, but must instead confine itself to conclusions that are possible on the basis of the methodologies analysed. Nevertheless, in the authors’ view, these methodologies embody typical methodological procedures in the disciplines involved. To enable a systematic comparison of the methodologies, the working group developed a catalogue of criteria with which the characteristic properties of a methodology can be captured [Kri97]. This catalogue of criteria is used below to characterize each of the four methodologies in detail.
Abstrakte Nomina. Vorarbeiten zu ihrer Erfassung in einem zweisprachigen syntagmatischen Wörterbuch
(1998)
This volume presents the results of a German-French cooperation project. At its centre is a concept for the treatment of abstract nouns in a bilingual syntagmatic dictionary, German-French/French-German. The nouns are regarded as predicates with argument structures which, together with support verbs (verbes supports), form the core of a sentence. In addition to a detailed syntactic and semantic characterization of the arguments, particular attention is paid to the appropriate treatment of collocations, idiomatic phrasemes, and compounds. The conception developed here has since served as a model for a German-Hungarian valency dictionary of nouns, whose concept is also discussed. Further contributions take up, in a broad sweep, topics of discussion that are relevant within the overall framework of the German-French joint project.
The Manatee corpus management system on which the Sketch Engine is built is efficient, but unable to harness the power of today’s multiprocessor machines. We describe a new, compatible implementation of Manatee which we develop in the Go language and report on the performance gains that we obtained.
Based on a corpus analysis, this contribution presents selected semantic and syntactic properties of AcI constructions with perception verbs in German, Italian, and Hungarian. It focuses primarily on properties that have received little attention in previous research. The main aim is to uncover syntactic properties of the construction that differ from those of sentences with a less marked syntactic structure. The degree of grammaticalization of the construction in the individual languages under comparison is also addressed.
This study investigates high vowel laxing in the Louisiana French of the Lafourche Basin. Unlike Canadian French, in which the high vowels /i, y, u/ are traditionally described as undergoing laxing (to [I, Y, U]) in word-final syllables closed by any consonant other than a voiced fricative (see Poliquin 2006), Oukada (1977) states that in the Louisiana French of Lafourche Parish, any coda consonant will trigger high vowel laxing of /i/; he excludes both /y/ and /u/ from his discussion of high vowel laxing. The current study analyzes tokens of /i, y, u/ from pre-recorded interviews with three older male speakers from Terrebonne Parish. We measured the first and second formants and duration for high vowel tokens produced in four phonetic environments, crossing syllable type (open vs. closed) by consonant type (voiced fricative vs. any consonant other than a voiced fricative). Results of the acoustic analysis show optional laxing for /i/ and /y/ and corroborate the finding that high vowels undergo laxing in word-final closed syllables, regardless of consonant type. Data for /u/ show that the results vary widely by speaker, with the dominant pattern (shown by two out of three speakers) that of lowering and backing in the vowel space of closed syllable tokens. Duration data prove inconclusive, likely due to the effects of stress. The formant data published here constitute the first acoustic description of high vowels for any variety of Louisiana French and lay the groundwork for future study on these endangered varieties.
This paper presents Release 2.0 of the SALSA corpus, a German resource for lexical semantics. The new corpus release provides new annotations for German nouns, complementing the existing annotations of German verbs in Release 1.0. The corpus now includes around 24,000 sentences with more than 36,000 annotated instances. It was designed with an eye towards NLP applications such as semantic role labeling but will also be a useful resource for linguistic studies in lexical semantics.
This paper addresses long-term archival for large corpora. Three aspects specific to language resources are focused, namely (1) the removal of resources for legal reasons, (2) versioning of (unchanged) objects in constantly growing resources, especially where objects can be part of multiple releases but also part of different collections, and (3) the conversion of data to new formats for digital preservation. It is motivated why language resources may have to be changed, and why formats may need to be converted. As a solution, the use of an intermediate proxy object called a signpost is suggested. The approach will be exemplified with respect to the corpora of the Leibniz Institute for the German Language in Mannheim, namely the German Reference Corpus (DeReKo) and the Archive for Spoken German (AGD).
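The “signpost” idea from the abstract above can be sketched as a small data structure: releases and collections reference the proxy object rather than the data object itself, so removal, versioning, and format conversion only touch the signpost. This is an illustrative sketch under assumed field names, not the actual DeReKo/AGD archival schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Signpost:
    """Proxy object standing between releases/collections and the archived data.

    All field names are assumptions for illustration; the real archive's
    metadata model is not reproduced here.
    """
    object_id: str
    current_location: Optional[str]              # None once removed for legal reasons
    format_history: list = field(default_factory=list)
    member_of: set = field(default_factory=set)  # releases/collections that reference it

    def convert(self, new_location, new_format):
        """Point at a converted copy; referencing releases stay untouched."""
        self.current_location = new_location
        self.format_history.append(new_format)

    def remove(self):
        """Withdraw the object (e.g. for legal reasons) while keeping its identity."""
        self.current_location = None

# A document kept across two releases survives a format migration:
sp = Signpost("DeReKo-doc-42", "/archive/doc42.sgml", ["SGML"], {"Release-2019-I"})
sp.convert("/archive/doc42.xml", "TEI-XML")
sp.member_of.add("Release-2020-I")
print(sp.current_location, sp.format_history)
```

The point of the indirection is that both releases still resolve through the same `object_id`, even though the bytes behind it have changed or disappeared.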
Adieu, Fremdwort!
(1991)
In German there are about twenty-five elements (like gemäß, nahe, voll) that seem to be used as a preposition along with their use as an adjective. In former approaches the preposition is interpreted as the product of grammaticalizing (and/or reanalyzing) the adjective. It is argued that the two criteria these approaches rely on, namely change of linear position and change of case government, are insufficient. In this paper, seven criteria for distinguishing adjectives from prepositions in German are put forward. What is most important is that these criteria have to be evaluated on the token level as well as on the level of type and word class/syntactic category. It can be shown that the individual ‘adjective-prepositions’ as types possess a specific mixture of adjective-like and preposition-like features. On the token level, occurring as part of a postnominal restrictive attribute is indicative of preposition-like status in German. The comparison of German with English and Italian adjective-prepositions (like near, far, due and vicino, lontano) reveals many differences, which counts as evidence for the language-specific nature of word classes. Nevertheless, Lehmann’s functional-typological approach uncovers a fundamental functional similarity between complement-governing adjectives and prepositions: the primary function of the phrases, i.e., adjective/preposition + complement, is to modify a nominal or a verbal concept, respectively. This insight explains why adjective-prepositions can be found cross-linguistically. The question whether we should propose one type or two types for gemäß and its cognates is of minor importance only.
The adnominal (attributive) uses of temporal and local adverbs in German are examined and compared with those in four other European neighbouring languages: English, French, Polish, and Hungarian. It is shown how these languages use different linking strategies to employ adverbs in attributive function. Three such strategies are distinguished: juxtaposition, adjectivization, and formal linking. The linking strategies are distributed differently across the comparison languages and are dominant to different degrees. If a language has two or more linking strategies, these can, depending on the semantic subclass of the attribute, correlate with different semantic restrictions and effects. We refer to these as temporal or local compatibility, persistence, and oppositivity. In part, cross-linguistic form-function correlations between linking strategies and semantic restrictions or effects can be identified. Thus adjectival and formally linked attributes can encode persistence and oppositivity, whereas juxtaposed ones fundamentally cannot.
Besides English, Afrikaans is considered “the [Germanic] language which deviates grammatically the farthest from the others” (Harbert 2007: 17). But how exactly do we measure “grammatical deviation”, and how deviant is Afrikaans really if we compare it not just to other standard languages but also to non-standard varieties? The present contribution aims to address those questions combining functional-typological and dialectometric perspectives. We first select data for 28 Germanic varieties showing vastly different speaker numbers, grades of standardisation and amounts of language contact. Based on 48 (micro)typological variables from syntax, morphology and phonology, we perform cluster analysis and multidimensional scaling and present ways of visualizing and interpreting the results. Inter alia, the analyses show a major divide between Continental West Germanic and North Germanic (as might be expected) and they also identify a number of outliers, including English and pidgin and creole languages such as Russenorsk or Rabaul Creole German. Afrikaans appears to cluster with the other West Germanic languages rather than the outliers. Within West Germanic, however, it does indeed emerge as rather deviant and, according to our metric, it is, for example, typologically closer to other high-contact varieties such as Yiddish than it is to Dutch.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). 277–294. doi:10.1515/cllt.2005.1.2.277: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age–related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form “The opposite of black is white/yellow/nice.” Replicating previous work in young adults, results showed a target-related P300 for the expected antonym (“white”; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, “nice,” versus the incongruous associated condition, “yellow”). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that – at both a neurophysiological and a functional level – the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance.
The paper presents the process of developing the AirFrame database, a specialized lexical resource in which aviation terminology is defined in the form of semantic frames, following the methodology of the Berkeley FrameNet (FN). First, the structure of the database is presented, and then the methodology applied in developing and populating the database is described. The link between specialized aviation frames and general language semantic frames, of which frames defining entities, processes, attributes and events are particularly relevant, is discussed on the example of the semantic frame of Flight and its related frames. The paper ends with discussing possibilities of using AirFrame as a model for further developing resources in which general and specialized knowledge are linked.
Aktuelle Regionalsprachforschung zum Deutschen. Das IDS-Projekt Variation des gesprochenen Deutsch
(2010)
In recent years, a number of topics relating to the German language have developed into controversies of language policy that are now debated with great intensity. These include gender-fair language, which, through various legal and journalistic impulses, has a still-growing presence in the media and the public sphere. The topic of so-called politically correct language use likewise leads to polarized debates. This contribution aims to trace the main lines of these debates and to show how these topics, mediated by the media and the «Verein Deutsche Sprache», have found their way into the political sphere. From a linguistic point of view, it is important to draw the boundaries of the political in such a way that the language itself suffers no harm in such controversies.
This paper presents a dictionary writing system developed at the Institute for the German Language in Mannheim (IDS) for an ongoing international lexicographical project that traces the way of German loanwords in the East Slavic languages Russian, Belarusian and Ukrainian that were possibly borrowed via Polish. The results will be published in the Lehnwortportal Deutsch (LWP, lwp.ids-mannheim.de), a web portal for loanword dictionaries with German as the common donor language. The system described here is currently in use for excerpting data from a large range of historical and contemporary East Slavic monolingual dictionaries. The paper focuses on the tools that help in merging excerpts that are etymologically related to one and the same Polish etymon. The merging process involves eliminating redundancies and inconsistencies and, above all, mapping word senses of excerpted entries onto a common cross-language set of ‘metasenses’. This mapping may involve literally hundreds of excerpted East Slavic word senses, including quotations, for one ‘underlying’ Polish etymon.
Research on argumentation still suffers from a glaring lack of empirical grounding. The conversation-analytic study of natural conversations reveals the difficulties both in determining the boundaries of argumentation and in identifying its internal structures. Starting from the theoretical conception of the constitution of interaction in the sense of Kallmeyer and Schütze, this contribution attempts to define the object ‘arguing in conversation’ on the basis of the constitutive properties of interaction itself. It turns out that participants argue when the handling of overarching action tasks is endangered or blocked by a deficit of presentation. Argumentation is internally organized in five sequential steps, and the argumentation sequence can be expanded and condensed in various ways. Sequential structure and variability ensure interactive control of events and maximum flexibility, which makes arguing a practicable, solution-oriented interactional procedure.
Alles verstehen heißt alles verzeihen (‘to understand all is to forgive all’) is a sentence that has taken on the character of a saying, a winged word, in German, and that is probably based on a quotation from Madame de Staël’s “Corinne ou l’Italie” (1807): (tout) comprendre c’est (tout) pardonner. This sentence was translated into German and handed down as Alles verstehen heißt alles verzeihen. The form of a saying, a winged word, is generally very constant. The tendency towards grammatical variation is slight even where such variation would be possible under current grammatical rules.
Alltagsgespräche
(2001)
Allusion
(2023)
Almanca tuhfe / Deutsches Geschenk (1916) oder: Wie schreibt man deutsch mit arabischen Buchstaben?
(2022)
Versified dictionaries are bilingual/multilingual glossaries written in verse form to teach essential words of a foreign language. In Islamic culture, versified dictionaries were produced to teach the Arabic language to the young generations of Muslim communities whose native language was not Arabic. In the course of time, many bilingual/multilingual versified dictionaries were written in different languages throughout the Islamic world. The focus of this study is the Turkish-German versified dictionary titled Almanca Tuhfe / Deutsches Geschenk [German Gift], published by Dr. Sherefeddin Pasha in Istanbul in 1916. This dictionary is the only dictionary in verse ever written combining these two languages. Moreover, the dictionary is one of the few texts containing German words written in Arabic letters (applying Ottoman spelling conventions). The study concentrates on the way German words are spelled and tries to find out whether Sherefeddin Pasha applied something like fixed rules in writing the German lexemes.
Ageing is a task that all people have to cope with, each in quite different ways, and in which they actively participate. Ageing is thus not something that merely happens to or befalls a person; rather, it takes place in a social process in which those involved engage with ageing and shape it interactively. As a task, ageing therefore also implies reflecting on the changes that occur over the course of life and working through them interactively and communicatively. In the communicative handling of these changes, identity work is performed and aspects of age identity are formed. Engagement with the identity features of the middle generation plays a central role here. This contribution models these interdependencies between ageing, communication, and identity work.
Am Anfang ist das Wort
(2017)
Am Anfang war die Lücke
(2012)
While reading, one stumbles over the inconspicuous article den. Shouldn’t that be dem? Correct. The local adjunct am Stadioneingang and the temporal adjunct am Sonntag are in the dative, as can be clearly recognized from the definite article dem, which here has merged with the preposition an to form am. And the article that follows the comma and introduces the appended element known as a ‘loose apposition’ likewise refers to Stadioneingang or Sonntag and should agree with this head noun, that is, it should also stand in the dative, and not, as in the examples, in the accusative.