Are borrowed neologisms accepted more slowly into the German language than German words resulting from the application of word formation rules? This study addresses this question by focusing on two possible indicators for the acceptance of neologisms: a) the frequency development of 239 German neologisms from the 1990s (loanwords as well as new words formed by the application of word formation rules) in the German reference corpus DeReKo, and b) the frequency development in the use of pragmatic markers (‘flags’, namely quotation marks and phrases such as sogenannt ‘so-called’) with these words. The second part of the article outlines a psycholinguistic approach to evaluating the (psychological) status of different neologisms and non-words in an experimentally controlled study, as well as plans to carry out interviews in a field test to collect speakers’ opinions on the acceptance of the analysed neologisms. Finally, implications for the lexicographic treatment of both types of neologisms are discussed.
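The two acceptance indicators described above can be sketched as a small computation: relative frequency per million words and the share of flagged uses, per year. This is a minimal illustration in Python; the counts in the usage example are invented, not results from DeReKo.

```python
def acceptance_indicators(hits, flagged, corpus_size):
    """Per-year relative frequency (per million words) and flag ratio
    for one neologism. All arguments map year -> count."""
    out = {}
    for year in sorted(hits):
        out[year] = (
            hits[year] / corpus_size[year] * 1_000_000,  # relative frequency (pmw)
            flagged[year] / hits[year],                  # share of flagged uses
        )
    return out

# Hypothetical counts for one neologism (illustrative only)
corpus_size = {1995: 120_000_000, 2005: 260_000_000}
hits = {1995: 30, 2005: 900}                 # total occurrences
flagged = {1995: 18, 2005: 90}               # with quotation marks or 'sogenannt'

for year, (pmw, ratio) in acceptance_indicators(hits, flagged, corpus_size).items():
    print(f"{year}: {pmw:.2f} per million words, {ratio:.0%} flagged")
```

On this pattern, increasing relative frequency combined with a falling flag ratio would suggest growing acceptance of the word.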
We present web services implementing a workflow for transcripts of spoken language following TEI guidelines, in particular ISO 24624:2016 "Language resource management - Transcription of spoken language". The web services are available at our website and will be available via the CLARIN infrastructure, including the Virtual Language Observatory and WebLicht.
For a long time, the lecture dominated performatively presented scientific communication. Given academic traditions, it is possible to make a connection between the lecture and classical rhetoric, a highly differentiated instrument of analysis. The tradition of the lecture has been perpetuated in the presentation of research results, first in the use of transparencies and subsequently through computer-based projections. Yet the use of media technology has also allowed new practices to emerge, including mediation practices hitherto neglected in the theory of rhetoric.
Construction-based language models assume that grammar is meaningful and learnable from experience. Focusing on five of the most elementary argument structure constructions of English, a large-scale corpus study of child-directed speech (CDS) investigates exactly which meanings/functions are associated with these patterns in CDS, and whether they are indeed specially indicated to children by their caretakers (as suggested by previous research, cf. Goldberg, Casenhiser and Sethuraman 2004). Collostructional analysis (Stefanowitsch and Gries 2003) is employed to uncover significantly attracted verb-construction combinations, and attracted pairs are classified semantically in order to systematise the attested usage patterns of the target constructions. The results indicate that the structure of the input may aid learners in making the right generalisations about constructional usage patterns, but such scaffolding is not strictly necessary for construction learning: not all argument structure constructions are coherently semanticised to the same extent (in the sense that they designate a single schematic event type of the kind envisioned in Goldberg’s [1995] ‘scene encoding hypothesis’), and they also differ in the extent to which individual semantic subtypes predominate in learners’ input.
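The collostructional logic can be illustrated with a small sketch: association between a verb and a construction is measured from a 2×2 contingency table of corpus counts. The version below uses the log-likelihood ratio (G²) as the association measure; Stefanowitsch and Gries (2003) themselves use the Fisher–Yates exact test, and all counts in the usage example are invented.

```python
import math

def g2(verb_in_cxn, verb_total, cxn_total, corpus_total):
    """Log-likelihood (G²) association between a verb and a construction,
    computed from the 2x2 table implied by the four marginal counts."""
    a = verb_in_cxn                  # verb inside the construction
    b = verb_total - a               # verb outside the construction
    c = cxn_total - a                # other verbs in the construction
    d = corpus_total - a - b - c     # everything else
    total = a + b + c + d
    g = 0.0
    for obs, exp in [
        (a, (a + b) * (a + c) / total),
        (b, (a + b) * (b + d) / total),
        (c, (c + d) * (a + c) / total),
        (d, (c + d) * (b + d) / total),
    ]:
        if obs > 0:                  # 0 * log(0) contributes nothing
            g += obs * math.log(obs / exp)
    return 2 * g

# A verb occurring in a construction far more often than chance predicts
# (expected count here is 60*100/10000 = 0.6, observed is 40) scores high:
strongly_attracted = g2(verb_in_cxn=40, verb_total=60,
                        cxn_total=100, corpus_total=10_000)
```

Verbs can then be ranked by this score to identify the significantly attracted collexemes of each target construction.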
A large database is a desirable basis for multimodal analysis. The development of more elaborate methods, databases, and tools for a stronger empirical grounding of multimodal analysis is a prevailing topic within multimodality research. A prerequisite for this are corpora of multimodal data. Our contribution aims at developing a proposal for gathering and building multimodal corpora of audio-visual social media data, predominantly YouTube data.

Our contribution has two parts. First, we outline a participation framework which is able to represent the complexity of YouTube communication. To this end we ‘dissect’ the different communicative and multimodal layers YouTube consists of. Besides the video performance, YouTube also integrates comments, social media operators, commercials, and announcements for further YouTube videos. The data consist of various media and modes and are interactively engaged in various discourses. Hence, it is rather difficult to decide what can be considered a basic communicative unit (or a ‘turn’) and how it can be mapped. Another decision to be made is which elements have higher priority than others and thus have to be integrated into an adequate transcription format. We illustrate our conceptual considerations with the example of so-called Let’s Plays, which present and comment on computer gaming processes.

The second part is devoted to corpus building. Most previous studies either worked with ad hoc data samples or outlined data mining and data sampling strategies. Our main aim is to delineate, in a systematic way and based on the conceptual outline in the first part, the elements which should be part of a YouTube corpus. To this end we describe in a first step which components (e.g., the video itself, the comments, the metadata, etc.) should be captured. In a second step we outline why and which relations (e.g., screen appearances, hypertextual structures, etc.) should become part of the corpus.
In sum, our contribution aims at outlining a proposal for gathering and systematizing multimodal data, specifically audio-visual social media data, in a corpus derived from a conceptual modeling of important communicative processes of the research object itself.
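The components and relations enumerated above (video, comments, metadata, hypertextual links between them) can be sketched as a data model. This is a minimal illustration only; the field names are hypothetical and not part of the authors' proposal.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Comment:
    """One comment on a video, with an optional reply relation."""
    author: str
    text: str
    timestamp: str
    reply_to: Optional[str] = None   # hypertextual relation to another comment

@dataclass
class YouTubeCorpusItem:
    """One captured communicative unit: the video plus its surrounding layers."""
    video_id: str
    title: str
    channel: str
    video_file: str                                   # path to captured A/V data
    metadata: dict = field(default_factory=dict)      # views, upload date, ...
    comments: list = field(default_factory=list)      # Comment objects
    annotations: list = field(default_factory=list)   # transcription layers
```

Modelling comments and relations explicitly, rather than as flat text, is what allows the corpus to preserve the interactional structure the first part of the contribution describes.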
Language attitudes matter; they influence people’s behaviour and decisions. Therefore, it is crucial to learn more about patterns in the way that languages are evaluated. One means of doing so is using a quantitative approach with data representative of a whole population, so that results mirror dispositions at a societal level. This kind of approach is adopted here, with a focus on the situation in Germany. The article consists of two parts. First, I will present some results of a new representative survey on language attitudes in Germany (the Germany Survey 2017). Second, I will show how language attitudes penetrate even seemingly objective data collection processes by examining the German Microcensus. In 2017, for the first time in eighty years, the German Microcensus included a question on language use ‘at home’. Unfortunately, however, the question was clearly tainted by language attitudes instead of being objective. As a result, the Microcensus significantly misrepresents the linguistic reality of different migrant languages spoken in Germany.
Modern theoretical linguistics lives by the insight that the meanings of complex expressions derive from the meanings of their parts and the way these are composed. However, the currently dominating theories of the syntax-semantics interface hastily relegate important aspects of meaning which cannot readily be aligned with visible structure to empty projecting heads non-reductively (mainstream Generative Grammar) or to the syntactic construction holistically (Construction Grammar). This book develops an alternative, compositional analysis of the hidden aspectual-temporal, modal and comparative meanings of a range of productive constructions, of which pseudoreflexive, excessive and directional complement constructions take center stage. Accordingly, a contradiction-inducing, hence semantically problematic, part of literally coded meaning is locally ignored and systematically realized ‘expatriately’ with respect to parts of structure that achieve the indexical anchoring of propositional contents in terms of times, worlds and standards of comparison, thus yielding the observed hidden meanings.
This investigation targets a syntactic phenomenon of German which is commonly referred to as the absentive construction. The absentive is considered a universal grammatical category denoting absence. Its syntax is characterised by the occurrence of an auxiliary or copula verb accompanied by a non‐finite VP containing a main verb. The expression of absence, predicated over the clausal subject, is assumed to be based on a constructional meaning. Reviewing a wide range of syntactic and interpretive properties of this structure in German, we will demonstrate that certain empirical claims about the construction are not well founded and that its seemingly idiosyncratic properties are indeed available for compositional analyses. We will propose a structural analysis of its core syntactic and interpretive properties: The predication expresses the localisation of the subject at the location of the event, denoted by the infinitival verb. The interpretation of absence, then, can be explained by an implicature.
We propose a Cross-lingual Encoder-Decoder model that simultaneously translates and generates sentences with Semantic Role Labeling annotations in a resource-poor target language. Unlike annotation projection techniques, our model does not need parallel data during inference time. Our approach can be applied in monolingual, multilingual and cross-lingual settings and is able to produce dependency-based and span-based SRL annotations. We benchmark the labeling performance of our model in different monolingual and multilingual settings using well-known SRL datasets. We then train our model in a cross-lingual setting to generate new SRL labeled data. Finally, we measure the effectiveness of our method by using the generated data to augment the training basis for resource-poor languages and perform manual evaluation to show that it produces high-quality sentences and assigns accurate semantic role annotations. Our proposed architecture offers a flexible method for leveraging SRL data in multiple languages.