The starting point for the present pilot study are the dialect recordings of the Zwirner archive from the 1950s and the standard-language recordings of the Pfeffer archive from the early 1960s, together with the survey design underlying these recording campaigns. Using the recordings from the Rhine-Neckar region and new recordings made there specifically for this purpose, we examine how, starting from the surviving conversational recordings, a suitable new survey could be designed that would allow the change in everyday spoken language across an interval of four decades to be studied diachronically. This raises the question of what should count as language change within such a relatively short time span. To sidestep a discussion of this issue, language change is understood in what follows as both socio-stylistic change and system change on the various grammatical levels, since it is to be expected that, within the short period of little more than one generation, changes in speakers' language use lie in the linguistic repertoire and its rules of application rather than in the language system itself.
This contribution deals with the theory-empiricism problem in linguistics. Its starting point is the question of how the heterogeneity of language use can be integrated into a synchronic structural description, and how predictions about future language change can be drawn from synchronic data on language variation.
Zu diesem Bändchen
(1972)
The discussion of the question of which method should be used to obtain the empirical basis of linguistics has largely been reduced to the alternative of corpus versus linguistic intuition. These alternatives exist, however, only as long as linguistics sees its task in the synchronic structural description of the contemporary language. As soon as historical language states become the object of research, only recourse to corpora remains, because competent speakers are no longer available. In what follows it is briefly discussed whether the two empirical methods mentioned can be regarded as equivalent options, and whether other empirical procedures must not also be brought in so that a data basis corresponding to conventionalized language use can be established at all. This also raises the question of the conditions under which induction, deduction, or some kind of combined procedure is appropriate in linguistics. Using a test procedure for determining competence, developed for problems of mood usage in conditional clauses of present-day standard and written German, it is shown that the development of further empirical concepts is necessary if linguistics does not want to abandon its own demand that the rule system of a grammar should ultimately describe the linguistic norms of a speech community.
»In other words - was gschwind in English ded's mena?« : Beobachtungen zum Pennsylvaniadeutsch heute
(1997)
Subkultur und Sprache
(1971)
Noch einmal: Sprachverfall?
(1986)
We present an approach to investigating what kind of semantic information is regularly associated with the structural markup of scientific articles. This approach addresses the need for an explicit formal description of the semantics of text-oriented XML documents. The domain of our investigation is a corpus of scientific articles from psychology and linguistics, drawn from both English and German online journals. For our analyses, we provide XML markup representing two kinds of semantic levels: the thematic level (i.e. topics in the text world that the article is about) and the functional or rhetorical level. Our hypothesis is that these semantic levels correlate with the articles' document structure, which is also represented in XML. Articles have been annotated with the appropriate information. Each of the three informational levels is modelled in a separate XML document, since in our domain the different description levels might conflict, making it impossible to model them within a single XML document. For comparing and mining the resulting multi-layered XML annotations of an article, a Prolog-based approach is used. It focuses on the comparison of XML markup that is distributed among different documents. Prolog predicates have been defined for inferring relations between levels of information that are modelled in separate XML documents. We demonstrate how the Prolog tool is applied in our corpus analyses.
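The layer-comparison idea can be sketched in miniature. The element names, offset attributes, and toy layers below are invented for illustration; the project itself modelled each level in a separate XML document and compared them with Prolog predicates, not with this code.

```python
# Minimal sketch of comparing two standoff XML annotation layers that
# refer to the same base text via character offsets.
import xml.etree.ElementTree as ET

STRUCTURE = """<layer>
  <section start="0" end="40" type="introduction"/>
  <section start="40" end="80" type="method"/>
</layer>"""

RHETORIC = """<layer>
  <move start="0" end="20" type="background"/>
  <move start="45" end="70" type="procedure"/>
</layer>"""

def spans(xml_text):
    """Extract (start, end, type) triples from a standoff layer."""
    root = ET.fromstring(xml_text)
    return [(int(e.get("start")), int(e.get("end")), e.get("type"))
            for e in root]

def contained(inner_layer, outer_layer):
    """Pair each inner span with any outer span that fully contains it."""
    pairs = []
    for s1, e1, t1 in spans(inner_layer):
        for s2, e2, t2 in spans(outer_layer):
            if s2 <= s1 and e1 <= e2:
                pairs.append((t1, t2))
    return pairs

print(contained(RHETORIC, STRUCTURE))
# → [('background', 'introduction'), ('procedure', 'method')]
```

Keeping each layer in its own document, as here, avoids the overlap conflicts that make a single-document encoding impossible.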
The KorAP project (“Korpusanalyseplattform der nächsten Generation”, “corpus-analysis platform of the next generation”), carried out at the Institut für Deutsche Sprache (IDS) in Mannheim, Germany, has as its goal the development of a modern, state-of-the-art corpus-analysis platform, capable of handling very large corpora and opening perspectives for innovative linguistic research. The platform will facilitate new linguistic findings by making it possible to manage and analyse extremely large amounts of primary data and annotations, while at the same time allowing an undistorted view of the primary un-annotated text, and thus fully satisfying expectations associated with a scientific tool. The project started in July 2011 and is funded until June 2014. The demo presentation in December will be the first version following a preliminary feature freeze, and will open the alpha-testing phase of the project.
The paper reviews the results of work done in the context of TEI-Lex0, a joint ENeL / DARIAH / PARTHENOS initiative aimed at formulating guidelines for the encoding of retrodigitized dictionaries by streamlining and simplifying the recommendations of the “Print Dictionaries” chapter of the TEI Guidelines. TEI-Lex0 work is performed by teams concentrating on each of the main components of dictionary entries. The work presented here concerns proposals for constraining TEI-based encoding of orthographic, phonetic, and grammatical information on written and spoken forms of the lemma (headword), including auxiliary inflected forms. We also adduce examples of handling various types of orthographic and phonetic variants, as well as examples of handling the representation of inflectional paradigms, which have received less attention in the TEI Guidelines but which are nonetheless essential for properly exposing data content to the various uses that digitized lexica may have.
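As a rough illustration of the kind of lemma encoding under discussion, here is a simplified TEI-style entry handled with Python's standard library. The concrete entry and its exact structure are assumptions modelled loosely on TEI dictionary encoding, not the TEI-Lex0 recommendation itself.

```python
import xml.etree.ElementTree as ET

# Simplified TEI-style entry: a lemma form with orthography and
# pronunciation, one auxiliary inflected form, and a grammatical group.
ENTRY = """<entry>
  <form type="lemma">
    <orth>Haus</orth>
    <pron>haʊ̯s</pron>
  </form>
  <form type="inflected">
    <orth>Häuser</orth>
    <gram type="number">plural</gram>
  </form>
  <gramGrp>
    <gram type="pos">noun</gram>
    <gram type="gender">neuter</gram>
  </gramGrp>
</entry>"""

def lemma_and_pos(entry_xml):
    """Pull the headword orthography and part of speech out of an entry."""
    entry = ET.fromstring(entry_xml)
    orth = entry.findtext("./form[@type='lemma']/orth")
    pos = entry.findtext("./gramGrp/gram[@type='pos']")
    return orth, pos

print(lemma_and_pos(ENTRY))  # → ('Haus', 'noun')
```

Constraining entries to a predictable shape like this is what makes such simple, uniform queries possible across retrodigitized dictionaries.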
It is well known that the distribution of lexical and grammatical patterns is size- and register-sensitive (Biber 1986, and later publications). This fact alone presents a challenge to many corpus-oriented linguistic studies focusing on a single language. When it comes to cross-linguistic studies using corpora, the challenge becomes even greater due to the lack of high-quality multilingual corpora (Kupietz et al. 2020; Kupietz/Trawiński 2022) that are comparable with respect to size and register. That was the motivation for the creation of the European Reference Corpus EuReCo, an initiative started in 2013 at the Leibniz Institute for the German Language (IDS) together with several European partners (Kupietz et al. 2020). EuReCo is an emerging federated corpus, with large virtual comparable corpora across various languages and an infrastructure supporting contrastive research. The core of the infrastructure is KorAP (Diewald et al. 2016), a scalable open-source platform supporting the analysis and visualisation of properties of texts annotated with multiple and potentially conflicting information layers, and supporting several corpus query languages. Until recently, EuReCo consisted of three monolingual subparts: the German Reference Corpus DeReKo (Kupietz et al. 2018), the Reference Corpus of Contemporary Romanian Language (Barbu Mititelu/Tufiş/Irimia 2018), and the Hungarian National Corpus (Váradi 2002). The goal of the present submission is twofold. On the one hand, it reports on the new component of EuReCo: a sample of the National Corpus of Polish (Przepiórkowski et al. 2010). On the other hand, it presents the results of a new pilot study using the newly extended EuReCo. This pilot study investigates selected Polish collocations involving light verbs and their prepositional / nominal complements (Fig. 1) and extends the collocation analyses of German, Romanian and Hungarian (Fig. 2) discussed in Kupietz/Trawiński (2022).
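As a toy illustration of an association measure of the sort such collocation analyses rely on, the following computes pointwise mutual information for verb-complement pairs. The counts are invented, and the study itself uses KorAP's collocation analysis rather than this code.

```python
import math
from collections import Counter

# Invented toy observations of (light verb, complement) pairs.
pairs = (
    [("take", "decision")] * 8
    + [("take", "walk")] * 2
    + [("make", "decision")] * 1
    + [("make", "mistake")] * 9
)

pair_freq = Counter(pairs)
verb_freq = Counter(v for v, _ in pairs)
noun_freq = Counter(n for _, n in pairs)
total = len(pairs)

def pmi(verb, noun):
    """Pointwise mutual information: log2 of observed vs. expected co-occurrence."""
    p_joint = pair_freq[(verb, noun)] / total
    p_verb = verb_freq[verb] / total
    p_noun = noun_freq[noun] / total
    return math.log2(p_joint / (p_verb * p_noun))

print(round(pmi("take", "decision"), 2))  # → 0.83
```

A positive score means the pair co-occurs more often than its components' frequencies would predict; here "take decision" attracts while "make decision" repels, which is exactly the kind of contrast comparable corpora let one test across languages.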
The present article describes the first stage of the KorAP project, launched recently at the Institut für Deutsche Sprache (IDS) in Mannheim, Germany. The aim of this project is to develop an innovative corpus analysis platform to tackle the increasing demands of modern linguistic research. The platform will facilitate new linguistic findings by making it possible to manage and analyse primary data and annotations in the petabyte range, while at the same time allowing an undistorted view of the primary linguistic data, and thus fully satisfying the demands of a scientific tool. An additional important aim of the project is to make corpus data as openly accessible as possible in light of unavoidable legal restrictions, for instance through support for distributed virtual corpora, user-defined annotations and adaptable user interfaces, as well as interfaces and sandboxes for user-supplied analysis applications. We discuss our motivation for undertaking this endeavour and the challenges that face it. Next, we outline our software implementation plan and describe development to date.
The present paper describes Corpus Query Lingua Franca (ISO CQLF), a specification designed at ISO Technical Committee 37 Subcommittee 4 “Language resource management” for the purpose of facilitating the comparison of properties of corpus query languages. We overview the motivation for this endeavour and present its aims and its general architecture. CQLF is intended as a multi-part specification; here, we concentrate on the basic metamodel that provides a frame that the other parts fit in.
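A crude sense of what "comparing properties of corpus query languages" means can be given in a few lines. The feature labels and support sets below are informal, invented simplifications, not CQLF's actual metamodel categories.

```python
# Each query language is described by the set of expressive features it
# supports; the labels are informal stand-ins for the kinds of
# properties a CQLF-style comparison would catalogue.
features = {
    "CQP":   {"token-level regex", "sequence", "within-span"},
    "ANNIS": {"sequence", "within-span", "dominance"},
}

def compare(a, b):
    """Return the features only a supports and the features only b supports."""
    return features[a] - features[b], features[b] - features[a]

only_a, only_b = compare("CQP", "ANNIS")
print(sorted(only_a), sorted(only_b))  # → ['token-level regex'] ['dominance']
```

A shared metamodel plays the role of the `features` vocabulary here: without an agreed frame of reference, the per-language property sets could not be compared at all.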
In mid-2017, as part of our activities within the TEI Special Interest Group for Linguists (LingSIG), we submitted to the TEI Technical Council a proposal for a new attribute class that would gather attributes facilitating simple token-level linguistic annotation. With this proposal, we addressed community feedback complaining about the lack of a specific tagset for lightweight linguistic annotation within the TEI. Apart from @lemma and @lemmaRef, until now TEI encoders could only resort to using the generic attribute @ana for inline linguistic annotation, or to the quite complex system of feature structures for robust linguistic annotation, the latter requiring relatively complex processing even for the most basic types of linguistic features. As a result, there now exists a small set of basic descriptive devices which have been made available at the cost of only very small changes to the TEI tagset. The merit of a predefined TEI tagset for lightweight linguistic annotation is the homogeneity of tagging and thus better interoperability of simple linguistic resources encoded in the TEI. The present paper introduces the new attributes, makes a case for one more addition, and presents the advantages of the new system over the legacy TEI solutions.
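The effect of such inline attributes can be shown with a minimal example. The sentence, the tag values, and the use of @pos alongside @lemma are assumptions for illustration, not a quotation of the proposal.

```python
import xml.etree.ElementTree as ET

# A sentence annotated token by token with inline attributes, in the
# lightweight style the new attribute class is meant to support.
SENTENCE = """<s>
  <w lemma="the" pos="DET">The</w>
  <w lemma="cat" pos="NOUN">cat</w>
  <w lemma="sleep" pos="VERB">sleeps</w>
</s>"""

def tokens(xml_text):
    """Collect (surface form, lemma, part of speech) per <w> element."""
    root = ET.fromstring(xml_text)
    return [(w.text, w.get("lemma"), w.get("pos")) for w in root.iter("w")]

print(tokens(SENTENCE))
# → [('The', 'the', 'DET'), ('cat', 'cat', 'NOUN'), ('sleeps', 'sleep', 'VERB')]
```

The same information expressed as feature structures would require nested child elements per token, which is exactly the processing overhead the flat-attribute style avoids.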
Standards in CLARIN
(2022)
This chapter looks at a fragment of the ongoing work of the CLARIN Standards Committee (CSC) on producing a shared set of recommendations on standards, formats, and related best practices supported by the CLARIN infrastructure and its participating centres. What might at first glance seem to be a straightforward goal has over the years proven to be rather complex, reflecting the robustness and heterogeneity of the emerging distributed digital research infrastructure and the various disciplines and research traditions of the language-based humanities that it serves and represents, and therefore part of the chapter reviews the various initiatives and proposals that strove to produce helpful standards-related guidance. The focus turns next to a subtask initiated in late 2019, its scope narrowed to one of the core activities and responsibilities of CLARIN backbone centres, namely the provision of data deposition services. Centres are obligated to publish their recommendations concerning the repertoire of data formats that are best suited for their research profiles. We look at how this requirement has been met by the particular centres and suggest that having centres maintain their information in the Standards Information System (SIS) is the way to improve on the current state of affairs.
CoMParS is a resource under construction in the context of the long-term project German Grammar in European Comparison (GDE) at the IDS Mannheim. The principal goal of GDE is to create a novel contrastive grammar of German against the background of other European languages. Alongside German, which is the central focus, the core languages for comparison are English, French, Hungarian and Polish, representing different typological classes. Unlike traditional contrastive grammars available for German, which usually cover language pairs and are based on formal grammatical categories, the new GDE grammar is developed in the spirit of functionalist typology. This implies that, instead of formal criteria, cognitively motivated functional domains in terms of Givón (1984) are used as tertia comparationis. The purpose of CoMParS is to document the empirical basis of the theoretical assumptions of GDE-V and to illustrate the otherwise rather abstract content of grammar books with as many naturally occurring and adequately presented multilingual examples as possible, including information on their use in specific contexts and registers. These examples come from existing parallel corpora, and our presentation will focus on the legal aspects and consequences of this choice of language data.
The present contribution addresses an infrastructural issue of universal relevance, addressed in the specific context of the TEI. We describe a combination of open-source tools and an open-access approach to creating knowledge repositories that have been employed in building a bibliographic reference library for the “TEI for Linguists” special interest group (LingSIG). The authors argue that, for an initiative such as the TEI, it is important to choose open, freely available solutions. If these solutions have the advantage of attracting new users and promoting the initiative itself, so much the better, especially if it is done in a non-committal way: no one using the LingSIG bibliographic repository has to be a member of the LingSIG or a “TEI-er” in general.
Textsorten und Soziolekte : Funktion und Reziprozität in gesprochener und geschriebener Sprache
(1973)
Recent typological studies have shown that sociolinguistic factors have a substantial effect on at least certain structures of language. However, we are still far from understanding how such factors should be operationalized and how they interact with other factors in shaping grammar. To address both questions, this study examines the influence of sociolinguistic factors on the number of dedicated conditional constructions in a sample of 374 languages. We test the number of speakers, the degree of multilingualism, the availability of a literary tradition, the use of writing, and the use of the language in the education system. At the same time, we control for genealogical, contact, and bibliographical biases. Our results suggest that the number of speakers is the most informative predictor. However, we find that the association between the number of speakers and the number of dedicated conditional constructions is much weaker than assumed, once genealogical and contact biases are controlled for.
In this contribution we deal with moralizing speech acts, by which we understand discourse-strategic procedures in which the description of contested issues and required actions is tightly coupled with moral concepts. Vocabulary referring to moral values (such as "freedom", "security", or "credibility") is used to push through a demand that thereby appears incontestable and requires no further justification or legitimation. Our focus is accordingly on the phenomenon, salient from a pragma-linguistic perspective, of a specific discursive practice of ultimate justification or incontestability, which we conceive of and describe as a pragmeme. To this end, we first sketch the approach to practices of moralization located in linguistic pragmatics, examine linguistic forms of moralizing and their structural embedding in the sentence or text (i.e. co-textual and pragma-syntactic structural embeddings), and then formulate hypotheses about their contextual effects. On this basis, drawing on exemplary corpus evidence, we finally derive structural patterns of moralizing, which we condense into the philosophical-linguistic term "pragmeme" and operationalize by means of qualitative and quantitative analyses.