On the way to a digital culture: We humans are no longer the only ones who read and write - computers do it too. After millennia of holding a monopoly on writing, we had to surrender this bastion in the 21st century. Douglas Engelbart, the inventor of the computer mouse, foresaw the automation of writing as early as 1968.
This book shows how reading and writing change as the computer increasingly takes over these cultural techniques from us. Books, libraries and publishers, schools and universities, the press and censorship are already in deep upheaval - and not least our thinking itself. Henning Lobin describes the effects of computer-assisted techniques on our everyday lives and offers an outlook on the institutions, practices and values of a future "digital culture".
Dependenzstruktur
(2014)
Nektiv
(2014)
Translativ
(2014)
This paper presents challenges and opportunities resulting from the application of geographical information systems (GIS) in the (digital) humanities. First, we provide an overview of the intersection and interaction between geography (and cartography), and the humanities. Second, the “GeoBib” project is used as a case study to exemplify challenges for such collaborative, interdisciplinary projects, both for the humanists and the geoscientists. Finally, we conclude with an outlook on further applications of GIS in the humanities, and the potential scientific benefit for both sides, humanities and geosciences.
In the course of the mediatization of our everyday lifeworld, new opportunities for participation in social processes are emerging. Digital media in particular facilitate the joint negotiation, co-determination and shaping of our everyday lives, politics, the economy and culture. The authors in this volume examine which specific cultures of participation are developing in individual domains such as corporate communication, journalism, politics or among adolescents, and in what way these tendencies can be described as characteristic of a digital society. The aim of the present volume is to contribute to delineating the possible applications and limits of the concept of participation in research on digital media communication.
Schreiben nach Engelbart
(2014)
In 1968, Douglas Engelbart demonstrated for the first time with his On-Line System how a computer can be used as an interactive writing tool. This contribution retraces this primal scene of word processing, describes the main lines of development that digital writing has taken since then, and explains the central concepts that increasingly shape it: hybridity, multimediality and sociality.
The following article is an edited excerpt from Henning Lobin's "Engelbarts Traum. Wie der Computer uns Lesen und Schreiben abnimmt", Frankfurt am Main / New York: Campus, 2014.
This article provides insight into the GeoBib project and the problems of using historical maps, and the geodata derived from them, in a WebGIS. The GeoBib project aims to provide an annotated and georeferenced online bibliography of early German- and Polish-language Holocaust and camp literature from 1933 to 1949. For this period, historical maps and geodata are collected, processed and visualized in the WebGIS of the GeoBib portal. One particular challenge is the laborious research into geodata and map material for the period between 1933 and 1949. The problems concerning the research and subsequent visualization of historical geodata and map material are a main focus of this article. Furthermore, concepts for the visualization of historical, incomplete map material are presented, and a possible solution to the existing challenges is outlined.
Uncertain about Uncertainty: Different ways of processing fuzziness in digital humanities data
(2014)
The GeoBib project is constructing a georeferenced online bibliography of early Holocaust and camp literature published between 1933 and 1949 (Entrup et al. 2013a). Our immediate objectives include identifying the texts of interest in the first place, composing abstracts for them, researching their history, and annotating relevant places and times. Relations between persons, texts, and places will be visualized using digital maps and GIS software as an integral part of the resulting GeoBib information portal. The combination of diverse data from varying sources not only enriches our knowledge of these otherwise mostly forgotten texts; it also confronts us with vague, uncertain or even conflicting information. This situation yields challenges for all researchers involved – historians, literary scholars, geographers and computer scientists alike. While the project operates at the intersection of historical and literary studies, the involved computer scientists are in charge of providing a working environment (Entrup et al. 2013b) and processing the collected information in a way that is formalized yet capable of dealing with inevitable vagueness, uncertainty and contradictions. In this paper we focus on the problems and opportunities of encoding and processing fuzzy data.
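The vague temporal information described above can be made machine-processable in several ways. One minimal sketch (in Python, with invented class and field names, not the project's actual encoding) represents an uncertain date as an earliest/latest interval and tests two such dates for compatibility:

```python
from dataclasses import dataclass


@dataclass
class FuzzyDate:
    """An uncertain date, stored as an earliest/latest year interval."""
    earliest: int
    latest: int

    def overlaps(self, other: "FuzzyDate") -> bool:
        # Two fuzzy dates are compatible if their intervals intersect.
        return self.earliest <= other.latest and other.earliest <= self.latest


# "published in the late 1930s" vs. "published 1938 or 1939"
a = FuzzyDate(1935, 1939)
b = FuzzyDate(1938, 1939)
print(a.overlaps(b))  # True
```

Richer models (probability distributions over years, fuzzy membership functions) follow the same idea: replace a single value with a structure that records what is and is not known.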
Content analysis provides a useful and multifaceted methodological framework for Twitter analysis. CAQDAS tools support the structuring of textual data by enabling categorising and coding. Depending on the research objective, it may be appropriate to choose a mixed-methods approach that combines quantitative and qualitative elements of analysis and plays out their respective advantages to the greatest possible extent while minimising their shortcomings. In this chapter, we will discuss CAQDAS speech act analysis of tweets as an example of software-assisted content analysis. We start with some elementary thoughts on the challenges of the collection and evaluation of Twitter data before we give a brief description of the potentials and limitations of using the software QDA Miner (as one typical example of possible analysis programmes). Our focus will lie on analytical features that can be particularly helpful in speech act analysis of tweets.
Twitter Analytics
(2014)
In recent years, online research has increasingly engaged with micro-blogs, in particular the world's most popular provider, Twitter. A wide range of disciplines examine the communicative processes and structures of Twitter from their respective perspectives, drawing on a multitude of methodological approaches. This article first outlines the basic functions, the options for accessing the data structure, and methods of data collection and analysis. Subsequently, approaches from various disciplines are presented.
How self-determined can our use of the internet be? How much do we know about which digital traces we leave and who follows them?
How is the data generated while browsing used by third parties – with and without our knowledge? And is the felt nakedness in times of digitally surveillable, apparent transparency really acute, or is it shaped by traditional analogue structures of thought and experience?
Sentenz
(2014)
Nominalstil
(2014)
Metalepsis
(2014)
Katachrese
(2014)
Enthymem
(2014)
Topik
(2014)
Hyperbel
(2014)
Chiasmus
(2014)
Attizismus
(2014)
Topos
(2014)
Verbum proprium
(2014)
Einleitung
(2014)
“My Curiosity was Satisfied, but not in a Good Way”: Predicting User Ratings for Online Recipes
(2014)
In this paper, we develop an approach to automatically predict user ratings for recipes at Epicurious.com, based on the recipes’ reviews. We investigate two distributional methods for feature selection, Information Gain and Bi-Normal Separation; we also compare distributionally selected features to linguistically motivated features and two types of frameworks: a one-layer system where we aggregate all reviews and predict the rating vs. a two-layer system where ratings of individual reviews are predicted and then aggregated. We obtain our best results by using the two-layer architecture, in combination with 5 000 features selected by Information Gain. This setup reaches an overall accuracy of 65.60%, given an upper bound of 82.57%.
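As a rough illustration of the Information Gain criterion used for feature selection above, the following Python sketch (with an invented four-review toy corpus) scores a binary term-presence feature by how much it reduces the entropy of the rating labels:

```python
import math
from collections import Counter


def entropy(labels):
    """Shannon entropy of a label distribution, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())


def information_gain(docs, labels, term):
    """IG of a binary term-presence feature w.r.t. the rating labels."""
    with_t = [l for d, l in zip(docs, labels) if term in d]
    without = [l for d, l in zip(docs, labels) if term not in d]
    n = len(labels)
    conditional = (len(with_t) / n) * entropy(with_t) \
        + (len(without) / n) * entropy(without)
    return entropy(labels) - conditional


docs = [{"delicious"}, {"bland"}, {"delicious", "easy"}, {"bland", "salty"}]
labels = ["high", "low", "high", "low"]
print(information_gain(docs, labels, "delicious"))  # 1.0: the term perfectly separates the ratings
```

In practice one computes this score for every candidate term and keeps the top k (5 000 in the setup above); Bi-Normal Separation ranks the same candidates with a different statistic.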
We investigate how the granularity of POS tags influences POS tagging, and furthermore, how POS tagging performance relates to parsing results. For this, we use the standard “pipeline” approach, in which a parser builds its output on previously tagged input. The experiments are performed on two German treebanks, using three POS tagsets of different granularity, and six different POS taggers, together with the Berkeley parser. Our findings show that less granularity of the POS tagset leads to better tagging results. However, both too coarse-grained and too fine-grained distinctions on POS level decrease parsing performance.
Recent work on error detection has shown that the quality of manually annotated corpora can be substantially improved by applying consistency checks to the data and automatically identifying incorrectly labelled instances. These methods, however, cannot be used for automatically annotated corpora, where errors are systematic and cannot easily be identified by looking at the variance in the data. This paper targets the detection of POS errors in automatically annotated corpora, so-called silver standards, showing that by combining different measures sensitive to annotation quality we can identify a large part of the errors and obtain a substantial increase in accuracy.
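One of the simplest measures sensitive to annotation quality is disagreement between independent taggers. This minimal sketch (with hypothetical tags and tagger outputs; the paper combines several such measures, not just this one) flags token positions where an ensemble disagrees as candidate errors:

```python
def flag_suspicious(tokens, tagger_outputs):
    """Return positions where independent taggers disagree -
    in a silver standard these are candidate annotation errors."""
    suspicious = []
    for i in range(len(tokens)):
        tags_at_i = {tags[i] for tags in tagger_outputs}
        if len(tags_at_i) > 1:
            suspicious.append(i)
    return suspicious


tokens = ["der", "Mann", "schläft"]
outputs = [
    ["ART", "NN", "VVFIN"],   # tagger A
    ["ART", "NE", "VVFIN"],   # tagger B
]
print(flag_suspicious(tokens, outputs))  # [1]
```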
We discovered several recurring errors in the current version of the Europarl Corpus originating both from the web site of the European Parliament and the corpus compilation based thereon. The most frequent error was incompletely extracted metadata leaving non-textual fragments within the textual parts of the corpus files. This is, on average, the case for every second speaker change. We not only cleaned the Europarl Corpus by correcting several kinds of errors, but also aligned the speakers’ contributions of all available languages and compiled everything into a new XML-structured corpus. This facilitates a more sophisticated selection of data, e.g. querying the corpus for speeches by speakers of a particular political group or in particular language combinations.
This study presents the results of a large-scale comparison of various measures of pitch range and pitch variation in two Slavic (Bulgarian and Polish) and two Germanic (German and British English) languages. The productions of twenty-two speakers per language (eleven male and eleven female) in two different tasks (read passages and number sets) are compared. Significant differences between the language groups are found: German and English speakers use lower pitch maxima, narrower pitch span, and generally less variable pitch than Bulgarian and Polish speakers. These findings support the hypothesis that linguistic communities tend to be characterized by particular pitch profiles.
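Distributional measures of the kind compared in such studies are straightforward to compute from a speaker's voiced f0 samples. This Python sketch (with invented toy contours, not the study's data) derives pitch level, span in semitones, and variation:

```python
import math
import statistics


def pitch_profile(f0_hz):
    """Simple distributional pitch measures over voiced f0 samples (Hz)."""
    return {
        "level_median_hz": statistics.median(f0_hz),
        # span in semitones, the usual unit in cross-language comparisons
        "span_st": 12 * math.log2(max(f0_hz) / min(f0_hz)),
        "variation_sd_hz": statistics.stdev(f0_hz),
    }


# toy contours: a flatter speaker vs. a more variable one
flat = [110, 115, 120, 118, 112]
variable = [100, 150, 190, 160, 115]
print(pitch_profile(flat)["span_st"] < pitch_profile(variable)["span_st"])  # True
```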
This article presents preliminary results indicating that speakers have a different pitch range when they speak a foreign language compared to the pitch variation that occurs when they speak their native language. To this end, a learner corpus with French and German speakers was analyzed. Results suggest that speakers indeed produce a smaller pitch range in the respective L2. This is true for both groups of native speakers. A possible explanation for this finding is that speakers are less confident in their productions; they therefore concentrate more on segments and words and subsequently refrain from realizing pitch range in a more native-like way. For language teaching, the results suggest that learners should be trained extensively on the more pronounced use of pitch in the foreign language.
Designing a Bilingual Speech Corpus for French and German Language Learners: a Two-Step Process
(2014)
We present the design of a corpus of native and non-native speech for the language pair French-German, with a special emphasis on phonetic and prosodic aspects. To our knowledge there is no suitable corpus, in terms of size and coverage, currently available for the target language pair. To select the target L1-L2 interference phenomena, we prepared a small preliminary corpus (corpus1), which was analyzed for coverage and cross-checked jointly by French and German experts. Based on this analysis, target phenomena on the phonetic and phonological level were selected on the basis of the expected degree of deviation from native performance and the frequency of occurrence. 14 speakers produced both L2 material (either French or German) and L1 material (either German or French). This allowed us to test recording duration, recording material, and the performance of our automatic alignment software. We then built corpus2, taking into account what we had learned from corpus1. The aims are the same, but we adapted the speech material to avoid overly long recording sessions. 100 speakers will be recorded. The corpus (corpus1 and corpus2) will be prepared as a searchable database, available to the scientific community after completion of the project.
This study investigates cross-language differences in pitch range and variation in four languages from two language groups: English and German (Germanic) and Bulgarian and Polish (Slavic). The analysis is based on large multi-speaker corpora (48 speakers for Polish, 60 for each of the other three languages). Linear mixed models were computed that include various distributional measures of pitch level, span and variation, revealing characteristic differences across languages and between language groups. A classification experiment based on the relevant parameter measures (span, kurtosis and skewness values for pitch distributions for each speaker) succeeded in separating the language groups.
Recent work suggests that concreteness and imageability play an important role in the meanings of figurative expressions. We investigate this idea in several ways. First, we try to define more precisely the context within which a figurative expression may occur, by parsing a corpus annotated for metaphor. Next, we add both concreteness and imageability as “features” to the parsed metaphor corpus, by marking up words in this corpus using a psycholinguistic database of scores for concreteness and imageability. Finally, we carry out detailed statistical analyses of the augmented version of the original metaphor corpus, cross-matching the features of concreteness and imageability with others in the corpus such as parts of speech and dependency relations, in order to investigate in detail the use of such features in predicting whether a given expression is metaphorical or not.
Vorwort
(2014)
Following a welcome in Lithuanian and English to the guests and members on the occasion of the 10th anniversary of EFNIL, the history of this European language organization is sketched. A brief survey of the sociolinguistic themes treated at previous conferences and the state of the major projects is given, followed by an introduction (in German) to the general topic of the present conference. The importance that translation and interpretation have for European language diversity and the individual national languages, alongside foreign language education for all Europeans, is stressed.
Wortartikel
(2014)
Körper(-Darstellungen) im Reality-TV. Herstellung von Wirklichkeit im und über das Fernsehen hinaus
(2014)
This contribution deals with the various search, retrieval and selection processes required for production in a foreign language, to which DICONALE-online, an onomasiologically and conceptually oriented, bilingual-bilateral dictionary of verbs of contemporary Spanish and German, pays particular attention. The starting point of DICONALE is the unsatisfactory range of information offered by existing monolingual and bilingual learners' dictionaries for L2 output, which confirms the project team in the need to create a novel, user- and situation-defined online reference work. Two frames of reference form the basis for a complex, conceptually and frame-guided access path intended to assist users in searching for and selecting means of expression and in using them appropriately. The novelty of this dictionary project consists mainly in making an onomasiological-conceptual perspective usable for the foreign-language production process and combining it with semasiological access, which makes it possible to highlight the inter- and intralingual differences between the lexemes of a lexical-semantic (sub)paradigm. The aim of this contribution is therefore to put up for discussion the starting point as well as the theoretical and methodological foundations of DICONALE-online from the specific perspective of user and situation orientation, to present the individual access paths for the search and retrieval process, and to present what is offered for selection and appropriate usage from an inter- and intralingual perspective.
In recent minimalist work, it has been argued that C-agreement provides conclusive support for the following theoretical hypotheses (cf. Carstens 2003; van Koppen 2005; Haegeman & van Koppen 2012): (i) C hosts a separate set of phi-features, a parametric choice possibly linked to the V2 property; (ii) feature checking/valuation is accomplished under (closest) c-command (i.e. by the operation Agree, cf. Chomsky 2000 and subsequent work). This paper reviews the significance of C-agreement for syntactic theory and argues that certain systematic asymmetries between regular verbal agreement and complementizer agreement suggest that the latter does not result from operations that are part of narrow syntax. The case is based on the observation that at least in some Germanic varieties (most notably Bavarian), the realization of inflectional features in the C-domain is sensitive to adjacency effects and deletion of the finite verb in right node raising and comparatives. The fact that C may not carry inflection when the finite verb has been elided is taken to suggest that complementizer agreement does not involve a dependency between C and the subject, but rather between C and the finite verb (i.e. T). More precisely, it is argued that inflectional features present in the C-domain are added postsyntactically via a process of feature insertion (cf. e.g. Embick 1997; Embick & Noyer 2001; Harbour 2003) that creates a copy of T’s (valued) φ-set. It will then be shown that this account can also capture phenomena like first conjunct agreement (FCA) and external possessor agreement, which are often presented as crucial evidence of the syntactic nature of complementizer agreement (cf. van Koppen 2005; Haegeman & van Koppen 2012).
Joachim Telle zum Gedenken
(2014)
This paper presents the first release of the KiezDeutsch Korpus (KiDKo), a new language resource with multiparty spoken dialogues of Kiezdeutsch, a newly emerging language variety spoken by adolescents from multi-ethnic urban areas in Germany. The first release of the corpus includes the transcriptions of the data as well as a normalisation layer and part-of-speech annotations. In the paper, we describe the main features of the new resource and then focus on automatic POS tagging of informal spoken language. Our tagger achieves an accuracy of nearly 97% on KiDKo. While we did not succeed in further improving the tagger using ensemble tagging, we present our approach to using the tagger ensembles for identifying error patterns in the automatically tagged data.
Einleitende Bemerkungen
(2014)
Faktivität
(2014)
Kausale Konnektoren
(2014)
Konditionale Konnektoren
(2014)
Ebenen der Verknüpfung
(2014)
Large classes at universities (>1,600 students) create their own challenges for teaching and learning. Audience feedback is lacking, and fine-tuning of lectures, courses and exam preparation to address individual needs is very difficult to achieve. At RWTH Aachen University, a course concept and a knowledge-map learning tool were developed and evaluated, aimed at supporting individual students in preparing for exams in information science through theme-based exercises. The tool was grounded in the notion of self-regulated learning, with the goal of enabling students to learn independently.
This contribution describes the basic structure of the research project ‘Standardization in Diversity (SDiv). The case of German in Luxembourg 1795–1920’, funded from 2013 to 2016 by the Fonds National de la Recherche (Luxembourg) and the DFG. Further information is available on the project website at http://infolux.uni.lu/standardization.
We study the influence of information structure on the salience of subjective expressions for human readers. Using an online survey tool, we conducted an experiment in which we asked users to rate main and relative clauses that contained either a single positive or negative or a neutral adjective. The statistical analysis of the data shows that subjective expressions are more prominent in main clauses where they are asserted than in relative clauses where they are presupposed. A corpus study suggests that speakers are sensitive to this differential salience in their production of subjective expressions.
We compare several different corpus-based and lexicon-based methods for the scalar ordering of adjectives. Among them, we examine for the first time a low-resource approach based on distinctive-collexeme analysis that just requires a small predefined set of adverbial modifiers. While previous work on adjective intensity mostly assumes one single scale for all adjectives, we group adjectives into different scales, which is more faithful to human perception. We also apply the methods to both polar and non-polar adjectives, showing that not all methods are equally suitable for both types of adjectives.
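The modifier-based idea can be illustrated with a crude stand-in for distinctive-collexeme analysis: score each adjective by the smoothed share of its co-occurrences with end-of-scale rather than moderating adverbs, and sort (Python sketch; the counts and modifier sets are invented):

```python
def intensity_score(counts,
                    strong=frozenset({"absolutely", "totally"}),
                    mild=frozenset({"somewhat", "slightly"})):
    """Smoothed proportion of end-of-scale adverb uses for one adjective."""
    s = sum(counts.get(m, 0) for m in strong)
    w = sum(counts.get(m, 0) for m in mild)
    return (s + 1) / (s + w + 2)  # add-one smoothing for sparse counts


# invented co-occurrence counts from a hypothetical corpus
corpus_counts = {
    "good":    {"somewhat": 40, "slightly": 25, "absolutely": 3},
    "great":   {"somewhat": 8, "absolutely": 20, "totally": 12},
    "perfect": {"absolutely": 45, "totally": 18, "slightly": 1},
}

scale = sorted(corpus_counts, key=lambda a: intensity_score(corpus_counts[a]))
print(scale)  # ['good', 'great', 'perfect']
```

Distinctive-collexeme analysis proper replaces this raw proportion with an association statistic over the two modifier classes, but the resulting ordering is read off in the same way.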
Figura etymologica
(2014)
Euphemismus
(2014)
Dysphemismus
(2014)
Epipher
(2014)
Hyperbaton
(2014)
Accurate opinion mining requires the exact identification of the source and target of an opinion. To evaluate diverse tools, the research community relies on the existence of a gold standard corpus covering this need. Since such a corpus is currently not available for German, the Interest Group on German Sentiment Analysis decided to create such a resource and make it available to the research community in the context of a shared task. In this paper, we describe the selection of textual sources, development of annotation guidelines, and first evaluation results in the creation of a gold standard corpus for the German language.
The corpus-linguistic approach of the project »Korpusgrammatik« opens up new perspectives on our linguistic reality in general and on grammatical regularities in particular. The present volume explains how the standard can be investigated corpus-linguistically, how the project corpora are structured and made accessible in a corpus database, how an automatic query system tackles the variability of language and even makes it measurable, and finally where the limits of quantitative corpus analyses lie. Pilot studies indicate how the approach broadens our grammatical horizons and advances grammaticography.
Vorwort
(2014)
This contribution presents the international research network EuroGr@mm and the contrastive component of the internet platform ProGr@mm of the Institut für Deutsche Sprache in Mannheim. Chapter 2 addresses the various university and non-university target groups. The associated possibilities for application are shown in Chapter 3, drawing on practical experience gained with the learning platform in university teaching. Chapter 4 then examines a central area of grammar, word order, contrastively from a German-Hungarian perspective. The contribution closes with a summary and a brief extension towards typology (Chapter 5).
By evaluating two corpora containing linguistic data on spoken standard language usage (with a total of 770 speakers), the current range of variation of lexical stress in loanwords will be analyzed. In doing so, the focus will be on the age and background of the speakers to be able to document processes of linguistic change and regionalisms. Regarding the phenomenon studied here, it becomes apparent that more detailed and multicausal separate analyses are required to interpret the results conclusively, in spite of an overall trend that was at first convincing (and that would support the theoretical assumptions concerning the loanword's age and the source language influencing the rate of assimilation). The results of the individual analyses contradict the assumed “overall trend”. One of the corpora was collected by experienced field workers, while the other was collected by students. By comparing both corpora, some light can be shed onto the question as to what extent “undirected” and less rigidly collected data can support or complement more extensive and costly research projects.
Communication across all language barriers has long been a goal of humankind. In recent years, new technologies have enabled this at least partially. New approaches and different methods in the field of Machine Translation (MT) are continuously being improved, modified, and combined. Significant progress has already been achieved in this area; many automatic translation tools, such as Google Translate and Babelfish, can translate not only short texts, but also complete web pages in real time. More recently, advances have been made in the mobile area; Google's Translate app for Android and iOS, for example, can recognize and translate words within photographs taken by the mobile device (to translate a restaurant menu, for instance). Despite this progress, a “perfect” machine translation system seems to be an impossibility, because a machine translation system, however advanced, will always have some limitations. Human languages contain many irregularities and exceptions, and consequently go through a constant process of change, which is difficult to measure or to process automatically. This paper gives a short introduction to the state of the art of MT. It examines the following aspects: types of MT, the most conventional and widely developed approaches, and the advantages and disadvantages of these different paradigms.
Satzpräposition
(2014)
Bezugsnomen
(2014)