We investigate whether non-configurational languages, which display more word order variation than configurational ones, require more training data for a phenomenon to be parsed successfully. We perform a tightly controlled study comparing the dative alternation for English (a configurational language), German, and Russian (both non-configurational). More specifically, we compare the performance of a dependency parser when only canonical word order is present with its performance on data sets in which all word orders are present. Our results show that for all languages, canonical data is not only easier to parse; there is also no direct correspondence between the size of training sets containing free(er) word order variation and parsing performance.
Some 25 years ago, a large-scale repatriation of Russian Germans began. As a result, more than 2.5 million people who grew up in the USSR, Russia, or other post-Soviet states became German citizens with native or near-native command of the Russian language. The uncomfortable differences they exhibited in comparison to those who were supposed to accept them as equals, yet failed to do so, compelled them to search for self-designations that would accommodate their new identity and to bond together to form a new minority. The authors examine the attempts of Soviet/Russian Germans to redefine their ethnic identity in terms of not just blood but also language and culture, focusing on two particular cases: the use of the name Rusak in the internet forums of the repatriated immigrants; and the linguistic-cultural practices of the older generation of immigrants.
Feedback utterances are among the most frequent in dialogue. Feedback is also a crucial aspect of all linguistic theories that take social interaction involving language into account. However, determining communicative functions is a notoriously difficult task, both for human interpreters and for systems, as it involves an interpretative process that integrates various sources of information. Existing work on communicative function classification either comes from dialogue act tagging, where it is generally coarse-grained with respect to feedback phenomena, or is token-based and does not address the variety of forms that feedback utterances can take. This paper introduces an annotation framework, the dataset and the related annotation campaign (involving seven raters annotating nearly 6,000 utterances). We present its evaluation not merely in terms of inter-rater agreement but also in terms of the usability of the resulting reference dataset, both from a linguistic research perspective and from a more applied viewpoint.
Precise multimodal studies require precise synchronisation between audio and video signals. However, raw audio and the audio track of video recordings can be out of sync for several reasons. To re-synchronise them, a dynamic programming (DP) approach is presented here. Traditionally, DP is performed on the rectangular distance matrix that compares each value in signal A with each value in signal B. Previous work limited the search space using, for example, the Sakoe-Chiba band (Sakoe and Chiba, 1978); however, the overall size of the distance matrix remains identical. Here, a tunnel matrix and a corresponding DP algorithm are presented. The matrix stores only the distances between the two signals within a pre-specified bandwidth, and the computational cost is reduced accordingly. An example implementation demonstrates the functionality on artificial data and on data from real audio and video recordings.
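A minimal sketch of such a band-limited DP alignment (a simple absolute-difference distance and a Sakoe-Chiba-style window; the function name and parameters are illustrative, not the paper's implementation):

```python
def banded_dtw(a, b, band):
    """Dynamic-programming alignment cost between two 1-D signals,
    computed only within a band around the matrix diagonal. Only two
    rows are kept, so memory is O(len(b)) and time O(len(a) * band)."""
    n, m = len(a), len(b)
    INF = float("inf")
    prev = [INF] * (m + 1)
    prev[0] = 0.0                      # D[0][0] boundary condition
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        center = i * m // n            # band follows the diagonal
        for j in range(max(1, center - band), min(m, center + band) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cur[j] = d + min(prev[j], cur[j - 1], prev[j - 1])
        prev = cur
    return prev[m]
```

For identical signals the accumulated cost is zero; widening `band` trades speed for robustness against larger offsets.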
This paper presents newly developed guidelines for the prosodic annotation of German as a consensus system agreed upon by German intonologists. The DIMA system is rooted in the framework of autosegmental-metrical phonology. One important goal of the consensus is to make exchanging data between groups easier, since German intonation is currently annotated according to different models. To this end, we aim to provide guidelines that are easy to learn. The guidelines were evaluated by running an inter-annotator reliability study on three different speech styles (read speech, monologue and dialogue). The overall high κ values between 0.76 and 0.89 (depending on the speech style) show that the DIMA conventions can be applied successfully.
In my article I argue for the need for a grammar of spoken language. It would have the same functions as the grammar of written language: describing and explaining the fundamental units of spoken language and their features, and describing the composition of those units and their combination. The basic units in the grammar of spoken language can be named as: the sound, the word, the functional unit, the conversational turn, and the conversation itself. Furthermore, the central characteristics of spoken language and their impact on grammar have to be taken into account: interactivity, multimodality, processability, and great variability. After presenting my concepts, I discuss three alternative conceptions of a grammar of spoken language: online syntax, construction grammar, and multimodal grammar. The article concludes by discussing the role of spoken language grammar in language teaching and foreign language teaching.
Ph@ttSessionz and Deutsch heute are two large German speech databases. They were created for different purposes: Ph@ttSessionz to test Internet-based recordings and to adapt speech recognizers to the voices of adolescent speakers, Deutsch heute to document regional variation in German. The databases differ in their recording technique, the selection of recording locations and speakers, elicitation mode, and data processing.
In this paper, we outline how the recordings were performed, how the data was processed and annotated, and how the two databases were imported into a single relational database system. We present acoustical measurements on the digit items of both databases. Our results confirm that the elicitation technique affects the speech produced, that f0 is quite comparable despite different recording procedures, and that large speech technology databases with suitable metadata may well be used for the analysis of regional variation of speech.
One was a distinguished natural scientist and engineer, the other a self-taught scientist vilified as a conman: Christian Gottlieb Kratzenstein (1723–1795) and Wolfgang von Kempelen (1734–1804). Some of the former’s postulations on human physiology and the articulation of speech proved wrong in later years; most of the latter’s theories are considered applicable even today. The perhaps most contrasting approaches to speech synthesis during the 18th century are linked to their names. There are many essential differences between their approaches, which show that these two researchers were not only representatives of different schools of thought but also representatives of two different scientific eras: a speculative and philosophical approach on the one hand versus an empirical and logical approach on the other. Both Kratzenstein and Kempelen published books on their research. But while the “Tentamen” [4] of the physician Kratzenstein remains rather vague and imprecise in its descriptions of vowel production and synthesis, the “Mechanismus” [8] of the engineer Kempelen shows much more precision and correctness in almost every respect of human speech and language. The goal of this paper is to discuss the differences between these two contemporaneous researchers on speech synthesis and to compare their theories with present-day findings.
This paper shows how connectors can be used as linguistic means of realizing two kinds of conversational activities, namely intersubjective and conversation-organizing procedures. A speaker draws on intersubjective procedures to establish, in cooperation with the interlocutor, a shared background of knowledge (common ground). Through conversation-organizing procedures, the speaker intervenes in the thematic structure of the interaction. This paper examines the realization of these two conversational procedures in the communicative genre of the autobiographical interview. This genre is, in my view, particularly well suited to such an analysis because it is characterized by a relatively sharp separation of conversational roles, which makes the interaction easier to trace. An autobiographical interview involves two subjects: the interviewee, who is regarded as the bearer of knowledge, and the interviewer, whose role as moderator is to facilitate the transfer of knowledge. The interviewer thus faces a twofold task: he must compensate for the initial asymmetry of knowledge while also being responsible for organizing the conversation. Using the conjunction und as an example, the following shows how the use of connectors can contribute to accomplishing these two communicative tasks.
The present study introduces articulography, the measurement of the position of the tongue and lips during speech, as a promising method for the study of dialect variation. By using generalized additive modeling to analyze articulatory trajectories, we are able to reliably detect aggregate group differences, while simultaneously taking into account the individual variation across dozens of speakers. Our results on the basis of Dutch dialect data show clear differences between the southern and the northern dialect with respect to tongue position, with a more frontal tongue position in the dialect from Ubbergen (in the southern half of the Netherlands) than in the dialect of Ter Apel (in the northern half of the Netherlands). Thus articulography appears to be a suitable tool to investigate structural differences in pronunciation at the dialect level.
This article investigates the grammatical form of two-part nominal groups with a quantifying nominal first element (Nquant) and a quantified nominal second element expanded by an attributive adjective (Adj+N), e.g. (mit) einem Glas kaltem Wasser ('with a glass of cold water'). In present-day German, such constructions show variation: (mit) einem Glas kalten Wassers, (mit) einem Glas kaltes Wasser. Altogether, five construction types can be distinguished. On the basis of a collection of examples from literary prose texts from the 17th to the 20th century, the study focuses in particular on the quantitative distribution of constructions with the genitive versus case agreement. Unlike Nquant+N groups without an attributive adjective in the second element, the data show a marked rise in the share of the genitive construction from the 17th to the 19th century and only a slight decline in the 20th. This finding agrees only in part with the accounts in the standard literature. One possible explanation for the quantitative development of the genitive construction in literary language lies in the influence of normative grammars.
This paper presents some results from an online survey regarding the functions and presentation of lexicographically compiled and automatically compiled corpus citations in a general monolingual e-dictionary of German (elexiko). Our findings suggest that dictionary users have a clear understanding of the functions of corpus citations in lexicography.
Using the Google Ngram Corpora for six different languages (including two varieties of English), a large-scale time series analysis is conducted. It is demonstrated that diachronic changes of the parameters of the Zipf–Mandelbrot law (and the parameter of the Zipf law, all estimated by maximum likelihood) can be used to quantify and visualize important aspects of linguistic change (as represented in the Google Ngram Corpora). The analysis also reveals that there are important cross-linguistic differences. It is argued that the Zipf–Mandelbrot parameters can be used as a first indicator of diachronic linguistic change, but more thorough analyses should make use of the full spectrum of different lexical, syntactical and stylometric measures to fully understand the factors that actually drive those changes.
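To illustrate the kind of estimation involved, the following sketch fits the Zipf–Mandelbrot parameters a and b to a rank-frequency list by maximizing the multinomial log-likelihood over a coarse grid (a simplification of the maximum-likelihood estimation mentioned above; the function name and grid ranges are illustrative):

```python
import math

def fit_zipf_mandelbrot(counts):
    """Fit p(r) proportional to (r + b)^(-a) to rank-frequency counts
    (rank 1 first) by maximizing the log-likelihood over a grid."""
    ranks = range(1, len(counts) + 1)

    def loglik(a, b):
        z = sum((r + b) ** -a for r in ranks)   # normalizing constant
        return sum(c * (-a * math.log(r + b) - math.log(z))
                   for r, c in zip(ranks, counts))

    grid = ((a / 100, b / 4)
            for a in range(60, 201, 2)          # a in 0.60 .. 2.00
            for b in range(0, 13))              # b in 0.00 .. 3.00
    return max(grid, key=lambda p: loglik(*p))
```

On a synthetic rank-frequency list generated from known parameters, the grid maximizer recovers values close to the generating ones; a real analysis would refine the grid or use a proper numerical optimizer.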
When formulating a request for an object, speakers can choose among different grammatical resources that would all serve the overall purpose. This paper examines the social contexts indexed and created by the choice of the turn format can I have x to request a shared good (the pepper grinder, a tissue from a box on the table, etc.) in British English informal interaction. The analysis is based on a video corpus of approximately 25 h of everyday interaction among family and friends. In its home environment, a request in the format can I have x treats the other as being in control over the relevant material object, a control that is the contingent outcome of ongoing courses of action. This contingent control over a shared good produces an obligation to make it available. This analysis is supported by an examination of similarly formatted request turns in other languages, of can I have x in another interactional environment (after a relevant offer has been made) in British English, and of deviant cases. The results highlight the intimate connection of request format selection to the present engagements of (prospective) request recipients.
Recipient design is a key constituent of intersubjectivity in interaction. The recipient design of turns is informed by prior knowledge about, and shared experience with, recipients. Designing turns to be maximally effective for the particular recipient(s) is crucial for accomplishing intersubjectively coordinated action. This paper reports on a specific pragmatic structure of recipient design, counterfactual recipient design, and how it impinges on intersubjectivity in interaction. Based on an analysis of video recordings of driving school lessons in German, two kinds of counterfactual recipient design of instructors' requests are distinguished: pedagogic and egocentric turn design. Counterfactual pedagogic turn design is used strategically to diagnose student skills and to create opportunities for corrective instructions. Egocentric turn design rests on private, non-shared knowledge of the instructor. Egocentrically designed turns imply expectations of how to comply with requests which cannot be recovered by the student and which lead to a breakdown of intersubjective cooperation. This paper identifies practices, sources and interactional consequences of these two kinds of counterfactual recipient design. In addition, the study enhances our understanding of recipient design in at least three ways: it shows that recipient design concerns not only referential and descriptive practices but also the indexing of intelligible projections of next actions; it highlights the productive, other-positioning effects of recipient design; and it argues that recipient design should be analyzed in terms of temporally extended interactional trajectories, linking turn-constructional practices to interactional histories and consecutive trajectories of joint action.
This article reports on ongoing work on a new version of the metadata framework Component Metadata Infrastructure (CMDI), which is central to the CLARIN infrastructure. Version 1.2 introduces a number of important changes based on the experience gathered in the last five years of intensive use of CMDI by the digital humanities community, addressing problems encountered but also introducing new functionality. Next to the consolidation of the model's structure and schema sanity, new means for lifecycle management have been introduced to combat the observed proliferation of components; a new mechanism for the use of external vocabularies will contribute to more consistent use of controlled values; and cues for tools will allow improved presentation of metadata records to human users. The feature set has been frozen and approved, and the infrastructure is now entering a transition phase in which all tools and data need to be migrated to the new version.
This article explores the relationship between the complexity of political argumentation processes and the diversification of the semantics of keywords whose meaning is contested in the argumentation process and unfolded in numerous facets. The object of study is the use of "Ökologie" ('ecology') in the arbitration talks on the Stuttgart 21 rail project. Unlike previous analyses of semantic struggles, the focus is less on how one party semanticizes an expression in opposition to other parties. Rather, it is shown how semantic diversification and ambiguity of "Ökologie" arise in the expert argumentation process and what communicative effects this has for the possibility of citizen participation. Three practices are identified by which the interaction participants themselves react to semantic diversification and ambiguity and attempt to make the expression unambiguously interpretable and the quaestio decidable: imputations of strategy, popularizations, and populism. The interaction analyses show that these practices themselves reproduce the very problem they are meant to solve.
This article investigates the extent to which the frequency of a word is related to its correct orthographic spelling. Are frequent words spelled correctly more often and earlier? And what role does the orthographic regularity of word structures play? Using automated analysis methods from the large-scale study "Automatisierte Rechtschreibdiagnostik" (Fay/Berkling/Stüker 2012), more than 1,000 student texts from grades 2 to 8 are examined. As a result, some assumptions that had previously rested mainly on experience from language-teaching practice are confirmed empirically, while others are differentiated and extended with respect to specific spelling phenomena.
In order to talk about dishes and drinks, we name them. Many names provide information about ingredients or preparation, for example Geschmortes Lamm mit rosa Pfeffer ('braised lamb with pink pepper'). This article is concerned with names of a different, special kind, for example Ich träume von Casablanca, Armer Ritter, and Studentenkuss. I call them costumed names because they completely disguise what they denote: we must already know, or still find out, what the dish actually is. The article analyzes this special type of name morphologically and semantically, revealing the structures and underlying meanings of the names.
Contents:
1. Michal Křen: Recent Developments in the Czech National Corpus, p. 1
2. Dan Tufiş, Verginica Barbu Mititelu, Elena Irimia, Stefan Dumitrescu, Tiberiu Boros, Horia Nicolai Teodorescu: CoRoLa Starts Blooming – An update on the Reference Corpus of Contemporary Romanian Language, p. 5
3. Sebastian Buschjäger, Lukas Pfahler, Katharina Morik: Discovering Subtle Word Relations in Large German Corpora, p. 11
4. Johannes Graën, Simon Clematide: Challenges in the Alignment, Management and Exploitation of Large and Richly Annotated Multi-Parallel Corpora, p. 15
5. Stefan Evert, Andrew Hardie: Ziggurat: A new data model and indexing format for large annotated text corpora, p. 21
6. Roland Schäfer: Processing and querying large web corpora with the COW14 architecture, p. 28
7. Jochen Tiepmar: Release of the MySQL-based implementation of the CTS protocol, p. 35
This article deals with phraseologisms that have entered the German lexicon since the 1990s and are presented in the Neologismenwörterbuch (dictionary of neologisms, www.owid.de). It describes, among other things, functions of phraseologisms such as the filling of naming gaps and the intensification of expression; processes of emergence such as metaphorization and elliptical shortening; word-formation processes based on phraseologisms; and influences from English.
The IMS Open Corpus Workbench (CWB) software currently uses a simple tabular data model with proven limitations. We outline and justify the need for a new data model to underlie the next major version of CWB. This data model, dubbed Ziggurat, defines a series of types of data layer to represent different structures and relations within an annotated corpus; each such layer may contain variables of different types. Ziggurat will allow us to gradually extend and enhance CWB’s existing CQP-syntax for corpus queries, and also make possible more radical departures relative not only to the current version of CWB but also to other contemporary corpus-analysis software.
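A toy sketch of what such a layered data model might look like: a primary token layer with typed annotation layers attached to spans of it. The class names and fields are purely illustrative assumptions, not Ziggurat's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class PrimaryLayer:
    """Base layer holding the corpus tokens."""
    tokens: list

@dataclass
class AnnotationLayer:
    """A layer of variables attached to (start, end) spans of a base layer."""
    base: PrimaryLayer
    name: str
    values: dict = field(default_factory=dict)  # (start, end) -> value

    def annotate(self, start, end, value):
        self.values[(start, end)] = value

    def spans_with(self, value):
        return [span for span, v in self.values.items() if v == value]

# Example: a token layer with a part-of-speech layer on top
text = PrimaryLayer(tokens=["the", "old", "house"])
pos = AnnotationLayer(base=text, name="pos")
for i, tag in enumerate(["DET", "ADJ", "NOUN"]):
    pos.annotate(i, i + 1, tag)
```

Because each layer only references spans of its base layer, further layers (lemmas, chunks, dependencies) can be stacked without changing the token layer itself.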
With an increasing amount of text data available it is possible to automatically extract a variety of information about language. One way to obtain knowledge about subtle relations and analogies between words is to observe words which are used in the same context. Recently, Mikolov et al. proposed a method to efficiently compute Euclidean word representations which seem to capture subtle relations and analogies between words in the English language. We demonstrate that this method also captures analogies in the German language. Furthermore, we show that we can transfer information extracted from large non-annotated corpora into small annotated corpora, which are then, in turn, used for training NLP systems.
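The analogy arithmetic behind such word representations can be illustrated with toy vectors (hand-crafted here so the analogy holds by construction; real experiments use embeddings trained on large corpora):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(a, b, c, vecs):
    """Solve 'a is to b as c is to ?' via vec(b) - vec(a) + vec(c)."""
    target = [x - y + z for x, y, z in zip(vecs[b], vecs[a], vecs[c])]
    candidates = (w for w in vecs if w not in {a, b, c})
    return max(candidates, key=lambda w: cosine(target, vecs[w]))

# Toy 2-d embeddings; dimensions roughly encode (royalty, gender)
vecs = {
    "Mann":    [0.0,  1.0],
    "Frau":    [0.0, -1.0],
    "König":   [1.0,  1.0],
    "Königin": [1.0, -1.0],
    "Haus":    [0.2,  0.1],
}
```

Here König - Mann + Frau lands on Königin, mirroring the German analogies the paper reports for learned representations.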
The public acceptance and impact of research in the natural and engineering sciences depends fundamentally on whether its goals and findings can be communicated to the public. Yet the content of current research projects is often difficult for a lay audience to access and understand. With the aim of improving the societal discussion of research in the natural and engineering sciences, the project PopSci – Understanding Science empirically examines and evaluates an important sector of popular-science discourse in Germany. To this end, we identify the linguistic features of German popular-science texts using corpus-based methods and investigate their effect on lay readers' cognitive processing of the texts. We use pre- and post-tests of knowledge, and we also record readers' eye movements while they read popular-science texts. From this combination of methods we attempt to derive initial recommendations for improving the linguistic style and knowledge representation of popular-science texts.
This article reports on the ongoing CoRoLa project, which aims at creating a reference corpus of contemporary Romanian (from 1945 onwards), open for free online exploitation by researchers in linguistics and language processing, teachers of Romanian, and students. We are investing serious effort in persuading large publishing houses and other owners of IPR on relevant language data to join us and contribute selections of their text and speech repositories to the project. The CoRoLa project is coordinated by two computer science institutes of the Romanian Academy, but enjoys the cooperation of, and consulting from, professional linguists from other institutes of the Romanian Academy. We foresee a written component of the corpus of more than 500 million word forms and a speech component of about 300 hours of recordings. The entire collection of texts (covering all functional styles of the language) will be pre-processed and annotated at several levels, and also documented with standardized metadata. The pre-processing includes cleaning the data and harmonising the diacritics, sentence splitting, and tokenization. Annotation will include morpho-lexical tagging and lemmatization in a first stage, followed by syntactic, semantic and discourse annotation in a later stage.
In this paper, I present the COW14 tool chain, which comprises a web corpus creation tool called texrex, wrappers for existing linguistic annotation tools, and an online query software called Colibri2. Through detailed descriptions of the implementation and systematic evaluations of the software's performance on different types of systems, I show that the COW14 architecture is capable of handling the creation of corpora of up to at least 100 billion tokens. I also introduce our running demo system, which currently serves corpora of up to roughly 20 billion tokens in Dutch, English, French, German, Spanish, and Swedish.
In a project called "A Library of a Billion Words" we needed an implementation of the CTS protocol capable of handling a text collection containing at least one billion words. Because the existing solutions did not work at this scale or were still in development, I started an implementation of the CTS protocol using methods that MySQL provides. Last year we published a paper introducing a prototype with the core functionalities that was not yet compliant with the CTS specifications (Tiepmar et al., 2013). The purpose of this paper is to describe and evaluate the MySQL-based implementation now that it fulfils specification version 5.0 rc.1, and to mark it as finished and ready to use. Further information, online instances of CTS for all described datasets, and binaries can be accessed via the project's website.
The availability of large multi-parallel corpora offers an enormous wealth of material to contrastive corpus linguists, translators and language learners, if we can exploit the data properly. Necessary preparation steps include sentence and word alignment across multiple languages. Additionally, linguistic annotation such as part-of-speech tagging, lemmatisation, chunking, and dependency parsing facilitates precise querying of linguistic properties and can be used to extend word alignment to sub-sentential groups. Such highly interconnected data is stored in a relational database to allow for efficient retrieval and linguistic data mining, which may include the statistics-based selection of good example sentences. The varying information needs of contrastive linguists require a flexible linguistic query language for ad hoc searches. Such queries, in the format of generalised treebank query languages, will be automatically translated into SQL queries.
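A minimal sketch of such a query-to-SQL translation (the token table schema and the dictionary-based query format are invented for illustration; real treebank query languages are far richer):

```python
def query_to_sql(constraints):
    """Translate a sequence of adjacent token constraints, a toy stand-in
    for a treebank query like [pos="ADJ"] [lemma="Haus"], into SQL over a
    hypothetical table token(sent_id, position, word, lemma, pos)."""
    froms, wheres, params = [], [], []
    for i, constraint in enumerate(constraints):
        froms.append(f"token t{i}")
        if i > 0:
            # adjacency: token i directly follows token i-1 in the sentence
            wheres.append(f"t{i}.sent_id = t0.sent_id")
            wheres.append(f"t{i}.position = t{i-1}.position + 1")
        for attr, value in constraint.items():
            wheres.append(f"t{i}.{attr} = ?")
            params.append(value)
    sql = "SELECT t0.sent_id, t0.position FROM " + ", ".join(froms)
    if wheres:
        sql += " WHERE " + " AND ".join(wheres)
    return sql, params
```

The generated statement is an ordinary self-join and runs unchanged against any SQL backend, e.g. an in-memory SQLite database for testing.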
The Czech National Corpus (CNC) is a long-term project striving for extensive and continuous mapping of the Czech language. This effort results mostly in the compilation and maintenance of, and free public access to, a range of corpora, with the aim of offering diverse, representative, high-quality data for empirical research, mainly in linguistics. Since 2012, the CNC has been officially recognized as a research infrastructure funded by the Czech Ministry of Education, Youth and Sports, which has caused a recent shift towards user service-oriented operation of the project. All project-related resources are now integrated into the CNC research portal at http://www.korpus.cz/. Currently, the CNC has an established and growing community of more than 4,500 active users in the Czech Republic and abroad, who submit almost 1,900 queries per day through one of the user interfaces. The paper discusses the main CNC objectives for each particular domain, aiming at an overview of the current situation supplemented by an outline of future plans.
Learning from Errors. Systematic Analysis of Complex Writing Errors for Improving Writing Technology
(2015)
In this paper, we describe ongoing research on writing errors, with the ultimate goal of developing error-preventing editing functions in word processors. Drawing on state-of-the-art research on errors carried out in various fields, we propose the application of a general concept of action slips as introduced by Norman. We demonstrate the feasibility of this approach using a large corpus of writing errors in published texts. The concept of slips considers both the process and the product: some failure in a procedure results in an error in the product, i.e., one that is visible in the written text. In order to develop preventive functions, we need to determine the causes of such visible errors.
Natural language processing tools are mostly developed for and optimized on newspaper texts, and often show a substantial performance drop when applied to other types of texts such as Twitter feeds, chat data, or Internet forum posts. We explore a range of easy-to-implement methods of adapting existing part-of-speech taggers to improve their performance on Internet texts. Our results show that these methods can improve tagger performance substantially.
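One generic, easy-to-implement adaptation of this kind, not necessarily among the methods the paper evaluates, is to normalize Internet-specific tokens to placeholders before tagging, so a newspaper-trained tagger sees a more familiar vocabulary. The rules and placeholder names below are illustrative assumptions:

```python
import re

# Hypothetical normalization rules for Internet text
RULES = [
    (re.compile(r"https?://\S+"), "<URL>"),
    (re.compile(r"@\w+"), "<USER>"),
    (re.compile(r"#\w+"), "<HASHTAG>"),
    (re.compile(r"[:;]-?[)(DP]"), "<EMOTICON>"),
]

def normalize(text):
    """Replace out-of-vocabulary Internet tokens with placeholder tokens
    before handing the text to a part-of-speech tagger."""
    for pattern, placeholder in RULES:
        text = pattern.sub(placeholder, text)
    return text
```

The tagger can then be given a fixed tag for each placeholder, or the placeholders can simply reduce the rate of unknown words.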
In this article, we explore the feasibility of extracting suitable and unsuitable food items for particular health conditions from natural language text. We refer to this task as conditional healthiness classification. For that purpose, we annotate a corpus extracted from forum entries of a food-related website. We identify different relation types that hold between food items and health conditions going beyond a binary distinction of suitability and unsuitability and devise various supervised classifiers using different types of features. We examine the impact of different task-specific resources, such as a healthiness lexicon that lists the healthiness status of a food item and a sentiment lexicon. Moreover, we also consider task-specific linguistic features that disambiguate a context in which mentions of a food item and a health condition co-occur and compare them with standard features using bag of words, part-of-speech information and syntactic parses. We also investigate in how far individual food items and health conditions correlate with specific relation types and try to harness this information for classification.
Social perception studies have revealed that smiling individuals are perceived more favourably on many communion dimensions in comparison to nonsmiling individuals. Research on gender differences in smiling habits has shown that women smile more than men. In our study, we investigated this phenomenon further and hypothesised that women perceive smiling individuals as more honest than men do. An experiment conducted in seven countries (China, Germany, Mexico, Norway, Poland, Republic of South Africa and USA) revealed that gender may influence the perception of honesty in smiling individuals. We compared ratings of honesty made by male and female participants who viewed photos of smiling and nonsmiling people. While men and women did not differ in their ratings of honesty of nonsmiling individuals, women assessed smiling individuals as more honest than men did. We discuss these results from a social norms perspective.
Pogled u e-leksikografiju (A Look at E-lexicography)
(2015)
The paper surveys the basic concepts and classifications in the field of e-lexicography. It presents a classification of e-dictionaries, describes the lexicographic process of compiling an e-dictionary, and reviews the most widespread dictionary writing systems (DWS) and corpus query systems (CQS). As an example of good practice, the online dictionary elexiko (Institute for the German Language, Mannheim) is described in more detail: its aims and purpose, its theoretical and methodological foundations, its modules, and its possibilities of use. As a possible basis for a corpus-based e-dictionary of Croatian in line with the most recent lexicographic (and more generally linguistic) theories and practices, the paper presents work on an online lexical-semantic repository of Croatian (a database of semantic frames, image schemas, cognitive primitives, and lexical units) within the project Repozitorij metafora hrvatskoga jezika (Repository of Croatian Metaphors).
We present a quantitative approach to disambiguating flat morphological analyses and producing more deeply structured analyses. Based on existing morphological segmentations, possible combinations of resulting word trees for the next level are filtered first by criteria of linguistic plausibility and then by weighting procedures based on the geometric mean. The frequencies for weighting are derived from three different sources (counts of morphs in a lexicon, counts of largest constituents in a lexicon, counts of token frequencies in a corpus) and can be used either to find the best analysis on the level of morphs or on the next higher constituent level. The evaluation shows that for this task corpus-based frequency counts are slightly superior to counts of lexical data.
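The weighting step described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the morphs, frequency counts, and function names are invented for the example, and the score is simply the geometric mean of the parts' frequencies, with unseen morphs backed off to a count of 1.

```python
import math

# Hypothetical frequency counts for German morphs (illustrative only; in the
# paper these come from a lexicon, largest-constituent counts, or a corpus).
MORPH_FREQ = {"haus": 500, "tür": 300, "haustür": 20, "schloss": 150}

def geometric_mean_weight(analysis, freqs):
    """Score a candidate segmentation (a tuple of morphs) by the geometric
    mean of its parts' frequencies; unseen morphs receive a count of 1."""
    counts = [freqs.get(morph, 1) for morph in analysis]
    return math.exp(sum(math.log(c) for c in counts) / len(counts))

def best_analysis(candidates, freqs):
    """Among linguistically plausible candidates, pick the highest-weighted one."""
    return max(candidates, key=lambda a: geometric_mean_weight(a, freqs))
```

For a word like "Haustürschloss", the two-way split ("haustür", "schloss") scores sqrt(20 x 150), roughly 55, while the three-way split ("haus", "tür", "schloss") scores (500 x 300 x 150)^(1/3), roughly 282, so the three-way analysis wins under these invented counts. The geometric mean keeps scores comparable across segmentations of different lengths, which an arithmetic mean or a plain product would not.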
This paper analyses positioning practices that the producers of a talk show use to present their guests to the television audience as relevant interlocutors on the topic of "tax evasion by celebrities". It examines how the show's producers manage, already during the guests' first introduction, to present them as "prototypical representatives" and to position them in relation to one another through the interplay of an off-screen voice and the camera work. Of the talk show's five participants in total, two of these first introductions are analysed in detail. They concern the presentation of two guests who stand in a clearly antagonistic relationship to each other and who are introduced in immediate succession. On the basis of all five guest presentations, which we have reconstructed in detail but unfortunately cannot also present here for reasons of space, a structured web of positionings becomes apparent. At the centre of this web lies the thematic and personal "antagonism" we reconstructed; at its periphery, four representatives of socially relevant positions on the show's topic are arranged: representatives of the judiciary, of politics, of everyday morality, and of psychology and theology. Theoretically, the analyses are grounded in multimodal conceptions of positioning and recipient design; methodologically, they follow multimodal interaction analysis.
We present studies using the 2013 log files from the German version of Wiktionary. We investigate several lexicographically relevant variables and their effect on look-up frequency: Corpus frequency of the headword seems to have a strong effect on the number of visits to a Wiktionary entry. We then consider the question of whether polysemic words are looked up more often than monosemic ones. Here, we also have to take into account that polysemic words are more frequent in most languages. Finally, we present a technique to investigate the time-course of look-up behaviour for specific entries. We exemplify the method by investigating influences of (temporary) social relevance of specific headwords.
Klaus Fischer / Fabio Mollica (eds.): Valenz, Konstruktion und Deutsch als Fremdsprache [review]
(2015)
Two very reliable influences on eye fixation durations in reading are word frequency, as measured by corpus counts, and word predictability, as measured by cloze norming. Several studies have reported strictly additive effects of these 2 variables. Predictability also reliably influences the amplitude of the N400 component in event-related potential studies. However, previous research suggests that while frequency affects the N400 in single-word tasks, it may have little or no effect on the N400 when a word is presented with a preceding sentence context. The present study assessed this apparent dissociation between the results from the 2 methods using a coregistration paradigm in which the frequency and predictability of a target word were manipulated while readers’ eye movements and electroencephalograms were simultaneously recorded. We replicated the pattern of significant, and additive, effects of the 2 manipulations on eye fixation durations. We also replicated the predictability effect on the N400, time-locked to the onset of the reader’s first fixation on the target word. However, there was no indication of a frequency effect in the electroencephalogram record. We suggest that this pattern has implications both for the interpretation of the N400 and for the interpretation of frequency and predictability effects in language comprehension.
Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form "The opposite of black is white/yellow/nice." Replicating previous work in young adults, results showed a target-related P300 for the expected antonym ("white"; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, "nice," versus the incongruous associated condition, "yellow"). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that, at both a neurophysiological and a functional level, the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance.
This article examines a radio episode in which two hosts perform a heterosexual coming out on the occasion of International Coming Out Day (11 October). From the perspective of ethnomethodologically informed conversation analysis, it studies a collection of coming-out occurrences, which makes it possible not only to identify a recurrent sequential format and the way it contributes to the effectiveness of the practice, but also to reflect on how the practice can be used in different social contexts, in particular mediated and media ones. Specifically, the article shows how the practice serves a radio programme devoted to coming out and prepares the transition to the treatment of homosexuality on the radio. Drawing on a video recording of the hosts' work in the radio studio, the article describes how the theme of International Coming Out Day is fabricated and orchestrated both behind the scenes and on the air. In doing so, it demonstrates what conversation analysis can contribute to the treatment of coming out in gender studies, where the practice is widely discussed but rarely analysed on the basis of documented occurrences. The article thus revisits the epistemology of the closet dear to Eve Sedgwick, proposing an anatomy of coming out in a mediated context that sheds light not only on its epistemic stakes but also on those of normativisation, publicisation and spectacularisation.
To optimize the sharing and reuse of existing data, many funding organizations now require researchers to specify a management plan for research data. In such a plan, researchers are supposed to describe the entire life cycle of the research data they are going to produce, from data creation to formatting, interpretation, documentation, short-term storage, long-term archiving and data re-use. To support researchers with this task, we built DMPTY, a wizard that guides researchers through the essential aspects of managing data, elicits information from them, and finally, generates a document that can be further edited and linked to the original research proposal.
In a previous article (Faaß et al., 2012), a first attempt was made at documenting and encoding morphemic units of two South African Bantu languages, i.e. Northern Sotho and Zulu, with the aim of describing and storing the morphemic units of these two languages in a single relational database, structured as a hierarchical ontology. As a follow-up, the current article describes the implementation of our part-of-speech ontology. We give a detailed description of the morphemes and categories contained in the database, highlighting the need and reasons for a flexible ontology which will provide for both language specific and general linguistic information. By giving a detailed account of the methodology for the population of the database, we provide linguists from other Bantu languages with a road map for extending the database to also include their languages of specialization.
This volume brings together the papers presented at the 9th Hildesheim Evaluation and Retrieval Workshop (HIER), held on 9 and 10 July 2015 at the University of Hildesheim. The HIER workshop series began in 2001 with the aim of presenting and discussing the research results of information science at Hildesheim. By now, cooperation partners from other institutions regularly take part as well, which we very much welcome. HIER also provides a forum for system demonstrations and practice-oriented contributions.