In this paper, we report on an effort to develop a gold standard for the intensity ordering of subjective adjectives. Rather than derive a complete order from the mean scores of human ratings alone, we take into account the extent to which assessors consistently rate pairs of adjectives relative to each other. We show that several available automatic methods for producing polar intensity scores yield results that correlate well with our gold standard, and we discuss some conceptual questions surrounding the notion of polar intensity.
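The pairwise-consistency idea can be sketched minimally as follows; the adjectives, ratings and the 2/3 agreement threshold are invented for illustration and are not taken from the study.

```python
from itertools import combinations

# Hypothetical assessor ratings (intensity on a 1-10 scale); the
# adjectives and scores below are invented for illustration.
ratings = {
    "assessor_1": {"good": 5, "great": 7, "fantastic": 9},
    "assessor_2": {"good": 4, "great": 8, "fantastic": 9},
    "assessor_3": {"good": 6, "great": 7, "fantastic": 8},
}

def pair_consistency(ratings):
    """For each adjective pair, the fraction of assessors who rank the
    pair in the majority direction (ties are ignored)."""
    adjectives = sorted(next(iter(ratings.values())))
    result = {}
    for a, b in combinations(adjectives, 2):
        votes = [r[a] - r[b] for r in ratings.values() if r[a] != r[b]]
        if not votes:
            continue
        stronger = sum(1 for v in votes if v > 0)
        result[(a, b)] = max(stronger, len(votes) - stronger) / len(votes)
    return result

consistency = pair_consistency(ratings)

# Only pairs that assessors order consistently enter the gold-standard
# ordering; inconsistent pairs are left unordered.
gold_pairs = {pair for pair, c in consistency.items() if c >= 2 / 3}
```

A complete order would fall out only if the consistent pairs happen to form a total order; otherwise the gold standard stays partial, which is precisely the point of preferring pairwise consistency over mean scores.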
Patterns pertaining to 'strong' DMPs and scope in presentational there-sentences (henceforth: PTSs) have received much attention, and many attempts have been made to derive them. Building on Heim's (1987) account, this paper proposes a novel analysis based on the encoding of temporal reference and on general assumptions concerning the nature of the interface between the computational system of syntax (CS) and the systems of sound and meaning (Chomsky 1999).
This contribution presents an XML schema for annotating a high-level narratological category: speech, thought and writing representation (ST&WR). It focusses on two aspects: firstly, the original schema is presented as an example of the challenge of encoding a narrative feature in a structured and flexible way; secondly, ways of adapting this schema to the TEI are considered, in order to make it usable for other, TEI-based projects.
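To illustrate what inline annotation of this kind might look like, the sketch below builds a toy ST&WR-style element; the element and attribute names (`stwr`, `medium`, `type`) are invented for illustration and do not reproduce the actual schema described above.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical example of marking a stretch of direct thought
# inside a sentence element.
sentence = ET.Element("s")
sentence.text = "She thought: "
rep = ET.SubElement(sentence, "stwr", {"medium": "thought", "type": "direct"})
rep.text = "I must leave now."

xml_string = ET.tostring(sentence, encoding="unicode")
# -> <s>She thought: <stwr medium="thought" type="direct">I must leave now.</stwr></s>
```

An adaptation to the TEI would map such elements onto existing TEI mechanisms (e.g. spans with typed attributes) rather than introduce new element names.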
This paper discusses the advantages and disadvantages of combining automatically derived information and lexicographically interpreted information in online dictionaries, namely elexiko, a hypertext dictionary and lexical information system of contemporary German (http://www.owid.de/elexiko_/index.html), and the DWDS, a digital dictionary of 20th-century German (http://www.dwds.de). Examples of automatically derived information (e.g. automatically extracted citations from the underlying corpus, lists of paradigmatic relations) and lexicographically compiled information (e.g. information on paradigmatic partners) are provided and evaluated, reflecting on the need to develop guidelines as to how computerised information and lexicographically interpreted information may be combined profitably in online reference works.
Digital or electronic lexicography has gained in importance in recent years, as can be seen in the growing list of publications in this field. The OBELEX bibliography (http://www.owid.de/obelex/engl) consolidates the research contributions in this field and makes them searchable by different criteria. The idea for OBELEX originated in the context of the dictionary portal OWID of the Institute for German Language (www.owid.de), which incorporates several dictionaries. OBELEX has been available online free of charge since December 2008. It includes articles, monographs, anthologies and reviews published since 2000 that relate to electronic lexicography, as well as some relevant older works; our particular focus is on works about online lexicography. Systematically evaluated sources are relevant journals such as the International Journal of Lexicography, Lexicographica, Dictionaries and Lexikos, as well as the EURALEX proceedings, the proceedings of the International Symposium on Lexicography in Copenhagen, and relevant monographs and anthologies. Information on dictionaries themselves is currently not included in OBELEX; the main focus is on metalexicography. However, we are working on a database with information on online dictionaries as a supplement to OBELEX. All entries of OBELEX are stored in a database, so all parts of a bibliographic entry (such as person, title, publication or year) are searchable. Furthermore, all publications are associated with our keyword list, so that a thematic search is possible; the subject language is also noted. With this type of content, the OBELEX bibliography usefully supplements other bibliographic projects such as the printed ‘Internationale Bibliographie zur germanistischen Lexikographie und Wörterbuchforschung’ by H. E. Wiegand (Wiegand 2006/2007), the ‘Bibliography of Lexicography’ by R. R. K. Hartmann (Hartmann 2007), and the ‘International Bibliography of Lexicography’ of EURALEX (cf. also DeCesaris and Bernal 2006). OBELEX differs from all these projects in its strong focus on electronic lexicography and in the ways bibliographic information can be retrieved.
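The field-and-keyword search described above might be sketched as follows; the records, field names and values are invented for illustration and are not actual OBELEX entries.

```python
# Hypothetical bibliographic records in the spirit of OBELEX.
entries = [
    {"author": "Example, A.", "title": "Online dictionaries revisited",
     "year": 2005, "keywords": {"online lexicography", "metalexicography"},
     "language": "English"},
    {"author": "Beispiel, B.", "title": "Elektronische Lexikographie",
     "year": 2009, "keywords": {"electronic lexicography"},
     "language": "German"},
]

def search(entries, keyword=None, year_from=None):
    """Filter entries by an associated keyword and a minimum year,
    mimicking a faceted bibliographic search."""
    hits = entries
    if keyword is not None:
        hits = [e for e in hits if keyword in e["keywords"]]
    if year_from is not None:
        hits = [e for e in hits if e["year"] >= year_from]
    return hits
```

Because every part of an entry is stored as a separate field, the same mechanism extends directly to searches by person, title, publication or subject language.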
The authors present a multilingual electronic database of lexical items with idiosyncratic occurrence patterns. Currently, our database consists of: (1) a collection of 444 bound words in German; (2) a collection of 77 bound words in English; (3) a collection of 58 negative polarity items in Romanian; (4) a collection of 84 negative polarity items in German; and (5) a collection of 52 positive polarity items in German. The database is encoded in XML and is available via the Internet, offering dynamic and flexible access.
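A sketch of how such XML-encoded collections could be accessed dynamically; the element names, attributes and example items below are invented for illustration and are not taken from the actual database.

```python
import xml.etree.ElementTree as ET

# A toy fragment imitating an XML database of items with idiosyncratic
# occurrence patterns (bound words, polarity items).
doc = ET.fromstring("""
<database>
  <item class="npi" lang="de">
    <lemma>scheren</lemma>
    <context>sich einen Deut scheren um</context>
  </item>
  <item class="bound-word" lang="en">
    <lemma>amok</lemma>
    <context>run amok</context>
  </item>
</database>
""")

# Flexible access: select all German negative polarity items.
german_npis = doc.findall(".//item[@class='npi'][@lang='de']")
lemmas = [item.findtext("lemma") for item in german_npis]
```

Encoding each collection with uniform `class` and `lang` attributes is one way the five sub-collections could share a single query interface.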
Most research on automated categorization of documents has concentrated on the assignment of one or many categories to a whole text. However, new applications, e.g. in the area of the Semantic Web, require a richer and more fine-grained annotation of documents, such as detailed thematic information about the parts of a document. Hence we investigate the automatic categorization of text segments of scientific articles with XML markup into 16 topic types from a text type structure schema. A corpus of 47 linguistic articles was provided with XML markup on different annotation layers representing text type structure, logical document structure, and grammatical categories. Six different feature extraction strategies were applied to this corpus and combined in various parametrizations in different classifiers. The aim was to explore the contribution of each type of information, in particular the logical structure features, to the classification accuracy. The results suggest that some of the topic types of our hierarchy are successfully learnable, while the features from the logical structure layer had no particular impact on the results.
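The combination of feature extraction strategies over several annotation layers can be sketched schematically as follows; the segment representation, layer names and feature templates are invented for illustration and do not reproduce the six strategies of the study.

```python
# A toy text segment carrying annotations from three layers: tokens,
# grammatical categories, and logical document structure.
segment = {
    "tokens": ["the", "corpus", "was", "annotated", "manually"],
    "pos": ["DET", "NOUN", "AUX", "VERB", "ADV"],
    "logical_position": "section/2/paragraph/1",  # logical document structure
}

def lexical_features(seg):
    return {f"tok={t}": 1 for t in seg["tokens"]}

def grammatical_features(seg):
    return {f"pos={p}": 1 for p in seg["pos"]}

def structure_features(seg):
    # e.g. which top-level section the segment occurs in
    return {"section=" + seg["logical_position"].split("/")[1]: 1}

def combine(seg, extractors):
    """Merge the feature dictionaries of several extractors into one
    vector, so classifiers can be run on any parametrization."""
    features = {}
    for extract in extractors:
        features.update(extract(seg))
    return features

vector = combine(segment, [lexical_features, grammatical_features,
                           structure_features])
# `vector` would then be fed to a classifier over the 16 topic types.
```

Leaving extractors out of the list gives exactly the kind of ablation needed to measure the contribution of each layer, e.g. running with and without `structure_features`.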
A text parsing component designed to be part of a system that assists students in academic reading and writing is presented. The parser can automatically add a relational discourse structure annotation to a scientific article that a user wants to explore. The discourse structure employed is defined in an XML format and is based on Rhetorical Structure Theory. The architecture of the parser comprises pre-processing components which provide an input text with XML annotations on different linguistic and structural layers; in the first version these are syntactic tagging, lexical discourse marker tagging, logical document structure, and segmentation into elementary discourse segments. The algorithm is based on the shift-reduce parser of Marcu (2000) and is controlled by reduce operations that are constrained by linguistic conditions derived from an XML-encoded discourse marker lexicon. The constraints are formulated over multiple annotation layers of the same text.
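The shift-reduce control loop with marker-constrained reduce operations can be sketched as follows; the two segments, the single CAUSE relation and the marker condition are invented stand-ins for the lexicon-derived constraints, not the parser's actual rules.

```python
# Toy input: elementary discourse segments produced by pre-processing.
segments = ["It was raining.", "Therefore we stayed inside."]

def can_reduce(left, right):
    # Stand-in for constraints derived from a discourse marker lexicon:
    # here, reduce when the right segment opens with a causal marker.
    return right.lstrip().lower().startswith("therefore")

def parse(segments):
    """Shift-reduce loop: shift segments onto the stack, and reduce the
    top two stack items into a relation node when a constraint fires."""
    stack, queue = [], list(segments)
    while queue or len(stack) > 1:
        if len(stack) >= 2 and can_reduce(stack[-2], stack[-1]):
            right = stack.pop()
            left = stack.pop()
            stack.append(("CAUSE", left, right))  # reduce
        elif queue:
            stack.append(queue.pop(0))            # shift
        else:
            break  # no rule applies; leave remaining segments unattached
    return stack

tree = parse(segments)
```

In the parser described above, `can_reduce` would consult the tagged discourse markers and other annotation layers of both segments rather than a hard-coded string test.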