Privacy in its many aspects is protected by various legal texts (e.g. the Basic Law, the Civil Code, the Criminal Code, or even the Law on Copyright in Artistic and Photographic Works (KunstUrhG), which protects image rights). Data protection law, which governs the processing of information about individuals (personal data), also serves to protect their privacy. However, some information referring to the public sphere of an individual's life (e.g. the fact that X is the mayor of Smallville) may still qualify as personal data (see below) and as such falls within the scope of data protection rules. In this sense, data protection law also covers information that is not private.
Therefore, privacy and data protection, although closely related, are distinct notions: one can violate someone's privacy without processing their personal data (e.g. simply by knocking on their door at night, uninvited), and vice versa: one can violate data protection rules without violating privacy.
The following handouts focus exclusively on data protection rules, and specifically on the General Data Protection Regulation (GDPR). Keep in mind, however, that compliance with the GDPR is not the only aspect of protecting the privacy of individuals in research projects. Other rules, such as academic ethics and community standards (e.g. CARE), also need to be observed.
Privacy by Design (also referred to as Data Protection by Design) is an approach in which solutions and mechanisms addressing privacy and data protection are embedded throughout the entire project lifecycle, from the early design stage, rather than merely added as an extra layer on top of the final product. Formulated in the 1990s by the Privacy Commissioner of Ontario, the principle of Privacy by Design has been discussed by institutions and policymakers on both sides of the Atlantic, and was already mentioned in the 1995 EU Data Protection Directive (95/46/EC). More recently, Privacy by Design was introduced as one of the requirements of the General Data Protection Regulation (GDPR), obliging data controllers to define and adopt, already at the conception phase, appropriate measures and safeguards to implement data protection principles and protect the rights of the data subject. Failing to meet this obligation may result in a hefty fine, as was the case in the Uniontrad decision by the French Data Protection Authority (CNIL). The ambition of the proposed paper is to analyse the practical meaning of Privacy by Design in the context of Language Resources, and to propose measures and safeguards that the community can implement to ensure respect for this principle.
The debate on the use of personal data in language resources usually focuses, and rightfully so, on anonymisation. However, this very debate usually ends quickly with the conclusion that proper anonymisation would necessarily cause the loss of linguistically valuable information. This paper discusses an alternative approach: pseudonymisation. While pseudonymisation does not solve every problem (inasmuch as pseudonymised data are still regarded as personal data and their processing must therefore still comply with the GDPR principles), it does provide significant relief, especially, but not only, for those who process personal data for research purposes. This paper describes pseudonymisation as a measure to safeguard the rights and interests of data subjects under the GDPR (with a special focus on the right to be informed). It also provides a concrete example of pseudonymisation carried out within a research project at the Institute of Information Technology and Communications of the Otto von Guericke University Magdeburg.
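To make the contrast with anonymisation concrete, the following is a minimal sketch of keyed-hash pseudonymisation. It is a hypothetical illustration, not code from the project described in the abstract: the SECRET_KEY value and the pseudonymise function are invented for this example. The point is that the mapping is deterministic, so cross-references between mentions of the same person survive (preserving linguistic value), while re-identification requires the separately stored key.

```python
import hmac
import hashlib

# Assumption: the key is stored securely, apart from the corpus itself.
# Without it, pseudonyms cannot be reversed or linked back to individuals.
SECRET_KEY = b"keep-this-key-separate-from-the-corpus"

def pseudonymise(identifier: str) -> str:
    """Map a personal identifier to a stable pseudonym via keyed hashing (HMAC-SHA256)."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return "PER_" + digest.hexdigest()[:8]

# The same person always receives the same placeholder, unlike plain
# anonymisation, which would erase the link between their mentions.
print(pseudonymise("Jane Doe"))    # e.g. PER_3f1c9a02
print(pseudonymise("Jane Doe"))    # identical output
print(pseudonymise("John Smith"))  # a different pseudonym
```

Because the original identifiers remain recoverable by whoever holds the key, data processed this way still counts as personal data under the GDPR, which is exactly the caveat the abstract raises.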
The article focuses on determining the responsible parties and the division of potential liability arising from sharing language data (LD) containing personal data (PD). A key issue is identifying who has to ensure and guarantee GDPR compliance. The authors aim to answer 1) whether an individual researcher is a controller and 2) whether sharing LD results in joint or separate controllership (i.e. whether the transferee of the data becomes the controller, a joint controller, or a processor). The article also analyses the legal relations between the parties involved in data sharing and their potential liability. The final section outlines data sharing in the CLARIN context. The analysis serves as a preliminary analytical background for redesigning the CLARIN contractual framework for sharing data.
The General Data Protection Regulation (GDPR) on personal data protection in the European Union entered into application on 25 May 2018. With its 173 recitals and 99 articles, it may be one of the most ambitious pieces of EU legislation to date. Rather than offering a guide to GDPR compliance for Digital Humanities researchers, this chapter looks at the use of personal data in DH projects from the data subject's perspective, and examines to what extent the GDPR has kept its promise of enabling the data subject to "take control of his data". The chapter provides an overview of the right to privacy and the right to data protection, a discussion of the relation between the concept of data control and privacy and data protection law, an introduction to the GDPR, and an explanation of its relevance for scientific research in general and DH in particular. The main section of the chapter analyses two types of data control mechanisms (consent and data subject rights) and their impact on DH research.