Wednesday, March 31, 2010
The Internet and modern communication technologies more generally have radically transformed contemporary society, introducing new risks for citizens’ privacy. The tangible effects of this technological progress have been, on the one hand, improved tools for the retrieval and collection of data and, on the other, an increased capacity for storing and aggregating the information collected. This can be read positively, as offering far greater and better opportunities for the development of personality by making available information that was previously inaccessible (because of the high cost or effort required to obtain it). However, the same technical tools can be used to achieve the opposite result: to prevent the expression of users’ personality through a continuous, though imperceptible, form of control that could shift the interpretation of user profiles from a pre-judgment into a prejudice.
From a legal point of view, different solutions have been put forward, stemming from different approaches. On the one hand, there is self-regulation, in which technology itself can help to limit the aforementioned risks to personal data; on the other, there is the legislative harmonization implemented by the Member States of the EU, where monitoring is carried out by independent bodies, the so-called Data Protection Authorities.
A recent example illustrating both market dynamics and legal reactions in the field of data protection is the search engine Google, which received a brief but significant letter from the Art. 29 Working Party (hereinafter Art. 29 WP) on account of the low level of protection the Mountain View company assured in the delivery of its services. The intervention, though not binding, was the first step in pushing Google toward an improvement of its data protection policy, so as to reach the level required by European legislation.
Download the paper from SSRN at the link.