BRUSSELS PRIVACY HUB
The Brussels Privacy Symposium is a joint programme of the Future of Privacy Forum and the Brussels Privacy Hub (BPH).
Brussels Privacy Symposium 2017 - AI Ethics: The Privacy Challenge
On 6 November 2017, the Brussels Privacy Symposium will focus on privacy issues surrounding artificial intelligence. Enhancing efficiency, increasing safety, improving accuracy and reducing negative externalities are just some of AI's key benefits. However, AI also presents risks of opaque decision making, biased algorithms, security and safety vulnerabilities, and upended labour markets. In particular, AI and machine learning challenge traditional notions of privacy and data protection, including individual control, transparency, access and data minimisation. On content and social platforms, AI can lead to narrowcasting, discrimination, and filter bubbles.
A group of industry leaders recently established a partnership to study and formulate best practices on AI technologies. Last year, the White House issued a report titled Preparing for the Future of Artificial Intelligence and announced a National Artificial Intelligence Research and Development Strategic Plan, laying out a strategic vision for federally funded AI research and development. These efforts seek to reconcile the tremendous opportunities that machine learning, human–machine teaming, automation, and algorithmic decision making promise in enhanced safety, efficiency gains, and improvements in quality of life, with the legal and ethical issues that these new capabilities present for democratic institutions, human autonomy, and the very fabric of our society.
Papers and Symposium discussion will address the following issues:
For more information see the call for papers.
Brussels Privacy Symposium 2016 - Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization.
There are deep disagreements about the efficacy of de-identification to mitigate privacy risks. Some critics argue that it is impossible to eliminate privacy harms from publicly released data using de-identification because other available data sets will allow attackers to identify individuals through linkage attacks. Defenders of de-identification counter that despite the theoretical and demonstrated ability to mount such attacks, the likelihood of re-identification for most data sets remains minimal. As a practical matter, they argue most data sets remain securely de-identified based on established techniques.
There is no agreement regarding the technical questions underlying the de-identification debate, nor is there consensus over how best to advance the discussion about the benefits and limits of de-identification. The growing use of open data holds great promise for individuals and society, but also brings risk. And the need for sound principles governing data release has never been greater.
Selected authors from multiple disciplines including law, computer science, statistics, engineering, social science, ethics and business will present papers at this full-day programme. The final programme will be available shortly. More information.
Since July 2014, the Brussels Privacy Hub and the International Committee of the Red Cross (ICRC) have been running a project exploring the relationship between data protection law and humanitarian action.
The work of organisations responding to humanitarian emergencies such as armed conflicts, other situations of violence, forced displacement within and across national borders, migration, natural disasters, and epidemics is essential for protecting the life, integrity, and dignity of the vulnerable persons involved. This work requires the collection and processing of a great deal of often highly sensitive personal data. To deal with humanitarian emergencies, it is in many cases necessary for personal data to flow between the countries concerned.
There is also increasing interest from both the humanitarian world and the donors supporting it in identifying innovative ways of providing better and more efficient humanitarian assistance. This often involves exploring the possibilities offered by new technologies, which in turn requires clear guidance on data processing in humanitarian action. It is also crucial to ensure that data protection rules are interpreted so as not to impede essential humanitarian action.
Purpose of the project
By building bridges between stakeholders involved in international humanitarian action, the project seeks to:
Topics to be covered
The project seeks to bring together key experts in data protection law, practitioners from humanitarian organisations, data protection authorities, and interested stakeholders from the private sector. The project partners will engage with stakeholders through a series of closed-door workshops, held under the Chatham House Rule, to allow open and frank discussions between experts, with a view to deepening understanding of the practical and legal issues relevant to data processing in humanitarian action, as well as identifying solutions, recommendations, and guidance.
The project will: