2019 HRC Resolution on the right to privacy in the digital age

HRC 42nd session
2019-09-26

Analysis of precedential value

This UN Human Rights Council (HRC) resolution was adopted without a vote in September 2019. It was co-drafted by representatives of 37 Member States, 25 of which were not members of the sitting HRC.
The HRC is composed of elected representatives from 47 Member States; together, they are responsible for coordinating investigations of and responses to human rights violations.

Used as precedent

digital health

Recognizing the need for Governments, the private sector, international organizations, civil society, the technical and academic communities and all relevant stakeholders to be cognizant of the impact, opportunities and challenges of rapid technological change on the promotion and protection of human rights, as well as of its potential to facilitate efforts, to accelerate human progress and to promote and protect human rights and fundamental freedoms,

Noting that the use of artificial intelligence can contribute to the promotion and protection of human rights, and can also have far-reaching and global implications, including with regard to the right to privacy, that are transforming Governments and societies, economic sectors and the world of work,

Noting that the use of artificial intelligence may, without adequate safeguards, pose the risk of reinforcing discrimination, including structural inequalities,

Noting with concern that automatic processing of personal data for individual profiling, automated decision-making and machine learning technologies may, without adequate safeguards, lead to discrimination or decisions that otherwise have the potential to affect the enjoyment of human rights, including economic, social and cultural rights, and recognizing the need to apply international human rights law in the design, development, deployment, evaluation and regulation of these technologies, and to ensure they are subject to adequate safeguards and oversight,

Affirms that the same rights that people have offline must also be protected online, including the right to privacy;

Acknowledges that the use, deployment and further development of new and emerging technologies, such as artificial intelligence, can impact the enjoyment of the right to privacy and other human rights, and that the risks to the right to privacy can and should be minimized by adopting adequate regulation or other appropriate mechanisms, including by taking into account international human rights law in the design, development and deployment of new and emerging technologies, such as artificial intelligence, by ensuring a safe, secure and high-quality data infrastructure and by developing human-centred auditing mechanisms, as well as redress mechanisms.