The Privacy Problem
Internet users generate more than 2.5 quintillion bytes of data every day and will create vastly more as the Internet of Things (IoT) continues to grow, according to cloud-based operating system company Domo.
There are benefits to the creation of this data—improved technology offerings, learning material for artificial intelligence programs, advances in medical science, and more. But there are also downsides if companies that collect this data are left unregulated, critics say.
One horrifying example came to light in the past two years, when a propaganda campaign waged on Facebook helped drive more than 700,000 Rohingya to flee Myanmar to escape a campaign of ethnic cleansing targeting Muslims in the country. Thousands of others were killed.
The Facebook campaign was created by Myanmar military personnel, who coordinated over several years to spread false information—like allegations of rape and mass killings by Rohingya—across the Internet.
The posts escaped Facebook’s notice, and human rights groups allege that they were used to incite murders, rapes, and, ultimately, the mass migration of Rohingya out of Myanmar to escape persecution.
Almost a year after Rohingya began fleeing Myanmar, Facebook, facing mounting criticism, commissioned the nonprofit Business for Social Responsibility (BSR) to assess Facebook’s role in the violence.
The report found that Facebook was not doing enough to prevent its platform from being used to create division and incite offline violence. Facebook agreed with the report’s findings, and product policy manager Alex Warofka explained in a corporate statement that the company was making changes to the platform to prevent future atrocities.
“BSR provided several recommendations for our continued improvement across five key areas, in order to help mitigate the adverse human rights impact and maximize the opportunities for freedom of expression, digital literacy, and economic development,” Warofka wrote. “These areas include building on existing governance and accountability structures, improving enforcement of content policies, increasing engagement with local stakeholders, advocating for regulatory reform, and preparing for the future.”
Facebook is taking these actions independently because there is currently no regulatory framework that would require it to remove propaganda like that used by the Myanmar military.
This lack of legal guidance highlights a major problem in our current “data-industrial complex” that allows information to be “weaponized against us with military efficiency,” said Apple CEO Tim Cook in a recent speech at a European privacy conference.
“Platforms and algorithms that promised to improve our lives can actually magnify our worst human tendencies,” Cook explained. “Rogue actors and even governments have taken advantage of user trust to deepen divisions, incite violence, and even undermine our shared sense of what is true and what is false. This crisis is real. It is not imagined, or exaggerated, or crazy.”
In his speech, Cook praised European lawmakers and California legislators who have enacted data protections for users (the General Data Protection Regulation and the California Consumer Privacy Act). But he said more needs to be done to protect users, so they continue to trust—and use—new technology.
“It is time for the rest of the world to follow [Europe’s] lead,” Cook said. “We at Apple are in full support of a comprehensive federal privacy law in the United States.”
Specifically, Cook said such a law should give users the right to have their personal data minimized, the right to know what data is collected about them, and the right to access that data, and should require that data be kept securely.
And Apple isn’t alone in its stance. Intel recently released a draft U.S. privacy bill designed to optimize innovation and protect privacy.
“What the United States needs is a privacy law that parallels the country’s ethos of freedom, innovation, and entrepreneurship,” Intel said in a statement about the legislation. “That law needs to protect individuals and enable the ethical use of data.”
Having a legal framework for data usage will help new technologies, such as artificial intelligence, solve global problems while creating economic growth, Intel added.
“Ethical use of data will be critical as we use data to train artificial intelligence algorithms to detect bias and enhance cybersecurity,” the statement from Intel explained. “In short, it takes data to protect data. The U.S. needs a law that promotes ethical data stewardship, not one that just attempts to minimize harm.”
Intel’s suggestions are similar to Apple’s, including limiting the amount of data that is collected on users, requiring organizations to specify why they are collecting that data, limiting the use of that data, and requiring organizations to “adopt reasonable measures to protect personal data.”
In the statement, Intel also explained that it recognized the need for a legal framework to prevent “harmful uses of technology and to preserve personal privacy so that all individuals embrace new, data-driven technologies.” “At Intel, we know that privacy is a fundamental human right and robust privacy protection is critical to allow individuals to trust technology and participate in society.”
Microsoft has also advocated for more robust regulation of the Internet and tech firms. In 2018, it worked with the French government to create the Paris Call for Trust and Security in Cyberspace, which was announced at the UNESCO Internet Governance Forum in November.
“The French government has worked to lay the foundation for the steps the world’s governments and other stakeholders need to take,” said Microsoft President Brad Smith in a Financial Times op-ed. “We should all hope that the other participants in Paris will support efforts to protect citizens and civilian infrastructure from systematic or indiscriminate cyberattacks.”
The initiative will create international norms for the Internet, including preventing foreign actors from interfering with elections, prohibiting private companies from “hacking back,” and protecting intellectual property.
“As the Internet has become central to daily life, cyberattacks have grown more frequent and destructive,” the call said. “Only by acting together can we protect cyberspace.”
Endorsing the call were 130 private companies—including Microsoft, Facebook, Google, IBM, and HP—along with 90 nonprofits and more than 50 nations. The United States, Russia, China, Iran, and Israel did not pledge their support.
The absence of the United States is indicative of its reluctance, at the national level, to address privacy and cybersecurity regulation. That is unlikely to change in 2019, with U.S. President Donald Trump in the White House and Republicans maintaining control of the U.S. Senate.
Varonis Tech Evangelist Brian Vecci says that private companies will lead efforts to standardize the regulatory field following actions by U.S. states—many of which have already adopted data breach disclosure laws and will likely consider legislation similar to California’s in the future.
“The big tech companies are very forward thinking,” Vecci says. “They tend not to think a quarter ahead or a year ahead, but five, 10, 20 years ahead—that’s the business they’re in.”
While in the past Americans may have assumed that a major data breach or cyberattack would press the U.S. Congress to act, Vecci adds that the lack of congressional action following the Equifax breach makes that unlikely.
Instead, the push will come from private companies looking to ensure that revenues and trust in their products and services remain high.
“If no action is taken, we’re going to start to see a bit of a backlash,” Vecci says. “To maintain trust for their businesses to work, these firms are going to be at the forefront of putting that privacy regulation in.”
Paris Call for Trust and Security in Cyberspace
French President Emmanuel Macron unveiled the Paris Call for Trust and Security in Cyberspace at the 2018 UNESCO Internet Governance Forum in Paris. The call pledges to create a set of international norms for cyberspace, with signatories committing to work together to:
- Prevent and recover from malicious cyber activity that threatens or causes harm to individuals and critical infrastructure.
- Prevent activity that damages the general availability or integrity of the public core of the Internet.
- Strengthen the ability to prevent election interference.
- Prevent intellectual property theft.
- Develop ways to prevent the proliferation of malicious information and communication technology (ICT) tools and practices.
- Strengthen the security of digital processes.
- Strengthen cyber hygiene for all actors.
- Prevent nonstate actors from hacking back.
- Promote acceptance and implementation of international norms of responsible behavior in cyberspace.
Those participating in the call will revisit the norms at the Paris Peace Forum and at the Internet Governance Forum in Berlin in 2019.