SSL and internet security news

Monthly Archive: April 2019

Defending Democracies Against Information Attacks

To better understand influence attacks, we proposed an approach that models democracy itself as an information system and explains how democracies are vulnerable to certain forms of information attacks that autocracies naturally resist. Our model combines ideas from both international security and computer security, avoiding the limitations of both in explaining how influence attacks may damage democracy as a whole.

Our initial account is necessarily limited. Building a truly comprehensive understanding of democracy as an information system will be a Herculean labor, involving the collective endeavors of political scientists and theorists, computer scientists, scholars of complexity, and others.

In this short paper, we undertake a more modest task: providing policy advice to improve the resilience of democracy against these attacks. Specifically, we can show how policy makers not only need to think about how to strengthen systems against attacks, but also need to consider how these efforts intersect with public beliefs — or common political knowledge — about these systems, since public beliefs may themselves be an important vector for attacks.

In democracies, many important political decisions are taken by ordinary citizens (typically, in electoral democracies, by voting for political representatives). This means that citizens need to have some shared understandings about their political system, and that society needs some means of generating shared information regarding who its citizens are and what they want. We call this common political knowledge, and it is largely generated through mechanisms of social aggregation (and the institutions that implement them), such as voting, censuses, and the like. These are imperfect mechanisms, but essential to the proper functioning of democracy. They are often compromised or non-existent in autocratic regimes, since they are potentially threatening to the rulers.

In modern democracies, the most important such mechanism is voting, which aggregates citizens’ choices over competing parties and politicians to determine who is to control executive power for a limited period. Another important mechanism is the census, which plays an important role in the US and in other democracies by providing broad information about the population, shaping the electoral system (through the allocation of seats in the House of Representatives), and informing policy making (through the allocation of government spending and resources). Of lesser import are public commenting processes, through which individuals and interest groups can comment on significant public policy and regulatory decisions.

All of these systems are vulnerable to attack. Elections are vulnerable to a variety of illegal manipulations, including vote rigging. However, many kinds of manipulation are currently legal in the US, including many forms of gerrymandering, gimmicking voting time, allocating polling booths and resources so as to advantage or disadvantage particular populations, imposing onerous registration and identity requirements, and so on.

Censuses may be manipulated through the provision of bogus information or, more plausibly, through the skewing of policy or resources so that some populations are undercounted. Many of the political battles over the census over the past few decades have been waged over whether the census should undertake statistical measures to counter undersampling bias for populations who are statistically less likely to return census forms, such as minorities and undocumented immigrants. Current efforts to include a question about citizenship status may make it less likely that undocumented or recent immigrants will return completed forms.

Finally, public commenting systems too are vulnerable to attacks intended to misrepresent the support for or opposition to specific proposals, including the formation of astroturf (artificial grassroots) groups and the misuse of fake or stolen identities in large-scale mail, fax, email or online commenting systems.

All these attacks are relatively well understood, even if policy choices might be improved by a better understanding of their relationship to shared political knowledge. For example, some voter ID requirements are rationalized through appeals to security concerns about voter fraud. While political scientists have suggested that these concerns are largely unwarranted, we currently lack a framework for evaluating the trade-offs, if any. Computer security concepts such as confidentiality, integrity, and availability could be combined with findings from political science and political theory to provide such a framework.

Even so, the relationship between social aggregation institutions and public beliefs is far less well understood by policy makers. Even when social aggregation mechanisms and institutions are robust against direct attacks, they may be vulnerable to more indirect attacks aimed at destabilizing public beliefs about them.

Democratic societies are vulnerable to (at least) two kinds of knowledge attacks that autocratic societies are not. First are flooding attacks that create confusion among citizens about what other citizens believe, making it far more difficult for them to organize among themselves. Second are confidence attacks. These attempt to undermine public confidence in the institutions of social aggregation, so that their results are no longer broadly accepted as legitimate representations of the citizenry.

Most obviously, democracies will function poorly when citizens do not believe that voting is fair. This makes democracies vulnerable to attacks aimed at destabilizing public confidence in voting institutions. For example, some of Russia’s hacking efforts against the 2016 presidential election were designed to undermine citizens’ confidence in the result. Russian hacking attacks against Ukraine, which targeted the systems through which election results were reported out, were intended to create confusion among voters about what the outcome actually was. Similarly, the “Guccifer 2.0” hacking identity, which has been attributed to Russian military intelligence, sought to suggest that the US electoral system had been compromised by the Democrats in the days immediately before the presidential vote. If, as expected, Donald Trump had lost the election, these claims could have been combined with the actual evidence of hacking to create the appearance that the election was fundamentally compromised.

Similar attacks against the perception of fairness are likely to be employed against the 2020 US census. Should efforts to include a citizenship question fail, some political actors who are disadvantaged by demographic changes, such as increases in foreign-born residents and population shifts from rural to urban and suburban areas, will mount an effort to delegitimize the census results. Again, the genuine problems with the census, which include not only the citizenship question controversy but also serious underfunding, may help to bolster these efforts.

Mechanisms that allow interested actors and ordinary members of the public to comment on proposed policies are similarly vulnerable. For example, the Federal Communications Commission (FCC) announced in 2017 that it was proposing to repeal its net neutrality ruling. Interest groups backing the FCC rollback correctly anticipated a widespread backlash from a politically active coalition of net neutrality supporters. The result was warfare through public commenting. More than 22 million comments were filed, most of which appeared to be either automatically generated or form letters. Millions of these comments were apparently fake, and attached unsuspecting people’s names and email addresses to comments supporting the FCC’s repeal efforts. The vast majority of comments that were not either form letters or automatically generated opposed the FCC’s proposed ruling. The furor around the commenting process was magnified by claims from inside the FCC (later discredited) that the commenting process had also been subjected to a cyberattack.

We do not yet know the identity and motives of the actors behind the flood of fake comments, although the New York State Attorney General’s office has issued subpoenas for records from a variety of lobbying and advocacy organizations. However, by demonstrating that the commenting process was readily manipulated, the attack made it less likely that the apparently genuine comments of those opposing the FCC’s proposed ruling would be treated as useful evidence of what the public believed. The furor over purported cyberattacks, and the FCC’s own unwillingness to investigate the attack, have further undermined confidence in an online commenting system that was intended to make the FCC more open to the US public.

We do not know nearly enough about how democracies function as information systems. Generating a better understanding is itself a major policy challenge, which will require substantial resources and, even more importantly, common understandings and shared efforts across a variety of fields of knowledge that currently don’t really engage with each other.

However, even this basic sketch of democracy’s informational aspects can provide policy makers with some key lessons. The most important is that bolstering shared public beliefs about key institutions such as voting, public commenting, and census taking against attack may be as important as bolstering the mechanisms and related institutions themselves.

Specifically, many efforts to mitigate attacks against democratic systems begin with spreading public awareness and alarm about their vulnerabilities. This has the benefit of increasing awareness about real problems, but it may, especially if exaggerated for effect, damage public confidence in the very social aggregation institutions it means to protect. This may mean, for example, that public awareness efforts about Russian hacking that are based on flawed analytic techniques may themselves damage democracy by exaggerating the consequences of attacks.

More generally, this poses important challenges for policy efforts to secure social aggregation institutions against attacks. How can one best secure the systems themselves without damaging public confidence in them? At a minimum, successful policy measures will not simply identify problems in existing systems, but provide practicable, publicly visible, and readily understandable solutions to mitigate them.

We have focused on the problem of confidence attacks in this short essay, because they are both more poorly understood and more profound than flooding attacks. Given historical experience, democracy can probably survive some amount of disinformation about citizens’ beliefs better than it can survive attacks aimed at its core institutions of aggregation. Policy makers need a better understanding of the relationship between political institutions and social beliefs: specifically, the importance of the social aggregation institutions that allow democracies to understand themselves.

There are some low-hanging fruit. Very often, hardening these institutions against attacks on public confidence will go hand in hand with hardening them against attacks more generally. Thus, for example, reforms to voting that require permanent paper ballots and random auditing would not only better secure voting against manipulation, but would have moderately beneficial consequences for public beliefs too.

There are likely broadly similar solutions for public commenting systems. Here, the informational trade-offs are less profound than for voting, since there is no need to balance the requirement for anonymity (so that no one can tell who voted for whom ex post) against other requirements (ensuring that no one votes twice or more, that no votes are changed, and so on). Instead, the balance to be struck is between general ease of access and security, making it easier, for example, to leverage secondary sources to validate identity.

Both the robustness of and public confidence in the US census and the other statistical systems that guide the allocation of resources could be improved by insulating them better from political control. For example, the director of the census could be appointed through a system similar to that used for the US Comptroller General, requiring bipartisan agreement for appointment and making it hard to exert post-appointment pressure on the official.

Our arguments also illustrate how some well-intentioned efforts to combat social influence operations may have perverse consequences for general social beliefs. The perception of security is at least as important as the reality of security, and any defenses against information attacks need to address both.

However, if we are to properly understand the trade-offs, rather than merely proposing clearly beneficial policies and avoiding straightforward mistakes, we need far better developed intellectual tools. Forging such tools will require computer security specialists to start thinking systematically about public beliefs as an integral part of the systems that they seek to defend. It will mean that more military-oriented cybersecurity specialists need to think deeply about the functioning of democracy and the capacity of internal as well as external actors to disrupt it, rather than reaching for their standard toolkit of state-level deterrence tools. Finally, specialists in the workings of democracy have to learn how to think about democracy and its trade-offs in specifically informational terms.

This essay was written with Henry Farrell, and has previously appeared on Defusing Disinfo.


Friday Squid Blogging: Toraiz SQUID Digital Sequencer

Pioneer DJ has a new sequencer: the Toraiz SQUID: Sequencer Inspirational Device.

The 16-track sequencer is designed around jamming and performance with a host of features to create “happy accidents” and trigger random sequences, modulations and chords. There are 16 RGB pads for playing in your melodies and beats, and up to 64 patterns per each of the 16 tracks. There are eight notes of polyphony per track too, and a Harmonizer section to quickly input pre-determined chord shapes into your pattern, with up to six saved chords.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.


Towards an Information Operations Kill Chain

Cyberattacks don’t magically happen; they involve a series of steps. And far from being helpless, defenders can disrupt the attack at any of those steps. This framing has led to something called the “cybersecurity kill chain”: a way of thinking about cyber defense in terms of disrupting the attacker’s process.
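As a rough illustration of the idea (this sketch is mine, though the stage names follow the widely cited Lockheed Martin formulation), an intrusion can be modeled as an ordered list of stages, where blocking any single stage stops everything downstream:

```python
# Illustrative only: the traditional kill chain as an ordered list of stages.
KILL_CHAIN = [
    "reconnaissance",
    "weaponization",
    "delivery",
    "exploitation",
    "installation",
    "command-and-control",
    "actions-on-objectives",
]

def attack_succeeds(blocked_stages: set) -> bool:
    """An attack succeeds only if the attacker completes every stage in order."""
    return not any(stage in blocked_stages for stage in KILL_CHAIN)

# Disrupting any single stage (here, delivery) defeats the whole attack.
print(attack_succeeds({"delivery"}))  # False
print(attack_succeeds(set()))         # True
```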

On a similar note, it’s time to conceptualize the “information operations kill chain.” Information attacks against democracies, whether they’re attempts to polarize political processes or to increase mistrust in social institutions, also involve a series of steps. And enumerating those steps will clarify possibilities for defense.

I first heard of this concept from Anthony Soules, a former National Security Agency (NSA) employee who now leads cybersecurity strategy for Amgen. He used the steps from the 1980s Russian “Operation Infektion,” designed to spread the rumor that the U.S. created HIV as part of a weapons research program. A 2018 New York Times opinion video series on the operation described the Russian disinformation playbook in a series of seven “commandments,” or steps. The information landscape has changed since the 1980s, and information operations have changed as well. I have updated, and added to, those steps to bring them into the present day:

  • Step 1: Find the cracks in the fabric of society — the social, demographic, economic and ethnic divisions.

  • Step 2: Seed distortion by creating alternative narratives. In the 1980s, this was a single “big lie,” but today it is more about many contradictory alternative truths — a “firehose of falsehood” — that distorts the political debate.

  • Step 3: Wrap those narratives around kernels of truth. A core of fact helps the falsities spread.

  • Step 4: (This step is new.) Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives.

  • Step 5: Conceal your hand; make it seem as if the stories came from somewhere else.

  • Step 6: Cultivate “useful idiots” who believe and amplify the narratives. Encourage them to take positions even more extreme than they would otherwise.

  • Step 7: Deny involvement, even if the truth is obvious.

  • Step 8: Play the long game. Strive for long-term impact over immediate impact.

These attacks have been so effective in part because, as victims, we weren’t aware of how they worked. Identifying these steps makes it possible to conceptualize and develop countermeasures designed to disrupt information operations. The result is the information operations kill chain:

  • Step 1: Find the cracks. There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the “common political knowledge” necessary for democracies to function. We need to strengthen that shared knowledge, thereby making it harder to exploit the inevitable cracks. We need to make it unacceptable — or at least costly — for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. We need to become reflexively suspicious of information that makes us angry at our fellow citizens. We cannot entirely fix the cracks, as they emerge from the diversity that makes democracies strong; but we can make them harder to exploit.

  • Step 2: Seed distortion. We need to teach better digital literacy. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

  • Step 3: Wrap the narratives around kernels of truth. Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. We need to ensure that the true narrative is legitimized and promoted.

  • Step 4: Build audiences. This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Here, the defenses center around making disinformation efforts less effective. Social media companies need to detect and delete accounts belonging to propagandists and bots and groups run by those propagandists.

  • Step 5: Conceal your hand. Here the answer is attribution, attribution, attribution. The quicker we can publicly attribute information operations, the more effectively we can defend against them. This will require efforts by both the social media platforms and the intelligence community, not just to detect information operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

  • Step 6: Cultivate useful idiots. We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth — in the words of one report, we must “make the truth louder.” Of course, there will always be true believers whom no amount of fact-checking or counter-speech will convince; this is not intended for them. Focus instead on persuading the persuadable.

  • Step 7: Deny everything. When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA’s Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

  • Step 8: Play the long game. Counterattacks can disrupt the attacker’s ability to maintain information operations, as U.S. Cyber Command did during the 2018 midterm elections. The NSA’s new policy of “persistent engagement” (see the article by, and interview with, U.S. Cyber Command Commander Gen. Paul Nakasone here) is a strategy to achieve this. Defenders can play the long game, too. We need to better encourage people to think for the long term: beyond the next election cycle or quarterly earnings report.

Permeating all of this is the importance of deterrence. Yes, we need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of information operations, if we can publicly attribute — if we can respond either diplomatically or otherwise — we can deter these attacks from nation-states. But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they’re surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. Deterring them will require a different theory.

None of these defensive actions is sufficient on its own. In this way, the information operations kill chain differs significantly from the more traditional cybersecurity kill chain. The latter defends against a series of steps taken sequentially by the attacker against a single target — a network or an organization — and disrupting any one of those steps disrupts the entire attack. The information operations kill chain is fuzzier. Steps overlap. They can be conducted out of order. It’s a patchwork that can span multiple social media sites and news channels. It requires, as Henry Farrell and I have postulated, thinking of democracy itself as an information system. Disrupting an information operation will require more than disrupting one step at one time. The parallel isn’t perfect, but it’s a taxonomy by which to consider the range of possible defenses.
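To make that contrast concrete, here is a deliberately crude toy model (my own framing, with invented step names, defense names, and numbers, not anything from the essay): in the traditional chain a single blocked step stops the attack outright, whereas in the information operations chain each defense only reduces the effectiveness of one or more overlapping steps, so meaningful disruption comes from combining defenses rather than severing one link.

```python
# Toy model only: hypothetical residual effectiveness of each information
# operations step after a given defense is applied (1.0 = no effect on that
# step, 0.0 = step fully neutralized). All numbers are invented.
DEFENSES = {
    "strengthen common political knowledge": {"find_cracks": 0.8},
    "digital literacy programs":             {"seed_distortion": 0.9},
    "platform takedowns of bot networks":    {"build_audiences": 0.6},
    "rapid public attribution":              {"conceal_hand": 0.5, "deny_involvement": 0.7},
}

STEPS = ["find_cracks", "seed_distortion", "build_audiences",
         "conceal_hand", "deny_involvement"]

def residual_impact(applied: list) -> float:
    """Overall impact of the operation: the product of what survives each step."""
    impact = 1.0
    for step in STEPS:
        surviving = 1.0
        for defense in applied:
            surviving *= DEFENSES[defense].get(step, 1.0)
        impact *= surviving
    return impact

print(residual_impact([]))                            # 1.0: unopposed operation
print(residual_impact(["rapid public attribution"]))  # reduced, but not stopped
print(residual_impact(list(DEFENSES)))                # lowest when defenses combine
```

The numbers are placeholders; the structural point is that no single defense drives the impact to zero, which is why disrupting an information operation takes more than breaking one link.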

This information operations kill chain is a work in progress. If anyone has any other ideas for disrupting different steps of the information operations kill chain, please comment below. I will update this in a future essay.

This essay previously appeared on Lawfare.com.


G7 Comes Out in Favor of Encryption Backdoors

From a G7 meeting of interior ministers in Paris this month, an “outcome document”:

Encourage Internet companies to establish lawful access solutions for their products and services, including data that is encrypted, for law enforcement and competent authorities to access digital evidence, when it is removed or hosted on IT servers located abroad or encrypted, without imposing any particular technology and while ensuring that assistance requested from internet companies is underpinned by the rule of law and due process protection. Some G7 countries highlight the importance of not prohibiting, limiting, or weakening encryption;

There is a weird belief amongst policy makers that hacking an encryption system’s key management is fundamentally different from hacking the system’s encryption algorithm. The difference is only technical; the effect is the same. Both are ways of weakening encryption.
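A minimal sketch of why that is (my own illustration, assuming Python with the pyca/cryptography package installed; the escrow database is hypothetical): the cipher below is unbroken AES-GCM, yet anyone who obtains the escrowed key reads the plaintext exactly as if the algorithm itself had failed.

```python
# Illustration only: strong algorithm, compromised key management.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # 256-bit key, modern AEAD cipher
escrow_database = {"alice": key}             # hypothetical "lawful access" key escrow

nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"confidential message", None)

# The attacker never touches AES itself; stealing the escrowed key is enough.
stolen_key = escrow_database["alice"]
print(AESGCM(stolen_key).decrypt(nonce, ciphertext, None))  # b'confidential message'
```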
