Navalny and his Poisoners: How Social Engineers Exploit Cracks in Social Networks


In an unprecedented sting, Russian opposition leader and assassination survivor Alexei Navalny tricked his own poisoners into admitting their crime. The episode highlights that even highly trained specialists – those who should know better – will fall for deception when the conditions are right.

Navalny collapsed into a coma during an internal Russian flight. He was airlifted to Germany, where it was established that he had been poisoned, and it later transpired that the poisoning had been arranged by agents of the Russian state. The story then took a further strange twist when it was reported that – assisted by the investigative outfit Bellingcat – Navalny had turned the tables on one of his would-be assassins. Navalny caught the hapless hitman in a deceptive phone call (a technique known as ‘vishing’, or voice phishing) and secured a full confession. In the cybersecurity world such deceptions are usually known as social engineering: actions that manipulate human judgement and behaviour in order to secure access to systems or data.

Navalny made clever use of typical social engineering tactics – see the illustrative sketch after this list. These include using:

  • A credible pretext: he posed as an aide to a real high-ranking official, demanding an oral report for his ‘boss’ on what had gone wrong.
  • The Authority principle: he opened the call by name-dropping Russian President Vladimir Putin, who – it was claimed – had authorised the call, borrowing that individual’s authority to establish quick trust and reducing the likelihood that the target would question or push back.
  • The Urgency principle: he called in the early morning, stating ‘let’s just say I would not call at 7am if this was not urgent’. This aimed to encourage the target to respond quickly and less reflectively.
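These cues are easy to list but hard to notice under pressure, which is why it can help to screen inbound requests for them mechanically. Below is a minimal, purely illustrative sketch in Python – the cue lists, threshold and example transcript are assumptions invented for this post, not a production detector – that flags messages in which several of the tactics above co-occur.

```python
# Toy screen for co-occurring social engineering cues. Everything here
# (cue lists, threshold, example text) is invented for illustration.

AUTHORITY_CUES = ("authorised by", "on behalf of", "the president", "the director")
URGENCY_CUES = ("urgent", "immediately", "right now", "would not call at 7am")
PRETEXT_CUES = ("oral report", "what went wrong", "debrief")


def red_flag_score(message: str) -> int:
    """Return how many of the three tactic categories the message triggers."""
    text = message.lower()
    return sum(
        any(cue in text for cue in cues)
        for cues in (AUTHORITY_CUES, URGENCY_CUES, PRETEXT_CUES)
    )


call = ("This call was authorised by the president and it is urgent - "
        "I need an oral report on what went wrong, right now.")
score = red_flag_score(call)
# Two or more co-occurring categories is a strong signal to verify the
# caller out-of-band before disclosing anything.
print(f"red flags: {score}/3 -> {'verify out-of-band' if score >= 2 else 'proceed'}")
```

No single cue is damning on its own; it is the combination of authority, urgency and a plausible pretext that should trigger out-of-band verification.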

Social engineer Rachel Tobac goes into further detail in a fascinating Twitter thread.

But a crucial aspect of this story is that Navalny had previously attempted to contact several other members of the assassination team, who hung up on him – one even recognised exactly who was calling. The key security failure here was poor communication amongst the assassins: either a lack of procedure for reporting suspicious contact, or a failure to follow it. Nobody warned the colleague who eventually confessed.

This mirrors enabling factors sometimes seen in spearphishing and whaling attacks, in which users are compromised after several other members of their social network have been targeted but have not taken the bait. Success for the attacker depends on poor or fragmented defensive communication across the network: some targets correctly identify an attack in progress, but that information never reaches the other potential targets. Eventually the attacker finds a weak link – and one weak link is all that is needed to compromise the system.
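To make the weak-link dynamic concrete, here is a minimal Monte Carlo sketch (the per-target probabilities are invented assumptions, not estimates from any real campaign). A sequential attacker works down a target list; in one condition the first target to spot the attack warns everyone else, in the other nothing is shared.

```python
# Compare a spearphishing campaign against a network that shares
# detections with one that does not. All probabilities are assumptions.
import random

P_FALL = 0.2    # chance an unwarned target takes the bait (assumed)
P_DETECT = 0.5  # chance a resisting target recognises the attack (assumed)
TARGETS = 10    # size of the attacker's target list (assumed)
TRIALS = 100_000


def campaign(share_intel: bool) -> bool:
    """Return True if the attacker compromises at least one target."""
    warned = False
    for _ in range(TARGETS):
        if warned:
            return False      # a network-wide alert stops the campaign
        if random.random() < P_FALL:
            return True       # the attacker has found the weak link
        if share_intel and random.random() < P_DETECT:
            warned = True     # the detection is broadcast to everyone else
    return False


for share in (False, True):
    rate = sum(campaign(share) for _ in range(TRIALS)) / TRIALS
    print(f"intel shared={share}: attacker succeeds in ~{rate:.0%} of campaigns")
```

With these assumed numbers, the silent network is compromised in roughly nine campaigns out of ten, while broadcasting the first detection cuts the attacker’s success rate to about a third – the same dynamic that caught out Navalny’s poisoners.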

To counter this, defenders need (1) the means to identify a systematic social engineering attack in progress, as early as possible; and (2) the means to communicate the nature of the threat rapidly and compellingly to all relevant users across the network.

The first may require establishing collaborative security systems in which users themselves become the key eyes and ears – the reconnaissance – in a system of crowdsourced intelligence. But it also means being able to act on that information: identifying patterns, and judging who needs to know that they might be targeted next, so that they do not become the weak link. The second is a matter of credible communication: how do you compete for attention with everything else in a user’s information space so that they genuinely internalise the warning, and act appropriately and in time? Effective influence typically depends on a keen understanding of the target audience: communications need to wrap around users’ motivations, desires, attitudes and beliefs. Are these known and understood?
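As a sketch of what points (1) and (2) might look like operationally – the data model, field names and threshold below are assumptions for illustration, not a description of any real product – the idea is to correlate crowdsourced user reports, flag a pretext reported by several distinct users as a likely campaign, and derive who still needs a targeted warning.

```python
# Illustrative only: correlate crowdsourced reports to detect a
# systematic campaign and list who still needs warning. The Report
# fields and the threshold are assumptions.
from dataclasses import dataclass


@dataclass
class Report:
    reporter: str   # user who flagged the message or call
    sender: str     # claimed sender / caller identity
    pretext: str    # short label for the story being used


CAMPAIGN_THRESHOLD = 2  # distinct reporters before we treat it as systematic


def detect_campaigns(reports: list[Report]) -> list[str]:
    """Return pretexts reported by enough distinct users to look systematic."""
    reporters_by_pretext: dict[str, set[str]] = {}
    for r in reports:
        reporters_by_pretext.setdefault(r.pretext, set()).add(r.reporter)
    return [p for p, who in reporters_by_pretext.items()
            if len(who) >= CAMPAIGN_THRESHOLD]


def alert_list(all_users: set[str], reports: list[Report], pretext: str) -> set[str]:
    """Users who have not reported this pretext are the ones still at risk."""
    already_aware = {r.reporter for r in reports if r.pretext == pretext}
    return all_users - already_aware


users = {"alice", "bob", "carol", "dave"}
reports = [
    Report("alice", "it-helpdesk@example.com", "password reset"),
    Report("bob", "it-helpdesk@example.com", "password reset"),
]
for pretext in detect_campaigns(reports):
    at_risk = sorted(alert_list(users, reports, pretext))
    print(f"campaign detected: {pretext!r}; warn {at_risk}")
```

Detection is the easy half; the harder half – making the warning compelling enough that carol and dave actually act on it – is a communications problem, not an engineering one.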
