Effective Cyber Security Cultures


Culture is often talked about as a factor that is crucial in creating successful organisations. But do we know what we mean? When we talk about culture we probably don’t mean Neolithic pottery styles or the palace rituals of imperial China. From a social-psychological point of view we are talking about the beliefs, values, and attitudes that predominate in a given group.

So when we talk of building a more effective cyber security culture, we are talking about moulding the beliefs, values and attitudes of users towards interacting with information technology. We focus on these because they play a huge role in determining behaviour, and it is often the user’s behaviour we need to change in order to reduce risks and guard against cyber attackers. Ensuring users hold useful beliefs, values and attitudes towards cyber security is therefore crucial to whether a user reports that phishing link, or invests the time necessary to maintain strong passwords, without feeling that anyone is constantly checking up on them.

Attitudes are thoughts and feelings. Beliefs are attitudes about what is true and false. You may train a user or increase their awareness. But if they believe that cyber security is something for the professionals to sort out, you may not see them act on this awareness or knowledge: after all, they have a day job to get on with, don’t they? Instilling a belief such as ‘I am a key part of my organisation’s cyber security defences’ may drastically increase motivation to actually enact what they have learned.

Values are beliefs about what is morally correct. If we can instil in users the values that not only are they a crucial line of defence, but that it is morally correct to act on this understanding, then we have most likely increased our organisational protection. So when your colleague reminds you to lock your terminal when away from your desk, they aren’t being a sanctimonious pedant. They are being a good citizen safeguarding your interests.

But moulding beliefs, values and attitudes is not solely – or even mostly – about making rational arguments. It involves incisive interventions harnessing what we know about how we humans are influenced – and is related to factors as wide-ranging as individual social and professional identity, incentives and sanctions, environmental stressors, social pressure and social norms, power, alienation – and a host more.

Is cyber security culture important to you? If we want to ensure users are our greatest line of defence and not our weakest link, it should be. Social Machines is expert in baselining cyber security culture, identifying where the vulnerabilities lie, and working with our clients to strengthen these.

Social Machines – Why the Bee?


Bees are pretty social animals. Together, their hierarchical system and super-cooperative behaviour make them an example of a ‘social machine’: an environment comprising both social animals (such as humans) and technology, interacting together in complex ways to produce sometimes chaotic outcomes.

A bee colony, housed in a natural nest or an artificial hive, comprises thousands of individuals working collectively to produce honey and beeswax, operating in sync, interacting with one another and the natural environment, forming a colony that is integrated within a wider system.

Protecting their hive or nest is a crucial aspect of the bees’ function in their colony.

The honey within a hive or nest attracts a variety of outside interest, including animals such as wasps, badgers, birds, and bears. To protect their hive, bees have dedicated guards – who alert other workers and call for reinforcements by releasing pheromones when threatened. As bees and other insects try to come into the hive, the guard bees stop and inspect them at the entrance – determining if an insect can enter, and protecting the hive from any foreign intruders.

The hexagonal pattern seen in bee hives and nests is also a signifier of their strength. One of the few natural shapes which tessellates perfectly, the hexagon allows for remarkable efficiency in the construction of the hive. Less wax is needed because neighbouring cells share their six sides, and as the hive grows it becomes stronger: the hexagons gain strength under compression. Within the safety of the hive, the bees work together flawlessly: nourishing themselves, raising their young, and serving their queen.

What does this mean?

Bees’ function and success as a colony rely on their ability to socialise, understand their role within their system, and interact with outsiders. Bees are entwined in one another’s lives, protecting the colony and working to ensure the survival of their hive. And we humans too are social, working together in groups and organisations, interacting with one another and working towards shared goals, which can include ensuring the success of the groups and organisations we belong to.

Likening ourselves to bees, we can think of our group memberships as our hives or colonies. To protect our ‘hive’, we may need to be vigilant to outside threats, alerting others to their existence so the whole group is protected. Yet neither the threats, nor the work of mitigating them, can be allowed to disrupt our core productive business.

What does this have to do with cyber-security?

Social Machines’ behavioural scientists are experts in changing human behaviour. We help large organisations manage their cyber security risks by helping them change the behaviour of their human technology users, who are both their weakest link and their greatest line of protection. Like the bees that identify foreign intruders and release pheromones, we support behaviour change initiatives that make technology users the greatest asset in preventing and mitigating cyber attacks and data breaches, lowering an organisation’s risk exposure.

Navalny and his Poisoners: How Social Engineers Exploit Cracks in Social Networks


In an unprecedented sting, Russian opposition leader and assassination survivor Alexei Navalny tricked his own poisoners into admitting their crime. The episode highlights that even highly trained specialists – those who should know better – will fall for deception when the conditions are right.

Navalny collapsed into a coma during an internal Russian flight. He was airlifted to Germany, where it was established that he had been poisoned. It transpired this had been arranged by agents of the Russian state. But the story took a further strange twist when it was reported that, assisted by the investigative outfit Bellingcat, Navalny had turned the tables on one of his would-be assassins. Navalny caught the hapless hitman in a prank call (voice phishing, or ‘vishing’), securing a full confession. In the cybersecurity world such deceptions are usually known as social engineering: actions that manipulate human judgement and behaviour in order to secure access to systems or data.

Navalny made clever use of typical social engineering tactics. These include using:

  • A credible pretext: he posed as an aide to a real high-ranking official, demanding an oral report for his boss on what went wrong.
  • The Authority principle: he opened the call by name-dropping Russian President Vladimir Putin, who, it was claimed, had authorised the call, borrowing that authority to secure a quick transfer of trust and reducing the likelihood that the target would question or push back.
  • The Urgency principle: he called in the early morning, stating ‘let’s just say I would not call at 7am if this was not urgent’. This aimed to encourage the target to respond quickly and less reflectively.

Social engineer Rachel Tobac goes into further detail in a fascinating Twitter thread.

But a crucial aspect of this story is that Navalny had previously attempted to contact several other members of the assassination team, who had hung up on him: one even recognised who was targeting him. A key security failure in this circumstance came from poor communications amongst the assassins – either a lack of procedure, or a failure to follow procedure.

This mirrors enabling factors sometimes seen in spearphishing and whaling attacks, in which users are compromised after several other members of their social network have also been targeted, but not taken the bait. Success for the attacker depends on poor or fragmented defensive communications across a network: i.e. some targets have correctly identified an attack in progress, but this information has not been shared across the network to other potential targets. Eventually the attacker finds a weak link – and this is all that is needed to compromise the system.

To counter this, defenders need (1) to establish the means to identify a systematic social engineering attack in progress, as early as possible; and (2) to communicate the nature of the threat rapidly and compellingly to all relevant users across the network. This may require establishing collaborative security systems with users so that they become the key eyes and ears – the reconnaissance; a system of crowdsourced intelligence. But it also means being able to act on this information: identifying patterns; judging who needs to know that they might also be targeted so that they do not become the weak link. Finally, credible communication: how do you compete for attention with everything else in the user’s information space so that they really do internalise the information being provided? Will they act appropriately and on time? Effective influence typically depends upon developing a keen understanding of the target audience: communications will need to wrap around users’ motivations, desires, attitudes and beliefs. Are these known and understood?
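
The defensive loop described above can be made concrete with a small sketch. The Python below is purely illustrative, and every name in it is hypothetical (the report structure, the team directory, the notify function): it simply groups user-submitted phishing reports by sender domain and warns the reporters’ colleagues, on the assumption that an attacker who has failed with one member of a network will move on to another.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class PhishReport:
        reporter: str        # user who reported the suspicious email
        sender_domain: str   # e.g. the domain behind the spoofed address
        subject: str

    def correlate_reports(reports, min_reports=2):
        """Group reports by sender domain; a domain reported by several
        users suggests a coordinated social engineering campaign."""
        by_domain = defaultdict(list)
        for r in reports:
            by_domain[r.sender_domain].append(r)
        return {d: rs for d, rs in by_domain.items() if len(rs) >= min_reports}

    def alert_likely_targets(campaigns, directory, notify):
        """Warn the colleagues of everyone who has already reported the
        campaign, since attackers often work through a social network
        one member at a time."""
        for domain, reports in campaigns.items():
            reporters = {r.reporter for r in reports}
            at_risk = set()
            for r in reports:
                at_risk.update(directory.get(r.reporter, []))  # reporter's colleagues
            for user in at_risk - reporters:
                notify(user, f"Phishing campaign in progress from '{domain}': "
                             f"{len(reports)} colleagues have already reported it.")

    # Hypothetical usage:
    # campaigns = correlate_reports(todays_reports)
    # alert_likely_targets(campaigns, team_directory, send_chat_message)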

Advanced Persistent Threats Put Your Users in the Front Line


In the hyperconnected age of the internet, businesses are increasingly drawn into the geopolitical games of nation states. Sophisticated state-sponsored adversaries may pre-position for future cyber conflict in utilities, power, telecoms and technology companies; or execute operations that steal intellectual property or money, or perhaps simply aim to punish criticism of the dear leader. One element these attacks tend to have in common? They exploit the human user to gain access to systems, networks and data: our minds are the gateway.

Many Advanced Persistent Threats (APTs) use now-conventional spearphishing methods. The FIN7 threat group targets and steals payment card data from systems, amongst other operations. In 2017, FireEye reported that FIN7 had sent spearphishing emails to personnel responsible for United States Securities and Exchange Commission (SEC) filings across multiple financial institutions. Messages were sent from a spoofed SEC email address. These were titled ‘important changes to form 10-K’, an actual key item of documentation their targets were likely to have an interest in. Helpfully, the attackers attached a ‘new template’ for this form.
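
As a hedged illustration of how the spoofed-sender element of such an attack can be caught mechanically, the sketch below uses Python’s standard email module to read the Authentication-Results header that receiving mail servers typically stamp on incoming messages. Header contents vary by provider, so treat this as a sketch of the idea rather than a complete control:

    from email import message_from_string
    from email.utils import parseaddr

    def looks_spoofed(raw_message: str) -> bool:
        """Rough check: does the claimed From: domain pass SPF or DMARC
        according to the Authentication-Results header added by our own
        mail server? (Assumes that header is present and trustworthy.)"""
        msg = message_from_string(raw_message)
        _, from_addr = parseaddr(msg.get("From", ""))
        from_domain = from_addr.rpartition("@")[2].lower()

        auth_results = " ".join(msg.get_all("Authentication-Results", [])).lower()
        passed = "dmarc=pass" in auth_results or "spf=pass" in auth_results

        # A message claiming to come from the SEC but failing both checks
        # deserves human scrutiny before anyone opens the 'new template'.
        return bool(from_domain) and not passed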

Chinese state-linked APT41 uses a variety of user-focused deceptions to target intellectual property across a number of industries. Typical spearphishes have spoofed credible messengers such as well-known industry representatives. But APT41 has also made emotional appeals, including targeting Hong Kong Occupy activists during pro-democracy protests with emails titled ‘help’. Making timely use of current events has also become a habit for this APT. For instance, in 2015, prefiguring the types of scams that arrived with Covid-19, APT41 targeted a Japanese media organization with a lure document on ‘Prevention of Middle East Respiratory Syndrome (MERS)’. Again, this targeted fear: respiratory infections and a potential pandemic were salient to targets in the Asia-Pacific region at that time, due to first-hand experience with the SARS and avian flu outbreaks.

The APT Wizard Spider, whose many exploits include the bank-targeting Trickbot malware, adopts a more scattergun approach that uses spam emails. But these are tailored to the audience at a basic level, personalising individual first and organisation names against details harvested from the target email address (a small illustrative sketch of this mechanism follows the breakdown below). An example email:

‘Dear [NAME], I am a new employee in [ORGANISATION]. I will process complaint on you till 2pm. Complaint report #10/13/20 or online preview in PDF [MALWARE LINK]’

This content abuses some key psychological principles. Let’s take each element of the example above:

  • ‘I am a new employee…’ – this alleviates suspicion (‘they’re a newbie: so this is why you don’t recognise their name!’);
  • ‘I will process complaint on you…’ – this stimulates fear and anxiety: your emotions are being targeted, and your reflective, rational thinking that ‘should know better’ may be sidelined;
  • ‘…till 2pm’ – a time limit, even a nonsensical, out-of-context one such as this, introduces a scarcity effect, again inviting an instinctive, non-reflective response.
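
To illustrate how little effort the tailoring described above actually requires, here is a hypothetical sketch of deriving a first name and organisation from nothing more than the target’s email address. Defenders can reproduce the same trick in awareness training to show staff why a message that addresses them by name proves nothing about the sender:

    def personalise_from_address(email_address: str) -> dict:
        """Guess a first name and organisation purely from the address,
        e.g. 'jane.smith@examplebank.com' -> {'name': 'Jane', 'org': 'Examplebank'}."""
        local, _, domain = email_address.partition("@")
        first = local.split(".")[0] or local
        org = domain.split(".")[0] if domain else ""
        return {"name": first.capitalize(), "org": org.capitalize()}

    def fill_lure(template: str, email_address: str) -> str:
        """Drop the guessed details into a lure template of the kind shown
        above -- useful in training to demonstrate how shallow the
        'personalisation' really is."""
        fields = personalise_from_address(email_address)
        return (template.replace("[NAME]", fields["name"])
                        .replace("[ORGANISATION]", fields["org"]))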

But some recent attacks add further degrees of sophistication. The Silence APT targets banks and other financial institutions. Silence sends reconnaissance emails first, which look to users like ‘mail delivery failed’ messages. This stage allows those behind the APT to collect valid email information, which they then use to mimic (spoof) real users within a targeted organisation, exploiting collegial trust to send identified colleagues live phishing emails containing malware.
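
One common defensive heuristic against this kind of colleague impersonation (not specific to Silence) is to flag messages whose display name matches a known member of staff while the underlying address sits outside the organisation’s own domains. A minimal sketch, assuming a hypothetical staff directory:

    from email.utils import parseaddr

    def display_name_spoof(from_header, staff_names, corporate_domains):
        """Flag a From: header that borrows a colleague's name but points
        at an external address. 'staff_names' and 'corporate_domains' are
        assumed to come from the internal directory (lower-cased)."""
        name, addr = parseaddr(from_header)
        domain = addr.rpartition("@")[2].lower()
        return name.strip().lower() in staff_names and domain not in corporate_domains

    # e.g. display_name_spoof('"A Colleague" <a.colleague@freemail.example>',
    #                         {"a colleague"}, {"ourcompany.example"})  # -> True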

The conclusion is that any organisation within an important supply chain, or that owns something worth stealing, is a target for nation state actors, and not only for them. An organisation’s IT users are in the front line. But research suggests that showing individuals the persuasive tricks of professional manipulators makes them subsequently more resilient to those tricks. The question is: do your organisation’s users understand not only what targets them, but why such deceptions so often work?

Covid-19 has left us more vulnerable to cyber attack and data breach: what might your organisation need to change?


You’ve provided all of your IT users with teleworking platforms and now everyone in your organisation is able to work entirely remotely. But the pandemic is keeping you extra busy. Strangers start hacking into sensitive online meetings. Employees complain of chronic stress and fatigue and seem to be making more mistakes than usual. Several employees receive an email purporting to contain an update from HR, and end up clicking links that deliver malware. Some of your employees start using personal devices, emailing documents to themselves and storing important documents on local storage systems. Data goes missing. You’re breaching your GDPR responsibilities and exposing your company to potential legal and financial risk. Confidential information ends up being posted online, which you suspect may be linked to the fact that many of your employees share a workspace with family or flatmates…

Covid-19 has upended our social and professional lives. Following stay at home orders and concerns over safety, our day-to-day interactions have increasingly shifted online, resulting in an even greater reliance on digital communications and teleworking platforms. Moreover, the pandemic and the resulting lifestyle changes have deeply affected our mental states and cognitive processes. This “new normal” offers a target-rich environment for cybercriminals and increases exposure to employee error. Whilst many organisations enable employees to practise strong cybersecurity behaviours, the events of 2020 have changed the context rapidly, potentially leaving some organisations vulnerable. Thousands of businesses have already paid the price for not adapting fast enough.

Have you covered all of your key current user-related cybersecurity vulnerabilities?

Social Machines has created the STEP Framework to help cybersecurity professionals identify and mitigate Covid-19 related vulnerabilities across four key user-centric factors: Social, Technological, Environmental, and Personal:

Social

Humans are driven by relationships, norms, and pressures. Malign actors seek to exploit these traits. Fraudsters frequently seek to manipulate their victims’ trust, and the shift to teleworking has enabled these actors to more easily impersonate friends, family and colleagues, as well as professional authorities such as human resources. Malign actors will frequently use fear and create a sense of urgency in their social engineering approaches. Triggering strong affective responses in their targets can temporarily lower a victim’s ability to detect irregularities (such as typos), suspicious requests, or malicious communications, increasing their vulnerability. One such scam email purported to contain the results of Covid-19 tests; another purported to be from HR, suggesting that the victim might be made redundant due to the pandemic.

Technological

In many cases, companies’ remote working systems and policies have been set up quickly, originally on a temporary basis. Operational risks could include poor data protection systems, which could result in irrecoverable data losses and expensive recovery efforts. Regarding legal risks, employees using their personal devices could facilitate accidental sharing of confidential or personal information, exposing the company to breaches of data protection regulations. Furthermore, workers now rely on a host of different platforms to communicate and do their jobs. Cybercriminals have taken advantage of this, impersonating employees, surreptitiously joining team meetings, and using URLs designed to mimic popular video calling platforms. Many companies have not enabled authentication mechanisms such as two-factor authentication (2FA), or established informal processes for employees to validate suspicious communications, something which previously might have been done face-to-face in the office. These attacks rely on our familiarity with and trust in teleworking platforms, but also on our inability or unwillingness to challenge suspicious communications.
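
By way of illustration of the first of those gaps, below is a minimal sketch of time-based one-time-password (TOTP) verification as a second factor. It assumes the third-party pyotp library and a hypothetical enrolment flow; the point is simply that a phished password alone no longer unlocks the account:

    import pyotp  # third-party library: pip install pyotp

    # Enrolment: generate a per-user secret and a provisioning URI that the
    # employee loads into an authenticator app (usually shown as a QR code).
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="employee@example.org", issuer_name="Example Org VPN")

    # Verification at login: the password check happens elsewhere; the
    # six-digit code from the employee's phone must also be valid.
    def second_factor_ok(user_secret: str, submitted_code: str) -> bool:
        return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)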

Environmental

The need to avoid discussing sensitive information in front of family or housemates provides an additional stressor for employees, particularly younger colleagues, who are more likely to live with flatmates whom they may be less able to trust. Employees’ cyber security habits may also worsen when working from home, and companies may not be providing employees with the data management or cyber security training required for remote working.

Personal

Attackers seek to exploit their victims’ personal vulnerabilities. This can mean taking advantage of lapses in their cognition, for example striking early in the morning or late at night when targets are not fully concentrating. This can be particularly effective at present, given that many employees are working longer hours and may be experiencing additional stress as a result of the Covid-19 pandemic. These conditions also increase the chances of human error. Furthermore, our home workspaces provide more opportunity for distraction, and a lack of supervision may tempt employees to stream online content or browse social media whilst working, increasing the likelihood that they make mistakes or fall victim to social engineering attacks.

The deeper STEP Framework offers an easy-to-use methodology for identifying human factors vulnerabilities caused or exacerbated by the pandemic, helping your organisation mitigate the risks of cyber attack and data breach.

Please contact justinhj@socialmachines.co.uk to find out more.

Gamification as Effective Training to Protect against Socially Engineered Cyber Attacks


The Research Institute for Sociotechnical Cyber Security (RISCS) features our NCSC-funded project: ‘Gamification as Effective Training to Protect against Socially Engineered Cyber Attacks’.

Research Fellows: Justin Jones

‘We are developing an evidence base for a taxonomy that signposts the most effective training and awareness techniques for defending IT users against socially engineered cyber-attack.

People and processes often present the greatest cybersecurity vulnerabilities to organisations. Mechanisms that take advantage of human operator behaviour to compromise cyber security are often described as socially engineered cyber-attacks (e.g. phishing, social network exploitation, waterholing, baiting and others). Defending against socially engineered cyber-attacks has typically focused on educating and training users. Training aims to enhance user protection by increasing user knowledge, including how to behave in ways likely to mitigate their vulnerabilities. But training does not necessarily lead to useful habits, and its effects can fade rapidly. Simulation and the use of games in training (gamification) in particular aim to improve on this by having participants rehearse useful behaviour, often by stimulating users to reflect more deeply upon their own behaviour, as distinct from that of others. Such approaches may be especially important as socially engineered cyber-attack is increasingly tailored around user attributes (which are weaponised as vulnerabilities). Indeed, machine learning techniques may enable such tailored attacks to be performed at scale, rendering redundant the previous view that attackers faced trade-offs between tailoring and scale.

In this context, the key question this research will address is: What types of training will best protect who, against what, when and why? This research will use a systematic literature review and consequent validation exercise in order to create an evidence-based taxonomy. It will be adapted to consider changes in remote working practices, as well as the impact of next generation machine learning techniques. The work will support a more robust foundation for future cyber protection training that will help organisations better optimise employee cyber security behaviours (and cyber risk management)’

This research summary was written by Social Machines Managing Director Justin Hempson-Jones for RISCS.