Far too many breaches occur due to the all-too-effective practice of social engineering. Just look at the recent breach at SANS, an organization renowned for its security training. It proves that any organization can find itself on the receiving end of a successful social engineering attack: because attackers target people rather than technical assets, no organization is impenetrable.
Too often, social engineering attacks go unreported because they can be an extremely personal issue for those affected. Security teams may end up unintentionally shaming recipients – and as a result, employees may be reluctant to admit they were compromised for fear of retribution. Being socially engineered can carry a stigma and have significant psychological ramifications. An employee who comes forward only to be chastised may be even less willing to report future attempts. Traditional security organizations don’t typically consider this aspect of social engineering when building a security program. Consequently, dealing with the human element – how to assuage the anxieties of employees who have been targeted – usually gets left out.
Organizations that do have some social engineering protections in place usually stop at step one: adding social engineering to the scope of a security assessment. They then rely on that annual or semi-regular benchmark, in some instances without considering training and education programs, security controls, processes for identifying, containing, and eradicating a threat, and many other factors.
In this piece, we’ll review several things organizations can get wrong about social engineering and ways that they can reduce their risk of being compromised by it.
Security compromises can occur through a variety of means; targeting people is one of the more successful routes, as attackers often view an organization’s people as its “weakest links.” Meanwhile, most organizations invest heavily in technical controls and fail to think about their “soft” targets – i.e., their employees and contractors. Humans can be manipulated, want to be “team players” for their employer, and often genuinely wish to help others. Attackers exploit this behavior with bait, such as urging employees to provide private details for a work initiative or to click links that promise to help their team or company. In 2020, almost a quarter (22%) of all attacks included a social engineering component, with 40% of malware installed via malicious email links and 20% via email attachments (according to the 2020 Verizon DBIR report).
Additionally, financially motivated social engineering is on the rise: as organized criminal groups become more technically capable and proficient, their techniques and methodology advance as well. A timely example is some of the ransomware groups that have emerged in recent years. These groups often run their operations like nation-states or mature cybersecurity firms, using techniques reminiscent of APTs. Recently, a Tesla employee found himself the subject of not just social engineering (the attacker had previously attempted to get to know him), but a $500,000 bribe in return for installing ransomware on Tesla’s network. Had the attack succeeded, the attacker could have made millions from it. The Tesla case may in fact point to a new pattern in social engineering.
Because of the prevalence and evolution of social engineering attacks, organizations must address them in their incident response plans – and do their best to protect their employees from exploitation.
In my 15-plus years of security experience, I’ve seen that most organizations focus on reacting rather than adopting a proactive approach. This isn’t an unreasonable mindset – being reactive is a natural response to how security is treated right now in our industry. There’s a race to disclose new vulnerabilities, and organizations are left scrambling to remediate those risks. The downside, however, is that the lack of thorough planning leaves organizations unnecessarily at risk. After detecting a potential compromise, they engage their incident response functions to contain and eradicate any attacker in their network – but they often neglect to conduct an after-action review (AAR) of what happened, why it happened, and how to prevent it in the future.
Many organizations also overlook building robust social engineering assessment programs to proactively test their email, network, and endpoint controls as well as their employees. This can happen for a number of reasons, such as organizational maturity, the security team’s level of experience, or a lack of knowledge of how attackers operate. One solution is to build comprehensive, full-spectrum security strategies that account for both people and technology. Security depends on both, not simply one or the other.
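As a rough illustration of what the email-focused slice of such an assessment program could look like, here is a minimal sketch of an authorized internal phishing simulation built on Python’s standard library. The relay host, sender address, recipient list, and tracking URL are hypothetical placeholders; a real exercise would also need management sign-off, consent handling, and proper storage for the token-to-recipient mapping rather than a print statement.

```python
# Minimal sketch of an authorized internal phishing simulation.
# The relay, addresses, and tracking URL are hypothetical placeholders.
import smtplib
import uuid
from email.message import EmailMessage

SMTP_HOST = "mail.example.internal"                    # assumed internal mail relay
SENDER = "it-notices@example.com"                      # assumed simulation sender
RECIPIENTS = ["alice@example.com", "bob@example.com"]  # consenting test group

def build_message(recipient: str) -> EmailMessage:
    """Build a benign training email carrying a per-recipient tracking token."""
    token = uuid.uuid4().hex
    msg = EmailMessage()
    msg["From"] = SENDER
    msg["To"] = recipient
    msg["Subject"] = "Action required: confirm your benefits enrollment"
    msg.set_content(
        "Please confirm your enrollment here:\n"
        f"https://awareness.example.internal/t/{token}\n"
    )
    print(f"{token},{recipient}")  # record the mapping so clicks can be attributed later
    return msg

def run_campaign() -> None:
    # Send one tracked message per recipient through the internal relay.
    with smtplib.SMTP(SMTP_HOST) as smtp:
        for recipient in RECIPIENTS:
            smtp.send_message(build_message(recipient))

if __name__ == "__main__":
    run_campaign()
```

The value of even a script this small is twofold: it tests whether your email controls flag or strip the message, and it gives you a click rate you can track over time – results that should feed education, not punishment.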
Educating employees on social engineering techniques such as phishing, vishing, pretexting, and other variants is important. But the upstream processes that define when and how an employee should report are just as key. You want to remove all fear from an employee’s mind about reporting a suspected social engineering attempt. Additionally, you’ll want to ensure that your strategy doesn’t penalize users who have fallen for a social engineering attack, but instead educates them.
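One low-friction pattern – offered here as a sketch, not a prescription – is a dedicated reporting mailbox that employees simply forward suspicious messages to, paired with a small triage helper on the security side. The host, account name, and environment variable below are hypothetical, and a production version would pull credentials from a secrets manager and write to a case-tracking system instead of stdout.

```python
# Sketch of a triage helper for a dedicated phishing-report mailbox.
# Host, account, and credential handling are hypothetical placeholders.
import email
import imaplib
import os

IMAP_HOST = "imap.example.com"                      # assumed mail server
ACCOUNT = "phishing-reports@example.com"            # assumed reporting mailbox
PASSWORD = os.environ["REPORTS_MAILBOX_PASSWORD"]   # never hard-code credentials

def triage_new_reports() -> None:
    """List unread reports so the security team can follow up with the threat, not the reporter."""
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(ACCOUNT, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")       # only messages not yet triaged
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            print(f"reported by: {msg['From']} | subject: {msg['Subject']}")

if __name__ == "__main__":
    triage_new_reports()
```

The mechanics matter less than the message they send: reporting is a forward-and-forget action for the employee, and everything that follows happens on the security team’s side.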
Every organization will have its own definition of what an acceptable level of risk looks like. That risk level then becomes the foundation for security decisions and investments. Beyond employee training and education, organizations will want to focus on getting the basics right, ensuring there are layers of controls in place that keep them resilient even if their users are socially engineered.
Some examples include the following:
More advanced examples based on the maturity of an organization’s defensive posture are:
Even if you do all of the above, there is still a very real (if somewhat reduced) chance that your organization will be compromised via social engineering. Should that happen, leverage your incident response plan to find out exactly what transpired – and determine what to do going forward to avoid similar issues. And avoid shaming your employees. As recent events have clearly illustrated, nearly any organization’s staff can find themselves the target of social engineering attacks.