Preventing Magecart Attacks: Content Security Policies
January 8, 2020
Client-Side Risks and CSPs
Magecart and digital skimming attacks are among the biggest threats to online businesses, resulting in the theft of customers’ personally identifiable information (PII), including credit card numbers, from e-commerce sites. Such data breaches hurt customer confidence, reduce revenue and expose the business to brand damage, regulatory penalties and lawsuits. Content Security Policies (CSPs) have been floated as a solution to this problem, but are they really feasible or effective? In this series of blog posts, we will take a closer look at CSPs, explain how they work and discuss how they can help protect the client side of your web applications.
Today, most websites use third-party components to enrich their customer experience while the site owner focuses on the core business. Services like live chat, ads, analytics and payments are typical third-party JS components that are integrated into today’s modern websites. In fact, up to 70% of the scripts on a website can be third party. Moreover, even first-party scripts make extensive use of open-source libraries or platforms to deliver functionality at the speed of DevOps. One such platform is Magento, which is used to build e-commerce websites and is the origin of the name “Magecart.”
What Is a CSP?
Cross-site scripting (XSS) emerged as a threat to websites over a decade ago. In an XSS attack, a malicious piece of JS code injected into a web page loads another script from an attacker-controlled domain. In response to this threat, the W3C, the consortium that defines standards for the web, came up with CSPs as a way to limit where JS can load from and what it can do. A CSP is sent to the browser in the response header for each page load, and compliant browsers are then responsible for enforcing it.
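To make the delivery mechanism concrete, here is a minimal sketch of a server attaching a CSP to its responses. The handler class, policy value and domain are illustrative assumptions, not taken from the original post:

```python
# Minimal sketch: attaching a Content-Security-Policy header to every
# response. The policy value and cdn.example.com domain are placeholders.
from http.server import BaseHTTPRequestHandler

CSP_VALUE = "default-src 'self'; script-src 'self' https://cdn.example.com"

class CSPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)
        # Compliant browsers read this header and enforce the policy
        # for the page being served.
        self.send_header("Content-Security-Policy", CSP_VALUE)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Because the policy rides on every page load, it must be emitted by whatever layer serves the HTML, which is often a CDN or reverse proxy rather than the application itself.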
CSPs aim to defend against unauthorized content injections on the client side. Web developers and their infosec peers write CSPs to define permissions for page assets – specifically, which network resources the page is allowed to access.
To support that, CSPs come with a set of capabilities or directives that allow the following usage:
- Whitelist the domains that content can be fetched from (code, images, fonts, iframes)
- Whitelist the domains that scripts on the page can communicate with (XHRs, fetch, beacons, WebSockets, etc.)
- Restrict form actions and URI targets
- Force client-side security policies on the browser (such as redirect HTTP to HTTPS)
- Report violations to a logging server
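The directives above can all be combined into a single header value. The sketch below assembles one policy string that exercises each capability in the list; all domains are hypothetical placeholders:

```python
# Sketch: assembling a CSP value covering each directive class listed
# above. Every domain here is a placeholder, not a real endpoint.
directives = {
    # where code, images, fonts and frames may be fetched from
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://scripts.example.com"],
    "img-src": ["'self'", "https://images.example.com"],
    "font-src": ["'self'"],
    "frame-src": ["https://widgets.example.com"],
    # where scripts may open network connections (XHR, fetch, WebSockets)
    "connect-src": ["'self'", "https://api.example.com"],
    # restrict where forms may submit
    "form-action": ["'self'"],
    # upgrade HTTP sub-resource requests to HTTPS
    "upgrade-insecure-requests": [],
    # send violation reports to a logging endpoint
    "report-uri": ["https://logs.example.com/csp"],
}

def build_csp(d):
    """Serialize a directive map into a single header value."""
    parts = []
    for name, sources in d.items():
        parts.append(name if not sources else f"{name} {' '.join(sources)}")
    return "; ".join(parts)

csp = build_csp(directives)
print(csp)
```

Even this toy policy runs to several hundred characters for a handful of domains, which hints at why real-world policies like the one in Figure 1 below grow so unwieldy.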
Figure 1: Example CSP from www.twitter.com. [Credit: securityheaders.com]
Practical Challenges with CSPs
CSPs are a useful tool for protecting web applications against client-side vulnerabilities. They can be highly effective with controlled applications like e-banking portals, which are relatively static and less likely to have third-party code. However, they can be complex to configure and maintain. Here are some of the challenges with CSPs.
- Change Management: A modern e-commerce site is a complex mesh of first- and third-party scripts. Continuous Integration/Continuous Delivery (CI/CD) methodologies mean that updates are posted frequently – sometimes as often as every hour. Combined with dozens of third-party vendors who themselves follow CI/CD practices, maintaining whitelists of all the necessary domains they communicate with becomes a monumental task. As seen in Fig. 1 above, even for a well-managed website like twitter.com, which doesn’t include third-party components like banner ads or live chat, the policy can be very complex.
- Lack of Visibility: Infosec teams do not always have full knowledge of all the third-party components in web applications that run during a page load within the user’s browser. Some of these components are loaded selectively at runtime and may be invisible to external scanners. Infosec teams no longer have the luxury of saying no to third-party components or enforcing hard policies that can cause applications to break. They need to enable business agility while safeguarding customer data and remaining compliant with regulations.
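One standard CSP feature that speaks to this tension (a general capability of the spec, not something the post describes) is report-only mode: the same policy can be shipped under the `Content-Security-Policy-Report-Only` header, so browsers report violations to the logging endpoint without blocking anything. A small sketch, with a placeholder policy and logging domain:

```python
# Sketch: choosing between enforcing and report-only delivery of the
# same policy. The policy value and logging domain are placeholders.
policy = "default-src 'self'; report-uri https://logs.example.com/csp"

def csp_headers(enforce: bool):
    """Return the header dict for enforcing or report-only mode."""
    name = ("Content-Security-Policy" if enforce
            else "Content-Security-Policy-Report-Only")
    return {name: policy}

# Report-only: browsers log violations but do not block resources,
# so infosec gains visibility without risking application breakage.
print(csp_headers(enforce=False))
```

Running in report-only mode first is a common way to discover the third-party components actually loading in users' browsers before switching to enforcement.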
- Compromised Hosts: CSPs offer no guarantee that all the whitelisted hosts are trustworthy. A recent study found that almost 76% of CSP whitelists contained at least one unsafe host. The larger the whitelist, the larger the attack surface.
Protecting the Client Side
While most organizations have secured the server-side components of web applications, the client side remains exposed. Attackers will continue to find creative ways to inject malicious code into web applications and compromise user data on the client side. CSPs are a useful tool in the infosec toolbelt to help secure this attack surface; however, they are not a universal solution. Creating and maintaining these policies requires tight integration between infosec and application development teams. Also, maintaining whitelists for third-party applications requires vendor support and update mechanisms to ensure that these applications do not break.
The best approach to protecting the client side is using real-time behavioral analysis of every single script on every single browser instance. This approach can detect anomalous behavior such as a script communicating with a known malicious domain or accessing form fields that are outside its scope. Infosec teams can gain real-time visibility into all script activity on the client side while also enabling the application development teams to remain agile and tap into the growing and necessary third-party ecosystem.
In the next blog post in this series, we will take a closer look at how to configure CSPs and share a few best practices. Stay tuned!