Application Security

Q&A: Shadow Code Featuring Osterman Research

EPISODE 8: Shadow Code

Shadow Code is a growing phenomenon and security issue for digital businesses, opening up organizations to a variety of cyberattacks via so-called trustworthy code. PerimeterX and Osterman Research conducted a survey and published a report on the preparedness of businesses against Shadow Code, and how to protect against the risk of potential client-side data breaches. PerimeterX cybersecurity evangelist Ameet Naik and principal analyst of Osterman Research Michael Osterman join us to discuss Shadow Code, what it is and why businesses ought to be wary of it. Listen to the full podcast episode here.

Let’s start with the basics. Ameet, what is Shadow Code?

Ameet: Shadow Code is third-party code, such as open source libraries or containers, that is used in modern applications and introduced without a formal approval process or security validation. Now, if you remember Shadow IT from a few years ago, when CIOs were seeing uncontrolled devices and SaaS applications come into the enterprise, they were forced to grapple with how to protect the enterprise and its data when all of this external stuff was coming into the organization. Shadow Code is another version of that, another incarnation if you will, and we are now seeing it with web applications.

If you look at the scripts on a typical website, about 70% of them may be third-party: analytics scripts, payment providers, personalization engines, advertising, and so on. All of this creates an environment on a web application where InfoSec and DevSecOps teams do not have full visibility into what is going on. Anytime you have a blind spot like that, it creates an opportunity for malware. So Shadow Code is, in essence, all the things that enter your application without a formal approval process. You are still responsible for that code, and for what it does once it gets onto your application.
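As a rough illustration of that blind spot (this is our sketch, not a PerimeterX tool), the following snippet can be pasted into a browser console to count how many of a page's external scripts load from third-party origins:

```typescript
// Count the external <script> tags on the current page and report how many
// load from an origin other than the site's own.
const externalScripts = Array.from(document.scripts).filter((s) => s.src);
const thirdParty = externalScripts.filter(
  (s) => new URL(s.src, location.href).origin !== location.origin
);
console.log(
  `${thirdParty.length} of ${externalScripts.length} external scripts are third-party:`,
  thirdParty.map((s) => new URL(s.src).hostname)
);
```

Note that this only sees scripts present in the DOM at the moment it runs; scripts that other scripts load later are exactly the part that is hardest to track.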

Shadow Code seems to be a fairly big issue that few people are aware of. PerimeterX and Osterman Research just published a report on this threat and how businesses are dealing with it. Michael, can you give us some high-level takeaways and statistics from the report?

Michael: Sure. One of the things we found is that in the typical web property, about 70% of the code that runs on it is Shadow Code, yet the estimates we gathered from our survey respondents were far below that. So a lot of website operators, website managers and so forth really don't know how much Shadow Code is running on their sites; they tend to underestimate it pretty significantly. The problem is that they don't have much insight into what is going on with their web properties, they don't really understand how much Shadow Code is there, and they don't understand the implications it has for the organization. Yet at the same time, a lot of them think that it is fairly easy for third-party code to be abused.

We found that a lot of organizations think that it is fairly easy for bad actors to get in, inject a Magecart skimmer, for example, and do some pretty nefarious things. So what we are finding is that there has been some progress since last year's survey. There is more insight and more awareness about what is going on with Shadow Code and about the implications of bad actors doing things on a web property, but there is not nearly enough. We are making slow progress, but we are not nearly where we should be.

That flows well into our next question: What changes have you seen from the 2019 survey? Are there any other changes in the landscape that you have seen since last year?

Michael: Yes, we have seen some. We have seen, for example, greater use of tools that enable organizations to gather data about script activity that seems suspicious or anomalous. We have seen more deployment of tools that collect signals from users' browsers. More organizations can now detect changes or updates that have been made on a website. So there really is some progress, because decision makers are understanding more about the implications of Shadow Code, more about the whole DevSecOps cycle, and how security needs to be a greater part of how this code is developed.

The problem is that they cannot really do without Shadow Code, because it provides so much functionality. Payment processing, data collection and, really, all the things you need to do on a website depend on it. So they have to use Shadow Code, and they are slowly understanding more about it, but they are not there yet. There is some progress from last year's survey in many areas, but just not nearly enough.

You mentioned some of the priorities that businesses need to accomplish through using Shadow Code. But what are some of the biggest consequences that organizations are concerned about right now?

Michael: The most concerning things are digital skimming attacks, Magecart attacks, websites getting hacked and so forth. Supply chain attacks are another major concern.

One of the real issues is the growing trend toward privacy requirements. Things like GDPR and the California Consumer Privacy Act (CCPA) are top of mind, and similar legislation is popping up in places like Brazil, India, Japan and Canada, as well as in a number of US states. Those carry some fairly significant implications, because if data is breached, it is no longer just a matter of fixing the website and repairing the breach; you are potentially liable to a government entity for what could be a crippling fine.

GDPR, for example, allows fines of up to 2% of the previous year's annual global revenue, rising to 4% for more egregious violations, so for a very large organization you are looking at potentially hundreds of millions or even billions of dollars in fines.

British Airways, for example, faced a fine of roughly $240 million for its data breach, so we are seeing some very serious implications. The California Consumer Privacy Act carries with it a pretty significant private right of action. If your data has been breached as a California resident, you don't even have to prove that anything bad happened; the mere fact that the breach occurred can entitle you to up to $750 in statutory damages from the organization that was breached. If a million records were breached, you are looking at very significant liability, upwards of three quarters of a billion dollars. So the penalties associated with these kinds of website attacks and breaches are getting much more serious.

That can be very concerning. So how secure, in general, are websites and web applications today from these kinds of threats?

Michael: Not very. What we are finding is that a lot of organizations just do not have really secure websites.

We asked the decision makers we surveyed whether, if they had to go to their senior management, board of directors and so forth, they could answer the question "Is our website completely secure and protected from a privacy regulation standpoint?" in the affirmative. Only 30% said they could: "Yes, we have got this covered."

70% said that they could not, and some percentage of that 70% said, "We have no clue; there is no way we could do this." So we are seeing that some websites are secure, but most are not, and they are subject to these kinds of attacks, which are difficult to detect.

If you take just a cursory look at the code, you might not see the problem. You have to have the technologies and the processes in place to be able to detect this stuff, and today that is absent in most cases.

Michael, you mentioned GDPR and CCPA. How much progress are websites and web applications making toward compliance with data privacy regulations?

Michael: We are seeing progress. There are, as I mentioned, more tools in place and more ability to detect what is going on in users' browsers. So organizations are really understanding that they need to do something about this, and many of them are. There is certainly an awareness of what needs to be done, even where it has not yet been put into practice, and we are making slow progress toward more secure web environments.

DevOps teams are at the center of this issue since they are the ones to implement this kind of code. What can they do to manage the effects of Shadow Code?

Ameet: Third-party code and open source libraries have been a big boon and a real game-changer for most development teams. They enable faster innovation, they enable more agility, and they let you make your web or app experience a lot richer in a short period of time without reinventing the wheel. The challenge is all the stuff that comes along with that. So as DevOps and DevSecOps teams think about this, it is too late to say, "No, do not go in that direction."

I think we are fairly well down the road in the Shadow Code direction, so it is too late to say no, and I don't think pulling things out, changing things, or not using certain libraries is the answer. The answer is really the "trust-but-verify" model. This is something that worked in the Shadow IT era, when CIOs were unsuccessful in stopping unsanctioned applications from coming into the organization. In the end, they had to figure out a way to coexist with them and to implement the controls necessary to give them visibility and some level of verification. We feel the same approach is going to work really well for Shadow Code: DevOps teams get visibility into what is actually running in the code and what is included, all the way from the development stage to runtime in production. That is the key to living with Shadow Code and taking advantage of its benefits, while still managing the security posture of your web application and organization.

There are also effects of Shadow Code on application security beyond DevOps. How far can these effects reach, Michael?

Michael: What we are finding is that there is not a lot of trust in third parties. We asked the question, "To what extent do you trust your partners not to be the source of security threats in third-party client-side scripts that run on your website?" Only 16%, about one out of six organizations, said, "We trust these third parties completely," and another 54% said, "We mostly trust them."

Comparing that to last year's survey, things look like they got worse: about the same share mostly trusted their partners, but last year 27% said that they trusted them completely. Now, while that decreasing level of trust looks like bad news, we actually interpret it as good news, because it means that website operators are becoming more aware of these issues. They are becoming more aware of the importance of good security around third-party client-side scripts, and of the implications of bad Shadow Code.

This level of increasing awareness means that there is less trust as organizations are progressing toward developing better security models, more properly vetting their third-party providers, and more properly vetting their supply chain partners. So we see over time that this level of trust is going to go up as organizations take a more active role in understanding what is running on their websites and becoming more cognizant of the implications of having bad code running on them.

Let’s talk about the effects of the COVID-19 pandemic. How has this impacted the overall security posture of websites, Michael?

Michael: It really has had an important impact. We asked a question about what organizations have deployed in terms of protecting their websites and protecting themselves against malicious Shadow Code, given the government-imposed lockdowns, and what they would have done in the complete absence of the COVID-19 pandemic. What we found is that today, about a third of organizations have implemented what they consider to be appropriate technologies, processes, best practices, and so forth to protect themselves against malicious Shadow Code. That would have been 47% had the COVID-19 pandemic never occurred. So we found that most organizations would have implemented these technologies and practices to a much greater degree, had they not had to deal with the pandemic.

We also did a survey in early April of this year and asked organizations about how well-prepared they were for the pandemic, the sudden work from home phenomenon, and other effects. We found that only 19% of organizations considered themselves very well-prepared for employees working from home. The vast majority were not, and security teams had to really scramble to implement VPNs and provide laptops and set up the infrastructure, expand the bandwidth, and that took away from initiatives focused on website security. So we have seen website security suffer as a result, and that was compounded by bad actors taking advantage of the pandemic and launching a lot more attacks against websites, as well as doing things like email phishing. But websites have really come under more attack because of the pandemic, because the bad actors know that organizations are certainly more vulnerable during this time.

It makes sense that people would be distracted from certain aspects of security when they are focused on a crisis. So Ameet, what best practices do you recommend to alleviate these kinds of problems?

Ameet: Living with Shadow Code is a new normal, a new reality for DevOps, DevSecOps and InfoSec teams right now. The key is to establish a trust-but-verify model, where you allow the innovation and the agility that is provided by Shadow Code but, at the same time, you have the visibility to know what is going on and the ability to stop bad things when there is a negative impact from Shadow Code.

A few foundational best practices, like better collaboration and communication between InfoSec and development teams, are really key.

I was surprised last year, when a lot of organizations were hit with Magecart attacks, that when we talked to several InfoSec teams, many were not fully aware of how much Shadow Code they had or how many third-party scripts were running on their web applications. I think communication is the key to a more effective process. For first-party scripts, code and applications, definitely shift security left: address security earlier in the dev cycle, do static analysis, do software composition analysis. That is going to go a long way toward preventing threats from entering the application pipeline. But this only helps with first-party code; it does not help you with third-party suppliers, because those scripts run directly in your users' browsers while being served by third parties.
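To illustrate the software composition analysis step Ameet mentions for first-party code, here is a minimal sketch of ours (not a PerimeterX tool; it assumes a Node.js project with a package.json in the working directory) that inventories declared dependencies, the list that is then checked against vulnerability databases:

```typescript
import { readFileSync } from "fs";

// First step of software composition analysis: inventory the third-party
// packages a project declares, so each can be vetted against
// known-vulnerability databases (for example via `npm audit`).
const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps: Record<string, string> = {
  ...pkg.dependencies,
  ...pkg.devDependencies,
};

for (const [name, version] of Object.entries(deps)) {
  console.log(`${name}@${version}`);
}
```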

For the third-party scripts themselves, there are a few tools you can use, like Content Security Policy (CSP), which is a great way to put some restrictions on where scripts can be loaded from and what domains a page can communicate with, and to get some level of security.
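As a minimal sketch of the idea (the CDN and API hosts below are hypothetical placeholders, not recommendations), a CSP header set on a Node.js server might look like this:

```typescript
import { createServer } from "http";

// A restrictive Content-Security-Policy: scripts may only load from our own
// origin plus one vetted CDN, and the page may only talk to our own API.
// Replace the hosts with whatever third parties your site actually uses.
const CSP = [
  "default-src 'self'",
  "script-src 'self' https://cdn.example.com",
  "connect-src 'self' https://api.example.com",
].join("; ");

createServer((req, res) => {
  res.setHeader("Content-Security-Policy", CSP);
  res.setHeader("Content-Type", "text/html");
  res.end("<html><body>Hello</body></html>");
}).listen(8080);
```

Any script that something tries to load from a host outside the script-src list is then refused by the browser.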

The challenge with CSP is that it is not sufficient to cover all of the threats; if there is a first-party compromise, CSP is not going to help you. Then there is a new category of solutions, called runtime client-side application security solutions, that can analyze all the code as it runs in the end user's browser, and can profile, baseline and analyze the behavior of that code and detect when it changes. That is what you need to pay attention to: watch out for changes in code behavior and unauthorized changes to the DOM, and watch out for communication with domains of lower reputation, as those are the things that can signal a potential threat entering the application. So a trust-but-verify model is really going to go a long way in helping the industry manage the Shadow Code risk.
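Those runtime solutions are commercial products, but the DOM-monitoring part of the idea can be sketched with the browser's standard MutationObserver API (the allowlist here is a hypothetical placeholder):

```typescript
// Watch the page for dynamically injected <script> elements and flag any
// whose origin is not on an allowlist of expected script sources.
const ALLOWED_ORIGINS = new Set([location.origin, "https://cdn.example.com"]);

const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of Array.from(mutation.addedNodes)) {
      if (node instanceof HTMLScriptElement && node.src) {
        const origin = new URL(node.src, location.href).origin;
        if (!ALLOWED_ORIGINS.has(origin)) {
          // A real product would report this signal for analysis rather
          // than just logging it.
          console.warn("Unexpected script injected from", origin);
        }
      }
    }
  }
});

observer.observe(document.documentElement, { childList: true, subtree: true });
```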

For more statistics and how to protect against Shadow Code, download the whitepaper, Shadow Code: The Hidden Risk to Your Website.
