
How We Conduct Research and the Complexity of Moving Target Defense


Research is defined as the systematic investigation into and study of materials and sources to establish facts and reach new conclusions. Here’s how we do it at PerimeterX.

Usually, we start with a “research question” - What do we want the research to answer? What type of new data and knowledge do we want to find? An example of a research question is “How have economic, political and social factors affected patterns of homelessness in San Francisco over the past ten years?” In our world of cybersecurity, a research question might be “How does a shoe bot perform its operation during a shoe launch?” The question should be currently unanswered, not too specific, but not too broad. Most importantly, when researching a threat, such as attacks that affect hundreds of users and websites, the question should be action- or problem-oriented. We should ask: does answering the question allow us to act to solve the problem?

The next stage is defining hypotheses. According to the Oxford dictionary, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” These are statements or propositions about the possible outcomes of the research. When we at PerimeterX define hypotheses, we base them on one of the following:

  1. Common sense/logic - the building blocks of our rational thought.
  2. Past experience and knowledge - previous research we have conducted or experience we have in the field. Researchers use either their prior personal knowledge or the knowledge of a colleague. We share all of our past research internally and use it as a basis for other investigations and hypotheses.
  3. Other studies - often, we look to external studies to verify or disprove an existing theory related to our research question.

After we’ve defined the hypotheses, we need to prioritize them and set a time limit by which we either drop the research or reach a result for each hypothesis. This is one of the more complex parts for us researchers in the cybersecurity world, since it requires us to predict how long the work will take when we might not even know how many leads we have or how big the issue is. However, we live in a fast-changing world, and when we talk about threat intelligence, this research affects the detection mechanisms of our products. For the most part, we need to have an answer within a given time, and that requires researchers to adjust as they go.

One of the methods we use to make prioritization and timing easier is as follows. While defining and prioritizing the hypotheses, we build a small dataset under laboratory conditions, or use control groups, so that we have some prior data on each theory. Another method is peer review, in which one of our peers goes over our hypotheses and priorities and helps re-prioritize based on their knowledge.

After we've defined and prioritized the hypotheses, we start the research itself, which is designed to verify or disprove the theories. Here we hit another very intricate part of our type of research: laboratory data usually does not expose all the edge cases that exist in the real world. Different kinds of web architectures exist, the Internet changes at a high pace, attackers' effect on real traffic varies greatly, and our own products change the customers' traffic as well. Keeping all this in mind, we aim to produce high-quality research by creating a rich dataset to test on. This can be challenging for multiple reasons: the variety of customers from different industries, whose data varies widely, and the different attack vectors that can occur, from account takeover (ATO) to digital skimming. Timing is also a significant factor: data should not be limited to a specific point in time but collected over as long a time span as possible to avoid anomalies in traffic.

One major challenge of threat research is deciding whether to pursue a lead and initiate a research project. Research, by nature, has a tendency to linger as we discover more findings. For example, when examining the dataset in depth, we often discover new attack mechanisms that our hypothesis did not take into account. Then the question arises: should we include these new hypotheses in the current research? Is this a completely new research project? These kinds of questions arise as part of the core research work and can impact the time to market of our solution.

Another challenge is to answer the question: can the result of this research have a long-term impact on our clients, or will it be a solution that an attacker easily bypasses? If so, how long will it take to create a solution that is robust enough? These questions are important to answer, as the attackers of 2020 are sophisticated, quick learners who adjust rapidly to changing technology.

In order to make sure the research stays on track, we typically apply one of the following techniques:

  1. Assign a dedicated person to oversee the research work. This cannot be the same person doing the actual research; it can be a manager or an experienced researcher. They serve as both a rubber duck and a guide for the researcher in day-to-day work.
  2. Create predefined milestones for the research. These milestones serve as checkpoints for the researcher and the manager to make sure the research is on track, and they can be readjusted as the research continues and more findings are discovered. For example, after we’ve tested the first hypothesis, new findings might arise, and that checkpoint is a great time to look ahead and re-evaluate future milestones if necessary.
  3. Create a dedicated milestone that serves as a go/no-go gate. This milestone should include all the relevant stakeholders in the research to make sure the decision is holistic. The gate should take into account several key aspects: how many customers will benefit from a solution to this problem, how often the problem happens and how complex the solution will be.

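To make the go/no-go gate concrete, here is a minimal sketch of how such a decision could be scored. The three factors mirror the aspects listed above; the function name, weights and threshold are illustrative assumptions for this post, not PerimeterX's actual process.

```python
def go_no_go(customers_affected: int, incidents_per_month: float,
             solution_complexity: int, max_complexity: int = 5) -> bool:
    """Return True ("go") when the expected benefit outweighs the build cost.

    All weights and the threshold below are hypothetical examples.
    """
    # Benefit grows with reach (how many customers) and frequency (how often).
    benefit = customers_affected * incidents_per_month
    # Cost grows with solution complexity, normalized to 0..1.
    cost = solution_complexity / max_complexity
    # Require benefit per unit of cost to clear an (assumed) threshold.
    return benefit / max(cost, 1e-9) >= 100

# A broad, frequent problem with a simple solution clears the gate;
# a rare, narrow problem needing a complex solution does not.
print(go_no_go(customers_affected=40, incidents_per_month=5, solution_complexity=2))   # True
print(go_no_go(customers_affected=2, incidents_per_month=0.5, solution_complexity=5))  # False
```

In practice the stakeholders in the gate meeting would weigh these factors qualitatively; the point of the sketch is only that reach, frequency and complexity pull the decision in predictable directions.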
Once the datasets have been analyzed and the research has been completed, it is time to apply the new knowledge in the real world. To measure the impact of our conclusions, two key metrics are useful: False Positives (FP) and False Negatives (FN). For example, if our research discovered a new way to detect an ATO attack, an FP would be identifying an attack that did not occur, while an FN would be missing an attack that did occur. Defining the balance between these two metrics is a delicate and often time-consuming process, especially when customers have different traffic patterns and FP/FN sensitivity levels. Striking the balance between optimizing the FP/FN ratio and the time-to-market of our changes is difficult, and there is no single right answer on how it should be done. Nonetheless, there are a handful of useful guidelines. First, define what will be counted as an FP and what will be counted as an FN for the conducted research. Second, define a range of acceptable FP/FN values rather than sticking to precise numbers. The important thing is to strive for a solution sufficient for most of the relevant customers; the remaining roughly 10% with extreme edge cases should get dedicated time for special optimization and fine-tuning.
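Measuring these two metrics against labeled traffic can be sketched in a few lines. This is an illustrative example, assuming a ground-truth label per request; the function and variable names are invented for this post.

```python
def fp_fn_rates(predicted: list[bool], actual: list[bool]) -> tuple[float, float]:
    """Return (false-positive rate, false-negative rate) for a detector."""
    # FP: we flagged an attack, but the request was benign.
    fp = sum(p and not a for p, a in zip(predicted, actual))
    # FN: a real attack that the detector missed.
    fn = sum(a and not p for p, a in zip(predicted, actual))
    negatives = sum(not a for a in actual)  # benign requests in ground truth
    positives = sum(actual)                 # real attacks in ground truth
    return fp / negatives, fn / positives

# Toy labeled dataset: detector output vs. ground truth.
predicted = [True, True, False, False, False, True]
actual    = [True, False, False, False, True, True]
fpr, fnr = fp_fn_rates(predicted, actual)
print(f"FP rate: {fpr:.2f}, FN rate: {fnr:.2f}")  # FP rate: 0.33, FN rate: 0.33
```

The "range, not a precise number" guideline then becomes a simple check such as `fpr <= 0.05 and fnr <= 0.10` per customer tier, with the thresholds chosen to match each tier's sensitivity.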

To conclude, researching and creating a moving target defense for customers can be quite complex, but by following some of the methods we use at PerimeterX, you can get more out of your research.

Examples of the threat research we conduct at PerimeterX can be found in the following published research blogs:
