Breaking New Ground on Cyberdefenses
Alliance Attempts to Advance the Science of Cybersecurity
The Army Research Laboratory is collaborating with five universities to develop what's being characterized as a new science to detect, model and mitigate cyber-attacks.
The collaborative research alliance with Carnegie Mellon University, Indiana University, Penn State University, University of California at Davis and University of California at Riverside will give investigators the chance to advance the theoretical foundations of a "science of cybersecurity," says Ananthram Swami, the project's manager. "Such a science will eventually lead to network defense strategies and empirically validated tools."
Researchers will focus on four areas:
- Detecting adversaries and attacks in cyberspace;
- Measuring and managing risk;
- Altering the computing environment to prevent cyber-assaults while holding down costs; and
- Developing models of human behaviors and capabilities that enable understanding and predicting motivations and actions of users, defenders and attackers - and integrating these models into the first three areas.
"The understanding and operation of complex, heterogeneous Army battlefield networks in the presence of unrelenting cyber-attacks is a formidable challenge for the Army's scientific and network operations communities," says John Pellegrino, director of the Army Research Laboratory's computational and information sciences directorate.
Though the project is primarily designed to protect military systems, the results of the five-year, $23.2 million initiative should have wider application. "I expect the scientific knowledge that results from this project to be directly applicable to civilian cyberdefense as well," says Lorrie Cranor, director of Carnegie Mellon University's CyLab Usable Privacy and Security Laboratory and CMU's lead investigator on the initiative. "Across all industries, cyberdefenders are trying to understand their attackers, anticipate attacks and reconfigure their systems to withstand attack."
CMU's research on the project will focus on psychosocial activities. "The psychosocial work cuts across all of the research areas that are part of this collaboration: detection, agility and risk," Cranor says. "By developing models of humans - both attackers and defenders - we will be able to better anticipate and respond to attacks."
Cranor says her team will test user interfaces to see how well they succeed in supporting cyberdefenders as they do their jobs.
Patrick McDaniel, a Penn State University computer science and engineering professor who serves as the project's lead investigator, provides the following example: A soldier photographs a disreputable-looking person in the field of combat. The soldier wants to send the image to intelligence resources to determine if the person poses a threat. The enemy's cyber-objective is to stop, alter or slow down the transfer of the image and the subsequent analysis back to the soldier.
"We would like to be able to see that the image gets to its intended destination and if the pathway is attacked, if the network is attacked, we want to be able to reorganize so the information can flow both ways," McDaniel says in a statement announcing the initiative. "We want to be able to make decisions to drive attackers to a state of ineffectiveness. If a network or computer is under attack, we want to be able to assess the situation, make decisions and alter the environment to prevent the attack from being successful."
Researchers from Indiana University will concentrate on the risk component of the project by developing repeatable, testable methods to assess online risk. Indiana researchers will work with the detection team to assess the state of the attackers, determine what motivates them and identify their goals and abilities.
Diagnosing Intrusion Detection
Another team, headed by Srikanth Krishnamurthy, a computer science professor at the University of California at Riverside, will employ an approach called diagnosis-enabled intrusion detection to identify threats and their influence on an organization's mission. By collecting large amounts of data from humans, sensors and applications, Krishnamurthy says, the team hopes to extract behavioral models from that information and map them onto existing models, distinguishing normal behavior from attacks with high accuracy even in a highly volatile environment.
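To illustrate the general idea behind this kind of behavior-based detection - learning what "normal" looks like from observed data, then flagging activity that deviates from it - here is a minimal sketch. The feature names, sample values and threshold are illustrative assumptions for this example, not details of the research project's actual models.

```python
# Minimal sketch of behavior-based anomaly detection: learn a baseline
# from observations of normal activity, then flag readings that deviate
# strongly from it. All names and numbers here are illustrative.
import statistics

def build_baseline(normal_samples):
    """Return (mean, stdev) of a numeric behavioral feature,
    e.g. requests per minute observed from a sensor or application."""
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    learned mean as potential attack behavior."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Usage: traffic rates observed during normal operation ...
baseline = build_baseline([40, 42, 38, 41, 39, 43, 40])
print(is_anomalous(41, baseline))   # a typical reading
print(is_anomalous(90, baseline))   # a sudden spike
```

Real systems of the kind described would combine many such features and far richer models, but the core loop - fit a model of normal behavior, then measure how far new observations fall from it - is the same.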
"The project bridges various disciplines, such as security, software engineering, networking and modeling of human behaviors to forge a holistic view of cyberthreats and their impact," Krishnamurthy says. "This holistic view will lead to new ways of proactively preventing attacks and allow the rapid reactive reconfiguration of the system at large, in the wake of an attack."
Another $25 million in funding could be made available to extend the collaborative research for another five years.