Decoding Bad
Frank Greitzer ’68 employs cyber and behavioral monitoring to help identify threatening behavior.
How do you spot the government employee who is about to sell state secrets, the worker who plans to loot his company, or the employee plotting to harm his co-workers?
One answer may be cyber surveillance, widely used by corporations and the federal government. Such efforts monitor vast quantities of cyber data, looking for evidence of crime. But Frank Greitzer, who did research on decision making and intelligence/counterintelligence analysis at the Department of Energy’s Pacific Northwest National Laboratory, doubts that cyber surveillance by itself will work.
“If you focused on cyber data alone, you soon would be overloaded with information,” says Greitzer ’68, a mathematics alumnus whose undergraduate preparation included a fair dose of psychology courses. His passion for the topic led him to combine the mathematical and physical sciences with the behavioral sciences through the mathematical psychology program at UCLA, where he received his PhD. His applied research on human behavior, information processing and decision making spans more than 40 years. His peer-reviewed publications include about 20 papers in scientific journals and books and more than 60 papers for technical conferences.
“Using traditional cyber monitoring methods, by the time you find evidence of an insider threat, the attack would most likely already have occurred,” Greitzer says.
He favors a more comprehensive approach that would also identify unusual patterns of employee behavior that might indicate wrongdoing is in the works. Greitzer has long been interested in the interplay between psychology and online behavior. In 2010, while researching behavioral factors in insider threat, he came across a series of papers by researchers in the fields of personality and social psychology. These papers examined the intersection of language, personality and behavior. The researchers studied samples of informal writing in social media and found that subtle but meaningful differences in the use of common words provide clues to an individual’s psychological state. “This research suggested to me that a statistical analysis of certain word categories—such as negative, angry or profane words—may reveal attitudes or personality traits that might indicate a higher risk of committing insider crimes,” Greitzer says.
Behind cyber attacks, terrorism or espionage, there is a human being. I try to characterize that behavior so that we have a chance of identifying suspicious activity before something happens.
– Frank Greitzer ’68
He has theorized that companies or government agencies might be able to spot potential troublemakers by analyzing word use in employee emails. Importantly, such scans wouldn’t examine the content of the emails, only word frequencies. The appeal of this “psycholinguistic approach” to threat detection is that it is less intrusive, since the semantic content of messages stays private, and it rests on firmer legal ground because organizations own the emails employees generate on corporate email systems, Greitzer says.
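To make the idea concrete, here is a minimal illustrative sketch, not Greitzer’s actual method, of how word-category frequencies might be computed from a message. The word lists, baseline rates and threshold below are invented placeholders; a real system would use a validated psycholinguistic lexicon (such as LIWC) and statistically derived baselines, and, as the article notes, would keep only the frequencies, not the message content.

# Illustrative sketch only: compute how often words from a few
# psycholinguistic categories appear in a message, relative to its length.
import re
from collections import Counter

# Hypothetical, hand-built word categories for illustration only.
CATEGORIES = {
    "negative": {"hate", "awful", "useless", "worthless"},
    "anger": {"furious", "rage", "outraged"},
    "profanity": {"damn", "hell"},
}

def category_rates(text):
    """Return each category's share of the message's total words.
    Only frequencies are computed; the message content is not retained."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    counts = Counter(words)
    return {
        name: sum(counts[w] for w in vocab) / total
        for name, vocab in CATEGORIES.items()
    }

# Flag categories whose rate is well above an (invented) baseline rate.
baseline = {"negative": 0.01, "anger": 0.005, "profanity": 0.002}
rates = category_rates("This project is useless and I am furious about this awful place.")
flags = [c for c, r in rates.items() if r > 3 * baseline[c]]
print(rates, flags)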
Nevertheless, he warns that there may be innocent explanations for seemingly odd behavior.
“If someone starts working odd hours, begins accessing parts of the network he or she doesn’t normally access and exhibits concerning behaviors or possible personality issues, this doesn’t tell you that he or she is doing something wrong,” he says. “It might turn out that an employee is working odd hours to meet a deadline.” Behavioral or personality issues might likewise be explained by things going on outside the job, such as serious health problems. But such risk indicators do suggest there is something that ought to be looked at more closely.
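As a purely hypothetical illustration of that point, and not a model Greitzer has proposed, several weak indicators might be combined into a “look closer” score that triggers human review rather than an accusation. Every field name, weight and threshold below is an invented placeholder.

# Hypothetical illustration only: weak indicators combine into a score
# that prompts human review, not a conclusion of wrongdoing.
from dataclasses import dataclass

@dataclass
class Observation:
    off_hours_logins: int       # logins well outside the employee's usual hours
    unusual_area_accesses: int  # accesses to network areas not normally used
    language_spike: bool        # word-category rates well above the person's baseline

def review_score(obs):
    # Toy weighted sum; the weights and caps are arbitrary placeholders.
    score = 0.3 * min(obs.off_hours_logins, 5) / 5
    score += 0.4 * min(obs.unusual_area_accesses, 10) / 10
    score += 0.3 * (1.0 if obs.language_spike else 0.0)
    return score

obs = Observation(off_hours_logins=4, unusual_area_accesses=2, language_spike=True)
if review_score(obs) > 0.5:
    print("Refer for human review; innocent explanations (deadlines, health issues) are common.")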
He has outlined his ideas in a series of scientific journal articles. Anticipating questions about this form of monitoring, Greitzer was among the first to address ethical and privacy concerns as part of a comprehensive approach that combines cyber and behavioral monitoring.
Greitzer, who spent most of his career doing research for the federal government, continues to perform human factors research in cyber security through his consulting firm, PsyberAnalytix. His most recent consulting work supported government-funded research on insider threat and cybersecurity job performance. He also has consulted informally with financial institutions that expressed interest in his work.
Efforts to head off catastrophic crimes took a quantum leap forward after the attacks of 9/11. Some officials said there had been a failure to connect the dots that could have spotted the terrorists before they crashed airliners into the World Trade Center and the Pentagon, and a fourth into a field in Pennsylvania. For instance, several of the terrorists took flight-simulation lessons but never showed any interest in learning how to land an airliner. After the attacks, it became obvious why.
Greitzer believes the job of spotting terrorists or cyber criminals before they act is far more complex than just “connecting the dots.” He says the task is more like reassembling several jigsaw puzzles that have been randomly mixed together, with some of the pieces shredded. Moreover, the puzzle boxes have been thrown away, so there aren’t any guiding pictures.
“You literally don’t know what you’re looking for,” he says. “You’re reconstructing the puzzle, looking for some kind of pattern that’s indicative of something wrong.” For that reason, a variety of tools are needed to solve the puzzle.
“Behind cyber attacks, terrorism or espionage, there is a human being,” he says. “I try to characterize that behavior so that we have a chance of identifying suspicious activity before something happens.” But, he says, he would never propose using his tools in isolation, as a stand-alone method of spotting potential perpetrators.
Criminals aren’t the only focus of his work. It has long been known that employees who are stressed, preoccupied or sick can make mistakes, potentially disastrous ones if they work in critical roles, such as operating a nuclear power plant. Greitzer argues that these behavioral analytic methods could be used to spot at-risk employees and get them help before they make a costly mistake.
Greitzer emphasizes that his ideas need formal testing before they can be implemented. And any combination of cyber and behavioral monitoring should be accompanied by appropriate privacy safeguards, he says, lest we take a step toward the nightmarish society portrayed in 1984, George Orwell’s novel of a totalitarian surveillance state.
“You don’t want somebody committing an attack,” he says. “But you don’t want to live in Orwell’s world either.”