Technology like "Minority Report" could predict mass shootings, but should we use it?

SAN FRANCISCO -- We have the technology to predict who is likely to commit a shooting and to intervene before a shooting occurs, a security researcher said at the RSA Conference here this week.

The question is whether "pre-crime" systems like the one in Minority Report should be implemented. "What if it saved hundreds of lives a year?" asked Jeffrey J. Blatt of X Ventures. "The result would be a significant loss of privacy, anonymity, and legal rights in every aspect of our lives. Would it be worth it?"

Blatt explained that he is not advocating the creation of such a system and has serious concerns about whether it should be built. He is merely pointing out that it is possible, because all the necessary data and technology are already available. (The British police may already be testing such a system.)

Known active shooters (defined as people who attempt to kill indiscriminately in a confined or populated area) very often exhibit the same kinds of behavior before they act, Blatt says.

Shooters "claim to have close ties to weapons, law enforcement, and the military," Blatt says Regardless of age, race, ethnicity, or gender, "they self-identify as agents of change and may identify with known mass murderers of the past"

They also often warn or threaten their targets before they act, "often on social media." There are also "leaks" in the form of jokes or offhand comments about committing mass murder.

"These behavioral threat indicators appear as data inputs such as social media interactions, school behavior records, friends and family, financial transactions, and web search history," says Blatt

Given all of these warning signs, the public is often alerted when, for example, a high school student talks about killing someone at school and a classmate reports it. However, there is so much data bearing on potential warning signs that it is impossible for police, school officials, or employers to look through it all.

That's where data mining, machine learning, and artificial intelligence can help, says Blatt.

"Instead of a team of humans looking for threat indicators, could a data processing system identify potential active shooters before the crime actually occurs?" he wondered If we knew about all of them, we might be able to predict incidents and outcomes"

Not only is this possible, says Blatt, but it has already been done: the National Security Agency's PRISM program, revealed in 2013 in a PowerPoint presentation leaked by Edward Snowden, collected and analyzed vast amounts of Internet data to look for threat patterns.

A decade earlier, the Pentagon's Total Information Awareness program, developed in the wake of 9/11, attempted to detect and predict terrorist attacks, but Congress cut off its funding after a public outcry. An active-shooter prediction and detection system would be narrower in scope and methods than either of those massive global systems.

"What if we created a predictive policing system based on active shooter behavioral threat indicators?" Blatt said

"I asked a well-known Israeli artificial intelligence expert if this could be built [It] could scan everyone from age 10 to 90 There would be DMV records, criminal records, financial records, medical records, employer records, and other things that would be illegal, but let's consider them for a thought experiment"

A machine-learning system could quickly sift through everyone's information, connect the dots, identify potential active shooters, report its findings to human administrators, and leave any further action to them.
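
As a minimal sketch of the kind of pipeline Blatt describes, such a system might score records and flag the riskiest ones for human review. Everything below is hypothetical: the tiny made-up training set, the model choice (a simple logistic regression), and the flagging threshold are invented for illustration, not taken from his talk.

```python
# Hypothetical sketch of the pipeline described above: score records,
# flag the highest-risk ones, and leave any action to human administrators.
# The data, model, and threshold are all invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: each row is a feature vector built from records
# like ThreatIndicatorRecord above; labels mark known past shooters (1).
X_train = np.array([
    [3, 2, 1, 4, 1],   # many indicators  -> positive example
    [0, 0, 0, 0, 0],   # no indicators    -> negative example
    [2, 1, 1, 3, 1],
    [0, 1, 0, 0, 0],
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

def flag_for_human_review(records: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return indices of records whose predicted risk exceeds the threshold.

    The system only reports candidates; deciding what, if anything, to do
    next is left to human administrators, as in Blatt's description.
    """
    risk = model.predict_proba(records)[:, 1]  # probability of class 1
    return np.where(risk >= threshold)[0]

# Example: score two new records; indices of any that exceed the
# illustrative threshold are printed for human review.
print(flag_for_human_review(np.array([[3, 2, 1, 4, 1], [0, 0, 0, 1, 0]])))
```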

If the system worked well enough to predict and prevent shootings, Blatt says, it could also be applied to preventing robberies, rapes, and assaults.

However, Blatt foresees a number of problems. First, biases on the part of the human administrators need to be countered. How can we be sure their decisions to act against potential culprits are fair?

While it is possible to keep administrators in the dark about how the AI reached its conclusions, that "black box" approach would lead them to suspect that the algorithm itself might be biased, which, says Blatt, "brings us closer to Skynet territory."
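
One standard way to open up such a black box, at least for simple linear models like the sketch above, is to surface per-feature contributions so administrators can see what drove a score. The feature names and coefficients below are invented for illustration:

```python
# Hypothetical sketch of a transparency layer for a linear model: show
# which features drove a record's score. Names and weights are invented
# for illustration, not taken from any real system.
import numpy as np

FEATURE_NAMES = [
    "social_media_threats",
    "leakage_comments",
    "claims_weapons_ties",
    "school_behavior_flags",
    "identifies_with_past_shooters",
]
# Stand-in weights, as if learned by a linear model like the one sketched above.
COEFFICIENTS = np.array([1.2, 0.8, 0.4, 0.9, 1.1])

def explain_score(record: np.ndarray):
    """Pair each feature with its contribution to the record's score,
    largest-magnitude contributions first."""
    contributions = COEFFICIENTS * record
    return sorted(zip(FEATURE_NAMES, contributions),
                  key=lambda pair: -abs(pair[1]))

# Example: show the top drivers behind one flagged record.
for name, value in explain_score(np.array([3, 2, 1, 4, 1])):
    print(f"{name}: {value:+.2f}")
```

Of course, exposing the weights only shows how the model reasoned; it does not by itself prove the weights are unbiased, which is the deeper concern Blatt raises.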

Blatt said that since someone may soon try to build such a system, we should expect it to disrupt our notions of fairness, privacy, due process, and public safety.

[7] "As a society, we need to consider that it may be worse to be intruded upon than to prevent criminal activity," he said
