There have been renewed calls for the United States to address gun violence following a series of six mass shootings in California in less than two weeks that left 30 people dead and 19 injured.
President Joe Biden earlier this month called for a national ban on assault rifles. Republicans, who oppose such a ban and have called for expanded mental health treatment in response to previous mass shootings, have largely kept quiet in the wake of the attacks.
With Congress deadlocked and California's strict state gun laws apparently falling short, people are looking for alternatives. One relatively recent candidate, artificial intelligence-enhanced security, has drawn interest because it holds out the possibility of catching gunmen before a shot is fired.
The AI security sector promotes cameras that identify armed suspects loitering outside a school, high-tech metal detectors that spot hidden guns, and predictive algorithms that analyze data to flag a potential mass shooter.
Officials at the companies behind AI-enhanced security cameras say the systems compensate for fallible security officers, who they say frequently struggle to monitor several video streams at once and to spot developing risks. AI, by contrast, accurately flags assailants as they prepare for an attack, company officials say, buying security personnel valuable seconds or minutes and potentially saving lives.
“This is the best-kept secret,” said Sam Alaimo, co-founder of ZeroEyes, an AI security business. “People want to know more when they see an assault rifle outside of a school. Success is when one life is saved.”
Critics, however, question the devices' effectiveness, saying companies have failed to provide independently verifiable data on accuracy. Even if the AI works, they argue, the technology raises serious concerns about privacy invasion and potential bias.
“If you’re going to give up your privacy and independence for security, the first thing you should ask is whether you’re getting a good deal,” Jay Stanley, a senior policy analyst at the ACLU Speech, Privacy, and Technology Project, told ABC News.
The AI security market
The market is primed for expansion as schools, stores, and businesses investigate implementing AI security. According to research firm Future Market Insights, the market for technologies that detect concealed guns will nearly double from $630 million in 2022 to $1.2 billion by 2031.
Optimism stems in part from the rising prevalence of security cameras, which lets AI companies sell software that enhances systems already in use at many buildings.
According to the National Center for Education Statistics, 83% of public schools used security cameras in the 2017-18 school year. That figure, drawn from the organization's poll, represents a considerable increase from the 1999-2000 school year, when only 19% of schools had them.
“We work with an existing surveillance system,” Kris Greiner, vice president of sales at AI security company Scylla, told ABC News. “We just give it a brain.”
How AI security companies are trying to prevent shootings
Scylla, an Austin, Texas-based firm founded in 2017, provides AI that assists security cameras in spotting not only concealed weapons but also suspicious activities, such as attempts to circumvent security or incite a brawl, according to Greiner.
When the fully automated system detects a weapon or a suspicious actor, it alerts school or company officials, he said, stressing that mass shooters frequently draw their weapons before entering a building. The system can also be configured to promptly deny entry and shut doors, he said.
“It’s entirely feasible that it would have a significant influence at a moment when every second matters,” Greiner said.
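The article does not describe Scylla's internals, but the pipeline Greiner outlines, detect a threat, alert staff, optionally lock doors, can be sketched in a few lines. The sketch below is purely illustrative; every name in it (`Detection`, `handle_detection`, the threshold value) is a hypothetical stand-in, not Scylla's actual code or API.

```python
# Hypothetical sketch of an automated detect-alert-respond loop like the one
# Greiner describes. None of this reflects Scylla's actual implementation.
from dataclasses import dataclass
from enum import Enum, auto


class ThreatType(Enum):
    EXPOSED_WEAPON = auto()
    PERIMETER_BREACH = auto()
    FIGHT = auto()


@dataclass
class Detection:
    camera_id: str
    threat: ThreatType
    confidence: float  # model score in [0, 1]


def notify_officials(det: Detection) -> None:
    print(f"ALERT: {det.threat.name} on camera {det.camera_id} "
          f"(confidence {det.confidence:.2f})")


def lock_entrances(camera_id: str) -> None:
    print(f"Locking doors near camera {camera_id}")


def handle_detection(det: Detection, alert_threshold: float = 0.9) -> None:
    """Alert staff, and optionally lock doors, when the model is confident."""
    if det.confidence < alert_threshold:
        return  # below threshold: log and move on, don't page anyone
    notify_officials(det)  # push an alert to school or company officials
    if det.threat is ThreatType.EXPOSED_WEAPON:
        lock_entrances(det.camera_id)  # "promptly deny entry and shut doors"


# Example: a high-confidence weapon detection triggers both responses.
handle_detection(Detection("front-entrance-2", ThreatType.EXPOSED_WEAPON, 0.97))
```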
The company, which has performed about 300 installations across 33 countries, helps client institutions overcome the common shortcomings of human monitoring, he added.
“Imagine a human sitting in a command center watching a video wall, the human can only watch four to five cameras for four to five minutes before he starts missing things,” Greiner said. “There’s no limit to what an AI can watch.”
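Greiner's scaling claim boils down to running one detector concurrently across many feeds, where a human operator tops out at four or five. A toy sketch of that fan-out pattern follows; the frame capture and the model itself are stubbed out, and nothing here represents Scylla's software.

```python
# Toy fan-out: one detector applied concurrently to many camera feeds,
# the pattern behind "there's no limit to what an AI can watch."
# Frame capture and inference are stubbed; this is illustrative only.
import asyncio
import random


async def analyze_frame(camera_id: str) -> float:
    """Stand-in for model inference on one frame; returns a threat score."""
    await asyncio.sleep(0.05)  # simulated inference latency
    return random.random()


async def watch_camera(camera_id: str, frames: int = 3) -> None:
    for _ in range(frames):
        score = await analyze_frame(camera_id)
        if score > 0.95:
            print(f"ALERT on {camera_id}: score {score:.2f}")


async def main() -> None:
    # A human watches four or five feeds; one event loop here
    # watches 200 simulated cameras at once.
    cameras = [f"cam-{i:03d}" for i in range(200)]
    await asyncio.gather(*(watch_camera(c) for c in cameras))


asyncio.run(main())
```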
Another AI security company, ZeroEyes, offers similar AI-enhanced video monitoring but with a narrower purpose: gun detection.
The company, launched by former Navy SEALs in 2018, entered the business after one of its founders realized that security cameras provided evidence to convict mass shooters after the fact but did little to prevent violence in the first place, Alaimo said.
“In the majority of cases, the shooter has a gun exposed before squeezing the trigger,” Alaimo said. “We wanted to get an image of that gun and alert first responders with it.”
As with Scylla’s product, the ZeroEyes AI tracks live video feeds and sends an alert when it detects a gun. However, the alert at ZeroEyes goes to an internal control room, where company employees determine whether the situation poses a real threat.
“We have a human in the loop to make sure the client never gets a false positive,” Alaimo said, adding that the full process takes as little as three seconds from alert to verification to communication with a client.
Accuracy in AI security
AI-enhanced security sounds like a potentially life-saving breakthrough in theory, but the accuracy of the products remains uncertain, said Stanley, of the ACLU. “If it isn’t effective, there’s no need to get into a conversation about privacy and security,” he said. “The conversation should be over.”