Archives

HiddenLayer Creates a Threat Intelligence Team Focused on Thwarting ML Attacks

HiddenLayer, developer of a security platform that safeguards the machine learning models behind enterprises' most important products, today announced the formation of its Synaptic Adversarial Intelligence (SAI) team, created to raise awareness of the threats facing machine learning (ML) and artificial intelligence (AI) systems.

The SAI team’s primary mission is to educate data scientists, MLDevOps teams, and cyber security professionals on how to evaluate the vulnerabilities and risks associated with ML/AI so they can implement and deploy these systems more securely. The insights gathered by the SAI team feed into risk assessments and intelligence reports that expose the adversarial ML threat landscape. Collectively, the team’s multidisciplinary cyber security experts and data scientists bring decades of experience spanning malware detection, threat intelligence, reverse engineering, incident response, digital forensics, and adversarial machine learning.

Until recently, most adversarial ML/AI research focused on the mathematical side: making algorithms more robust to malicious input. Now security researchers are increasingly examining how models are developed, maintained, packaged, and deployed, hunting for weaknesses and vulnerabilities across the broader software ecosystem. They have uncovered a number of new attack techniques and, in turn, developed a greater understanding of how practical attacks are performed against real-world ML implementations.
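One well-known example of such a deployment-side weakness (illustrative only, not a HiddenLayer finding) is that several common ML model serialization formats are built on Python's pickle protocol, which can execute arbitrary code at load time. A minimal sketch, using a harmless `eval` as a stand-in for a real payload:

```python
import pickle

# Python's pickle protocol lets an object dictate how it is
# reconstructed via __reduce__. An attacker can abuse this so that
# simply *loading* a "model file" runs attacker-chosen code.
class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, this calls eval("6 * 7"). A real attack would
        # substitute something like os.system or subprocess instead.
        return (eval, ("6 * 7",))

# The attacker distributes this blob as a "pretrained model".
tainted_model_bytes = pickle.dumps(MaliciousPayload())

# The victim loads the file -- and the embedded code executes
# before any model weights are ever used.
result = pickle.loads(tainted_model_bytes)
print(result)  # the payload ran during deserialization
```

This is why security guidance generally recommends loading models only from trusted sources, or preferring serialization formats that store weights as plain data rather than executable object graphs.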


“Alongside our commitment to increasing awareness of ML security, we will also actively assist in the development of countermeasures to thwart ML adversaries through the monitoring of deployed models, as well as providing mechanisms to allow defenders to respond to attacks,” said Tom Bonner, Senior Director of Adversarial Machine Learning Research at HiddenLayer. “There has been a tremendous effort from several organizations, such as MITRE and NIST, to better understand and quantify the risks associated with ML/AI. We look forward to working alongside these industry leaders to broaden the pool of knowledge, define threat models, drive policy and regulation, and most critically, prevent attacks.”