Using AI to Improve Security and Compliance
Industry leaders argue that the United States is currently the world’s leader in AI innovation, and that strict regulations would severely hinder that position. One thought that follows is that we would benefit from standardizing what components comprise an AI system, as a key ingredient for AI safety, security, and trustworthiness across the supply chain. This would be similar to the SBOM (Software Bill of Materials), a concept for application software defined in nascent form by the 2021 US cybersecurity Executive Order. Many benefits come to mind, including using AI bill of materials data in conjunction with measures such as testing results and tracked AI vulnerabilities to understand, objectively and over time, how we are doing.
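As a rough illustration, a single record in such an AI bill of materials might capture a model’s datasets, upstream components, and evaluation history. The sketch below is hypothetical; the field names are illustrative and not drawn from any published standard.

```python
# A minimal, hypothetical "AI bill of materials" record, by analogy with an SBOM.
# All names, versions, and hashes below are placeholders.
ai_bom_entry = {
    "model_name": "example-classifier",
    "model_version": "2.3.1",
    "training_datasets": [
        {"name": "public-images-v5", "sha256": "abc123...", "license": "CC-BY-4.0"},
    ],
    "base_models": ["example-foundation-v1"],        # upstream model components
    "frameworks": [{"name": "pytorch", "version": "2.2.0"}],
    "evaluations": [
        {"test": "robustness-suite", "date": "2024-05-01", "result": "pass"},
    ],
    "known_vulnerabilities": [],                      # tracked over time
}
```

Pairing records like this with testing results and reported vulnerabilities is what would let the "how are we doing" question be answered with data rather than anecdote.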
This “dual use” nature is not unique to AI attacks but is shared with many other cyber “attacks.” For example, the identical encryption method can be used by dissidents living under an oppressive regime to protect their communications just as easily as it can by terrorists planning an attack. Different industries will likely fall into one of these scenarios, if not a hybrid of both. Autonomous vehicle companies are largely operating under the first, “every firm on its own” scenario. At the same time, Artificial Intelligence as a Service, a key component of the second, “shared monoculture” scenario, is also becoming more common.
Understanding whether a digital innovation technology has served the purpose it was implemented for
It’s already been established that the government collects an incredible amount of data. With a secure cloud fabric, agencies can build a framework to deploy and augment their future AI infrastructure. In addition to creating a secure, private multi-cloud connectivity environment, agencies also benefit from the ability to connect easily and securely to data lakes. Data lakes store raw data that can be used for various purposes, mostly focused on analytics, machine learning, and data visualization.
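As a simplified illustration of that connectivity, an agency pipeline writing into an S3-style data lake over a private endpoint can enforce server-side encryption on every object. The endpoint, bucket, and key below are placeholders, not real resources, and the specifics of any given secure cloud fabric product will differ.

```python
import boto3

# A minimal sketch, assuming an S3-compatible data lake reached through a
# private endpoint. All identifiers here are illustrative placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://vpce-example.s3.us-gov-west-1.amazonaws.com",
)

# Enforce server-side encryption on every object written to the lake.
s3.put_object(
    Bucket="agency-data-lake",
    Key="raw/sensor/2024-01-01.json",
    Body=b'{"reading": 42}',
    ServerSideEncryption="aws:kms",
)
```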
Second, attempts to “bake in” misuse-prevention features at the model level, such that the model reliably refuses to obey harmful instructions, have proved circumventable through methods such as “jailbreaking.” For example, one class of jailbreaks uses role-playing to have the model ignore its safety instructions. In April 2023, a user found that ChatGPT would provide instructions for producing napalm when asked to pretend to be the user’s recently deceased grandmother recounting the instructions as a bedtime story. Finally, distinguishing harmful from beneficial use may depend heavily on context that is not visible to the developing company. Policymakers and relevant regulatory agencies should educate stakeholders about the threat landscape surrounding AI.
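One common mitigation is to layer an independent output-side check on top of the model’s built-in refusals, so that a jailbroken reply can still be caught before it reaches the user. The sketch below is minimal and assumes hypothetical `llm` and `moderation_model` callables and an arbitrary threshold; it is not any vendor’s actual safety stack.

```python
# Minimal sketch of defense in depth against jailbreaks: screen the model's
# output with a separate check rather than trusting built-in refusals alone.
HARM_THRESHOLD = 0.5  # assumption: tuned on labeled examples

def guarded_generate(prompt, llm, moderation_model):
    """Generate a reply, then independently screen it before returning it."""
    reply = llm(prompt)
    harm_score = moderation_model(reply)  # 0.0 (benign) .. 1.0 (harmful), hypothetical
    if harm_score >= HARM_THRESHOLD:
        return "I can't help with that."  # block even if the model was tricked
    return reply
```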
AI and Cybersecurity in Federal, State, and Local Governments
As AI continues to transform our world, it is imperative that we act swiftly and wisely to navigate these uncharted waters. Artificial Intelligence brings a host of challenges to how we will live our lives on the internet and protect our data, particularly in terms of regulation. As we stand on the precipice of a new era of AI, the role of governments in overseeing and regulating this powerful technology is more critical than ever. On October 30, 2023, President Biden issued an executive order (EO) to set new standards for the safety and security of Artificial Intelligence (AI). The move sets out the government’s intentions to regulate and further advance the growth of AI technology in the years ahead.
In already deployed systems that require both verified fairness and security, such as AI-based bond determination, it will be difficult to balance both simultaneously. New methods will be needed to allow systems to be audited without compromising security, such as restricting audits to a trusted third party rather than publishing them openly. Response plans should spell out how to respond to attacks and limit the damage they cause. Continuing the social network example, sites relying on content filtering may need response plans that fall back on other methods, such as human-based content auditing, to filter content.
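A response plan of this kind can be as simple as routing low-confidence decisions to human moderators instead of acting on them automatically. The sketch below assumes a hypothetical classifier that returns a label and a confidence score, and a review queue standing in for a real moderation workflow.

```python
# Minimal sketch of a content-filter response plan: auto-act only on
# high-confidence decisions and escalate the rest to human auditing.
REVIEW_THRESHOLD = 0.85  # assumption: tuned on validation data

def route_post(post_text, classifier, human_review_queue):
    """Return the action taken for a post, escalating uncertain cases."""
    label, confidence = classifier(post_text)   # e.g. ("harmful", 0.62), hypothetical
    if confidence < REVIEW_THRESHOLD:
        human_review_queue.append(post_text)    # fallback: human-based content auditing
        return "pending_review"
    return "removed" if label == "harmful" else "published"
```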
The creation of offensive attacks against state-of-the-art systems that are already deployed would risk the diffusion of those attacks into enemy hands. In this case, the fear of an attack that could be turned against the host country and find its way into the public sphere may outweigh whatever benefits the attack would provide, creating an incentive against offensive weaponization. However, these risks apply only if the host country or its allies are using a similar system vulnerable to the same attack. Further, attacks on content filters will be difficult to stop or even detect, because they can be crafted to pass wholly unnoticed. Because content filtering is applied to digital assets, it is particularly well suited to “imperceivable” input attacks.
- Partnering and sharing best practices better addresses these concerns in sustainable ways.
- (iv) encouraging, including through rulemaking, efforts to combat unwanted robocalls and robotexts that are facilitated or exacerbated by AI and to deploy AI technologies that better serve consumers by blocking unwanted robocalls and robotexts.
- Steps taken by governments to address data privacy and security concerns are crucial in an AI-driven world.
- That is why many industry leaders are urging Congress to adopt a lighter touch when it comes to AI regulations in the United States.
One could imagine AI attacks on facial recognition systems as the 21st-century version of the time-honored strategy of cutting or dyeing one’s hair to avoid recognition. For example, attack patterns can be added in imperceivable ways to a physical object itself. Researchers have shown that a 3D-printed turtle bearing an imperceivable input attack pattern could fool AI-based object detectors. While turtle detection may not have life-and-death consequences (yet…), the same strategy applied to a 3D-printed gun may. In the audio domain, high-pitched sounds that are imperceivable to human ears but can be picked up by microphones can be used to attack audio-based AI systems, such as digital assistants. Regulators should require compliance both for government use of AI systems and as a precondition for selling AI systems to the government.
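For readers unfamiliar with how such imperceivable perturbations are produced, the fast gradient sign method (FGSM) is one simple, widely studied example; the PyTorch sketch below assumes a differentiable image classifier and is illustrative only, not a description of the specific attacks cited above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Craft a small adversarial perturbation with the fast gradient sign method.

    `model` is any differentiable classifier, `image` a normalized input tensor,
    and `label` the true class index; all are assumed, not from the article.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon
    # so the change stays visually imperceptible.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```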
Conducting safety tests in the AI era
Attackers can hack the systems holding these models and then either alter the model file or replace it entirely with a poisoned one. In this respect, even if a model has been correctly trained on a dataset that was thoroughly verified and found not to be poisoned, the model can still be swapped for a poisoned one at various points in the distribution pipeline. Discovering poisoned data in order to stop poisoning attacks can be very difficult because of the scale of the datasets, whose samples often come from public sources rather than private collection efforts.
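One basic defense against model-file substitution is to verify a cryptographic hash (or signature) of the artifact before loading it. The sketch below assumes the expected SHA-256 digest is published by the model’s trusted source and obtained out of band; the file name and digest are placeholders.

```python
import hashlib
from pathlib import Path

# Assumption: the trusted publisher distributes this digest out of band.
EXPECTED_SHA256 = "0123...abcd"  # placeholder, not a real hash

def verify_model_file(path: str, expected_sha256: str) -> bool:
    """Return True only if the on-disk model matches the published digest,
    reducing the risk that a poisoned file was swapped in downstream."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256

if not verify_model_file("model.bin", EXPECTED_SHA256):
    raise RuntimeError("Model file failed integrity check; refusing to load.")
```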
Using a secure cloud fabric to connect to a data lake offers agencies the advantage of maintaining control over their data while leveraging cloud-based storage’s scalability and cost-effectiveness. This is especially important for government agencies, which often have strict security and privacy requirements to meet when handling sensitive information. Organizations can harness the power of AI to help keep data secure and bring systems into compliance with government and industry standards. Any industry that involves labor-intensive documentation, such as healthcare, insurance, finance, and legal services, is a suitable candidate for artificial intelligence.
What is AI security?
AI security is evolving to safeguard the AI lifecycle, insights, and data. Organizations can protect their AI systems from a variety of risks and vulnerabilities by compartmentalizing AI processes, adopting a zero-trust architecture, and using AI technologies for security advancements.
How is AI being used in national security?
AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles. Already, AI has been incorporated into military operations in Iraq and Syria.
How is AI used in the Defence industry?
An AI-enabled defensive approach allows cyber teams to stay ahead of the threat as machine learning (ML) technology improves the speed and efficacy of both threat detection and response, providing greater protection.
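As a concrete, if simplified, example of ML-assisted threat detection, an anomaly detector can be trained on normal network telemetry and used to flag outliers for analyst review. The features and numbers below are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for telemetry: per-connection features such as
# bytes sent, bytes received, and session duration (not real data).
normal_traffic = np.random.normal(loc=[500, 800, 30], scale=[50, 80, 5], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score a new connection; -1 flags an outlier that may warrant analyst review.
suspicious = np.array([[50_000, 120, 2]])  # unusually large upload, very short session
print(detector.predict(suspicious))        # expected: [-1] (anomalous)
```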
Is AI a security risk?
AI tools pose data breach and privacy risks.
AI tools gather, store and process significant amounts of data. Without proper cybersecurity measures like antivirus software and secure file-sharing, vulnerable systems could be exposed to malicious actors who may be able to access sensitive data and cause serious damage.