AI Regulation: Are Governments Up to the Task?

Secure and Compliant AI for Governments

In some deployments, relying on a remote connection to verify or serve the model is not viable. In the case of weapon systems, this may be impossible because the enemy has jammed the communication channels. In the case of consumer applications such as autonomous cars, this may be impractical because the device will not receive a response fast enough to meet application requirements. Even if the data is properly secured and an uncompromised model is trained, the model itself must then be protected. A trained model is just a digital file, no different from an image or document on a computer. If an uncompromised model is corrupted or replaced with a corrupted one, all other protection efforts are moot.
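
One baseline control, sketched below in Python, is to record a cryptographic digest of the approved model at training time and refuse to load any file that no longer matches it. The file name and digest here are placeholders, not a prescribed scheme:

```python
# Minimal sketch: verify a trained model file against a known-good digest
# before loading it. File name and expected hash are illustrative.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0a1b2c..."  # hypothetical digest recorded at training time

def file_sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

model_path = Path("model.bin")  # hypothetical model artifact
if file_sha256(model_path) != EXPECTED_SHA256:
    raise RuntimeError("Model file does not match recorded digest; refusing to load.")
```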

What would a government run by an AI be called?

Some sources equate cyberocracy, which is a hypothetical form of government that rules by the effective use of information, with algorithmic governance, although algorithms are not the only means of processing information.

For instance, by using ChatGPT you risk exposing proprietary information to people outside your organization who shouldn’t have access to it, and pulling in information that may not be factual. To mitigate this, you can instead opt for tools such as Bing Enterprise Chat, which keep your organization’s data within the organization, prevent outside resources from pulling it, and ensure your users only access the organizational data they are authorized to see. It is equally critical to put in place tools that measure bias within your models, to ensure transparency and support certification. This lets you see the biases a model may produce and determine what mitigation standards must be implemented to eliminate or reduce them. To tackle the challenges of the ethical use of AI, it is then critical for organizations to include all affected communities, especially minorities and other underrepresented groups, not only in the research phase but also in the governance of their programs.
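
As a rough illustration of what such a bias-measurement tool computes, the sketch below reports the demographic parity gap, one common fairness metric, over hypothetical predictions and a hypothetical protected attribute:

```python
# Minimal sketch: demographic parity difference, one common bias metric,
# computed from model predictions. All data here is illustrative.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # hypothetical model outputs
group = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])  # hypothetical protected attribute

rate_a = preds[group == "a"].mean()
rate_b = preds[group == "b"].mean()
print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, "
      f"demographic parity gap={abs(rate_a - rate_b):.2f}")
```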

Build Secure Software

Research should prioritize the creation of defense mechanisms for current state-of-the-art AI methods, as well as the development of new, more robust AI methods. Given the success of deep learning and its already established footprint, these vulnerable methods will remain the primary methods in use for a substantial amount of time. As such, even if complete mitigation is provably impossible, techniques to “harden” the methods, such as making attacks more difficult to execute by modifying the structure of the models themselves, will be of significant interest to AI users. Similar hardening techniques have found great success in cybersecurity; Address Space Layout Randomization (ASLR), for example, has imposed significant technical hurdles on once common and easy cyberattacks. These concerns apply both when organizations train their own models and in a second scenario, in which individual companies utilize shared AI systems provided by a third party.
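
One widely studied hardening technique of this kind is adversarial training: perturbing inputs toward the worst case during training so the model’s decision boundary gains a margin. A minimal NumPy sketch on synthetic data, offered as an illustration rather than a prescribed defense:

```python
# Minimal sketch: adversarial training of a logistic regression with
# FGSM-style perturbations. Data and hyperparameters are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w, b, lr, eps = np.zeros(5), 0.0, 0.1, 0.2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    # Craft worst-case inputs: step each example along the loss gradient sign.
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)        # d(loss)/d(x) for logistic loss
    X_adv = X + eps * np.sign(grad_x)  # FGSM perturbation
    # Train on the perturbed batch so the boundary is robust within eps.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)
```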

  • Artificial intelligence covers a wide array of functions from classification to pattern recognition to making predictions.
  • The report shall include a discussion of issues that may hinder the effective use of AI in research and practices needed to ensure that AI is used responsibly for research.
  • While AI attacks can certainly be crafted without accompanying cyberattacks, strong traditional cyber defenses will increase the difficulty of crafting certain attacks.
  • If good alternatives exist that are capable of performing a similar function at a similar cost, AI should not necessarily be adopted over an alternative in the name of innovation or progress.
  • From there, you can put that step into a pipeline, run it at scale across a large set of documents, and apply it to your line-of-business applications (a minimal sketch follows this list).
  • We now turn our attention to which systems and segments of society are most likely to be impacted by AI attacks.
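
As a rough illustration of the pipeline idea above, the sketch below fans a single-document step out over a folder of files; `classify` and the `documents` folder are hypothetical placeholders for your validated step and corpus:

```python
# Minimal sketch: wrapping a per-document step in a pipeline that runs
# at scale over a folder. `classify` is a stand-in for whatever
# extraction or classification step you have already validated.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def classify(text: str) -> str:
    return "invoice" if "invoice" in text.lower() else "other"  # placeholder logic

def process(path: Path) -> tuple[str, str]:
    return path.name, classify(path.read_text(errors="ignore"))

docs = list(Path("documents").glob("*.txt"))  # hypothetical input folder
with ThreadPoolExecutor(max_workers=8) as pool:
    for name, label in pool.map(process, docs):
        print(name, label)
```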

Washington is still reeling, and concerns about these dangers have given rise to a flurry of new regulatory proposals. Moreover, the recent board turmoil at OpenAI highlighted the shortcomings of self-regulation and, more broadly, the challenges of private-sector efforts to govern the most powerful AI systems. In the coming year, a new generation of generally capable models similar to GPT-4, trained using record amounts of computational power, will likely hit the market.

AI in government: Risks and challenges

The executive order’s text illustrates the scope of these measures:

  • Such actions may include a requirement that United States IaaS Providers require foreign resellers of United States IaaS Products to provide United States IaaS Providers verifications relative to those subsections; such standards and procedures may include a finding by the Secretary that a foreign reseller, account, or lessee complies with security best practices to otherwise deter abuse of United States IaaS Products.
  • Companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster must report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.
  • The term “generative AI” means the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content.
  • The term “crime forecasting” means the use of analytical techniques to attempt to predict future crimes or crime-related information.

For government organizations, understanding the role of AI in government is crucial for staying current with technological advancements and their potential impact on efficiency and productivity. Although the EO places potential restrictions on developers and companies alike, it encourages investment in the space. There is immense potential to democratize AI advancements, giving people and private companies more autonomy rather than leaving them reliant on major tech companies. Moreover, with proper regulations, the government can drive more AI innovation that prioritizes societal benefits. With advanced technologies, government agencies can cut labor costs, speed up processes, and deliver smoother, faster services to the public.

Led by Nic Chaillan, the Ask Sage team leverages extensive industry experience to address the unique challenges and requirements of its clients. Currently, over 2,000 government teams and numerous commercial contractors benefit from Ask Sage’s expertise and security features, including data labeling capabilities and zero-trust cybersecurity. By providing accurate answers and performing various tasks in a natural language format, Ask Sage helps teams make informed decisions, improve efficiency, and reduce costs. Generative AI can help agencies reimagine and transform government services in critical areas, including health and human services, education, sustainability, and more. But the technology also poses new challenges and risks for government agencies and the public at large.

Governments at all levels are using AI and automated decision-making systems to expand or replace law enforcement functions, assist in public benefit decisions, and intake public complaints and comments.

Governments actively seek input from industry experts, civil society organizations, academia, and citizens themselves when formulating policies related to AI-driven governance systems. One major step is the enactment of strict laws and regulations governing the collection, storage, and use of individuals’ personal data. Governments have introduced comprehensive frameworks that outline organizations’ responsibilities in handling sensitive information. These regulations often include requirements for obtaining consent from individuals before collecting their data, as well as guidelines on how long such information can be retained. By working together, governments can agree on common standards for data privacy and security. International cooperation further opens up the opportunity for the sharing of knowledge and technical expertise on emerging threats and vulnerabilities in AI systems.
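
To make the consent and retention requirements concrete, here is a hedged sketch of how a system might enforce them; the records and the 365-day window are illustrative assumptions, not any specific statute:

```python
# Minimal sketch: enforcing two common regulatory requirements, consent
# before use and a maximum retention period. Records and the retention
# window are illustrative, not drawn from any particular law.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # hypothetical policy window

records = [
    {"id": 1, "consent": True,  "collected": datetime(2023, 1, 10, tzinfo=timezone.utc)},
    {"id": 2, "consent": False, "collected": datetime(2025, 3, 2,  tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
usable  = [r["id"] for r in records
           if r["consent"] and now - r["collected"] <= RETENTION]
expired = [r["id"] for r in records if now - r["collected"] > RETENTION]
print("usable:", usable, "to purge:", expired)
```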

Today’s most widely used and advanced systems, by contrast, like Google’s recently announced Gemini, can see, hear, read, write, speak, code, and produce images. Although there is significant uncertainty, the next generation of foundation models, in particular those trained using substantially greater computational resources than any model trained to date, may have these kinds of dangerous capabilities. We believe the likelihood of this is high enough to warrant their targeted regulation. From a political standpoint, the difficulty in gaining acceptance of this policy is that stakeholders will view it as an impediment to their development and argue that they should not be regulated, either because (1) it would place an undue burden on them or (2) they do not fall into a “high-risk” use group. Regulators must balance security concerns against the compliance burdens placed on stakeholders. From an implementation standpoint, the difficulty will be managing the large number and disparate nature of entities, from the smallest startups to the largest corporations, that will be implementing AI systems.

What are the trustworthy AI regulations?

The new AI regulation emphasizes a relevant aspect for building trustworthy AI models with reliable outcomes: Data and Data Governance. This provision defines the elements and characteristics to be considered for achieving high-quality data when creating your training and testing sets.
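
As a rough sketch of the kinds of checks such a data-governance provision implies, the snippet below screens illustrative training and test arrays for missing values, label imbalance, and train/test leakage; real pipelines would go much further:

```python
# Minimal sketch: basic data-quality checks before training. Arrays are
# illustrative; a NaN and a leaked row are planted on purpose.
import numpy as np

X_train = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0]])
y_train = np.array([0, 1, 1])
X_test  = np.array([[1.0, 2.0], [7.0, 8.0]])  # first row also appears in train

print("missing values in train:", int(np.isnan(X_train).sum()))

labels, counts = np.unique(y_train, return_counts=True)
print("label balance:", dict(zip(labels.tolist(), counts.tolist())))

train_rows = {tuple(r) for r in X_train.tolist()}
leaks = [r for r in X_test.tolist() if tuple(r) in train_rows]
print("rows shared between train and test:", len(leaks))
```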

How can AI be secure?

Sophisticated AI cybersecurity tools can compute and analyze large sets of data, allowing them to learn activity patterns that indicate potential malicious behavior. In this sense, AI emulates the threat-detection aptitude of its human counterparts.
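
A minimal sketch of that pattern-based detection, using scikit-learn’s IsolationForest on synthetic activity features (requests per minute and gigabytes out are assumed, illustrative dimensions):

```python
# Minimal sketch: learn "normal" activity patterns and flag outliers.
# All event data is synthetic, with one injected anomaly.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 1.0], scale=[10, 0.2], size=(500, 2))
events = np.vstack([normal, [[900, 9.0]]])  # injected anomalous event

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(events)  # -1 marks suspected malicious activity
print("flagged indices:", np.where(flags == -1)[0])
```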

What is security AI?

AI security is evolving to safeguard the AI lifecycle, insights, and data. Organizations can protect their AI systems from a variety of risks and vulnerabilities by compartmentalizing AI processes, adopting a zero-trust architecture, and using AI technologies for security advancements.
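
As one concrete reading of the zero-trust principle, the standard-library sketch below verifies every request to a model service before answering it; the shared key and request format are illustrative assumptions:

```python
# Minimal sketch of one zero-trust principle: verify every request,
# trusting nothing by default. Key and payload format are illustrative.
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # in practice, per-client keys from a secrets manager

def sign(payload: bytes) -> str:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def serve(payload: bytes, signature: str) -> str:
    if not hmac.compare_digest(sign(payload), signature):
        return "rejected: unverified request"
    return "prediction: ..."  # placeholder for the model call

req = b'{"features": [1, 2, 3]}'
print(serve(req, sign(req)))        # verified -> served
print(serve(req, "bad-signature"))  # unverified -> rejected
```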

How AI can be used in government?

The federal government is leveraging AI to better serve the public across a wide array of use cases, including in healthcare, transportation, the environment, and benefits delivery. The federal government is also establishing strong guardrails to ensure its use of AI keeps people safe and doesn't violate their rights.
