InterVision Response to the AI Innovation Executive Order

Overall, the AI executive order is a positive step forward, but it is important to understand and be aware of its limitations. The executive order has a limited scope and does not cover all of the potential risks and challenges associated with AI, and it is likely to face legal challenges. However, a clear agenda has been set for the US federal government’s role in the development and use of AI.

 

Observed limitations of the Executive Order

  • It is limited in scope. The executive order only applies to federal agencies and their contractors, and it does not regulate the private sector. This means that the order will not have a direct impact on the development and use of AI by companies like Google, Microsoft, and Amazon.
  • It is reliant on voluntary cooperation. The executive order does not give the federal government any new enforcement powers. Instead, it relies on federal agencies and their contractors to voluntarily comply with its provisions. This means that the effectiveness of the executive order will depend on the goodwill of the companies and organizations that it is intended to regulate.
  • It is subject to legal challenge. The executive order is likely to be challenged in court by groups that oppose government regulation of AI. If the order is successfully challenged, it could be overturned or significantly weakened.
  • It does not address all of the potential risks and challenges associated with AI. The executive order does not address some of the most pressing concerns about AI, such as the potential for AI to be used to develop autonomous weapons systems or to create mass surveillance networks.

 

Impact of the Executive Order on InterVision’s AI Service Portfolio and Clients

InterVision’s AI strategy mainly revolves around using models from major cloud providers such as AWS, Azure, and GCP. Consequently, any compliance measures these providers adopt, particularly in alignment with mandates from the US Federal Government, will benefit us.

We expect agencies like NIST to develop standards for “red team” testing, which we plan to apply across our government accounts and as a general best practice. Although adherence to NIST standards is not mandatory, incorporating these tests and safeguards into our AI solution range, especially in our models, ensures fundamental safety and compliance for our clients, partners, and their users.

Part of the mandate also involves tagging AI-generated content. This requirement could challenge our goal of delivering straightforward, engaging, and exceptional user experiences. Nevertheless, our innovative approach and methodology will enable us to incorporate these mandates seamlessly and user-centrically into our offerings.

We’re aware of the prevalence of information suppression and biases in AI, evident in scenarios like election reporting, selective content visibility, and biased response encouragement stemming from data distortion. AI has been used by small, organized groups to skew public opinion significantly. Human nature, with its capacity for both good and bad, often leads to ethically dubious decisions in the pursuit of better outcomes, particularly in business. Furthermore, employees, dependent on their jobs, might tacitly support their leaders’ choices, regardless of the morality.

Given that data—and by extension, AI responses—can be easily influenced, InterVision advocates for some level of oversight and compliance to avoid stifling innovation. We that businesses that lead in adopting and potentially shaping these compliance standards are likely to gain a competitive edge.

Heading to AWS re:Invent Dec 2-6? We will be at Booth 1764!

X