Senator Cynthia Lummis (R-WY) has unveiled the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a significant legislative effort aimed at establishing clear liability frameworks for the use of artificial intelligence (AI) in professional fields. This bill is poised to enhance transparency among AI developers while ensuring that professionals—including physicians, attorneys, engineers, and financial advisors—remain accountable for the guidance they offer, even when it involves insights derived from AI systems.
According to Lummis, the RISE Act aims to create a balance between innovation and responsibility, providing predictable standards that support safer AI development without undermining professional autonomy. The proposed legislation introduces the concept of “model cards,” which are technical documents detailing key aspects of an AI system, such as its training data, intended applications, and performance metrics. These documents are crucial in helping professionals evaluate the suitability of AI tools for their practice.
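To make the "model card" concept concrete, the sketch below shows what such a disclosure might look like as a machine-readable record, paired with a simple check against the bill's 30-day documentation-update window. This is purely illustrative: the RISE Act does not prescribe a file format, and every field name and value here is an assumption, not language from the bill.

```python
# Hypothetical model card sketch -- illustrative only. The RISE Act does not
# define a specific schema; all field names and values below are assumptions.
model_card = {
    "model_name": "example-clinical-assistant",  # hypothetical model
    "version": "1.2.0",
    "training_data": "de-identified clinical notes, 2015-2023 (assumed example)",
    "intended_use": ["drafting summaries for physician review"],
    "out_of_scope_use": ["autonomous diagnosis without professional oversight"],
    "performance_metrics": {"accuracy": 0.91, "false_negative_rate": 0.04},
    "known_limitations": ["reduced accuracy on pediatric cases"],
}

def documentation_overdue(days_since_change: int) -> bool:
    """Return True if documentation is past the 30-day window the bill sets
    for updates after a new version release or a discovered failure."""
    return days_since_change > 30

# A release or known failure 45 days ago with no doc update would be overdue:
print(documentation_overdue(45))   # True
print(documentation_overdue(10))   # False
```

A professional evaluating the tool could scan `intended_use`, `known_limitations`, and `performance_metrics` to judge fit for their practice, which is exactly the evaluation role the bill assigns to model cards.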
“Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy,” Lummis stated.
The bill specifically delineates the boundaries of immunity for AI developers, ensuring that they cannot evade liability in cases of recklessness or fraud. Furthermore, it mandates continuous accountability by requiring updates to AI documentation within 30 days of releasing new versions or discovering significant issues, thereby reinforcing the commitment to transparency.
While the RISE Act takes steps to improve transparency, it stops short of mandating that AI models be fully open source. Developers are allowed to withhold certain proprietary information, provided the information does not bear on safety and the omission is adequately justified. This aspect of the legislation has sparked debate over the implications of closed-source AI, with experts warning of the risks posed by centralized models that lack sufficient oversight.
“OpenAI is not open, and it is controlled by very few people, so it’s quite dangerous,” noted Simon Kim, CEO of Hashed, reflecting on the complexities surrounding AI development.
Key Points on the RISE Act of 2025
Senator Cynthia Lummis introduces a critical legislative proposal concerning AI accountability and transparency.
- Legislation Introduction
  - Senator Cynthia Lummis has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025.
  - The Act aims to clarify liability frameworks for AI used by professionals.
- Professional Responsibility
  - Professionals, including physicians and attorneys, remain legally accountable for advice influenced by AI.
  - AI developers can only shield themselves from liability if they release comprehensive model cards.
- Model Cards Definition
  - Model cards contain details about AI systems, such as training data, use cases, metrics, and limitations.
  - These documents are designed to aid professionals in evaluating AI tools for their work.
- Immunity Boundaries
  - The Act does not provide blanket immunity for AI developers.
  - It excludes protection in cases of recklessness or fraud.
- Ongoing Accountability
  - Developers must update AI documentation within 30 days of new versions or known failures.
  - This reinforces the necessity for continuous transparency in AI development.
- Proprietary Information vs. Safety
  - Developers may withhold proprietary details unless they relate to safety concerns.
  - Justifications are required for any omitted information to maintain transparency.
- Concerns About Closed-Source AI
  - Industry leaders highlight risks associated with centralized, closed-source AI systems.
  - Warnings about "black box" AI models emphasize the dangers of lacking transparency.
Analyzing the RISE Act and Its Competitive Landscape in AI Legislation
Senator Cynthia Lummis’s introduction of the Responsible Innovation and Safe Expertise (RISE) Act of 2025 highlights a significant shift in AI regulation. While the act aims to provide a balanced framework of liability for AI developers and professionals, it presents both advantages and disadvantages when compared to other legislative efforts in the technology sector.
Competitive Advantages: The RISE Act sets clear boundaries of liability, ensuring that professionals maintain accountability in the advice they deliver, which could instill greater public trust in AI-assisted services. Unlike other frameworks that promote complete immunity for AI developers, this act fosters a culture of responsibility by requiring ongoing updates and transparency, especially concerning the technical performance and limitations of AI models. Such a stance could attract practitioners from various sectors—including healthcare, law, and finance—who might otherwise hesitate to integrate AI into their practices due to potential legal ambiguities.
Competitive Disadvantages: Conversely, the act's decision not to mandate open-source models may disappoint those advocating for greater transparency and collaboration in AI development. Stopping short of full disclosure could slow community-driven advancement, as developers may prioritize proprietary interests. Furthermore, companies that rely on secretive algorithms for competitive advantage might face backlash from advocates of ethical AI practices, potentially damaging their reputations and customer trust.
Beneficiaries and Challenges: The RISE Act may benefit professionals who seek to leverage AI while safeguarding themselves legally. However, it could also create challenges for smaller AI firms unable to navigate the stringent documentation and accountability requirements. As regulatory standards tighten, emerging developers might face barriers to entry, intensifying an already competitive landscape where larger firms dominate. Thus, the RISE Act not only establishes a foundation for responsible AI use but also underscores the friction between innovation and compliance in a rapidly evolving technological environment.