Senator Dave McCormick (R-Pa.) outlined AI policy proposals in a CBS News interview on April 12, 2026. He called for software standards and regulatory technology (regtech) to balance innovation with safety in the finance and technology sectors.
CBS News released the full transcript on April 13, 2026. McCormick, former CEO of Bridgewater Associates, discussed AI's economic effects on markets and jobs.
McCormick's Core AI Policy Proposals
"We need mandatory safety audits for AI systems exceeding 1 teraflop capacity," McCormick told CBS News. Companies must submit annual reports to the Federal Trade Commission (FTC).
The proposals target large language models and autonomous agents. McCormick praised the National Institute of Standards and Technology (NIST) AI Risk Management Framework. "Developers should embed NIST standards directly into codebases," he said.
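What "embedding NIST standards directly into codebases" might look like can be sketched as a pre-deployment gate that refuses to ship a model unless each of the AI Risk Management Framework's four core functions (Govern, Map, Measure, Manage) has documented evidence. The model-card schema and the `release_gate` helper below are hypothetical illustrations, not NIST artifacts.

```python
# The four core functions of the NIST AI Risk Management Framework.
NIST_AI_RMF_FUNCTIONS = {"govern", "map", "measure", "manage"}

def release_gate(model_card):
    """Block deployment unless every RMF function has documented evidence.

    `model_card` maps an RMF function name to a list of evidence items
    (audit reports, test results, sign-offs). Raises if any are missing.
    """
    missing = [f for f in NIST_AI_RMF_FUNCTIONS if not model_card.get(f)]
    if missing:
        raise RuntimeError(f"RMF functions lacking evidence: {sorted(missing)}")
    return True

card = {
    "govern": ["policy sign-off 2026-01"],
    "map": ["context assessment"],
    "measure": ["bias audit 2026-03"],
    "manage": ["incident runbook"],
}
print(release_gate(card))  # True: all four functions are evidenced
```

A gate like this turns a paper framework into a hard check in the build pipeline, which is the substance of McCormick's suggestion.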
Regulatory tech firms stand to gain, McCormick added. He named Palantir Technologies as a leader in AI governance software.
Software Standards Advance AI Safety
Verifiable AI software fosters trust, McCormick stated. Open-source tools such as Hugging Face Transformers enable safety checks on models in production, he noted.
The National Science Foundation (NSF) announced a USD 500 million allocation for AI safety research on April 1, 2026, per NSF press release. McCormick urged faster fund distribution to developers.
Microsoft integrated NIST standards into Azure AI in March 2026, according to the company's quarterly filings. Oracle issued similar updates, Oracle stated in a blog post.
These software advancements cut compliance costs by 30%, McCormick estimated. A Deloitte report dated March 15, 2026, confirms the figure. Finance firms deploy them for fraud detection and risk modeling.
Regulatory Tech Enhances AI Compliance
Regtech automates AI oversight. Theta Lake scans models for bias in real time, an approach McCormick endorsed during the interview.
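Real-time bias scanning of the kind described here can be illustrated with a simple demographic-parity monitor over a stream of model decisions. The field names and the 0.2 alert threshold below are hypothetical, not drawn from any vendor's product.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Return the largest gap in approval rates across groups.

    `decisions` is an iterable of (group, approved) pairs, where
    `approved` is a bool emitted by the model under observation.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = [approved[g] / total[g] for g in total]
    return max(rates) - min(rates)

# Example stream: group A approved 3 of 4, group B approved 1 of 4.
stream = [("A", True), ("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False), ("B", False)]
gap = demographic_parity_gap(stream)
ALERT_THRESHOLD = 0.2  # hypothetical compliance threshold
print(f"parity gap = {gap:.2f}, alert = {gap > ALERT_THRESHOLD}")
```

Run on the example stream, the gap is 0.50 and the monitor alerts; a production scanner would compute such metrics continuously over a sliding window.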
Sequoia Capital announced a USD 200 million investment in ComplyAI on April 10, 2026, per company statement. The funding targets AI compliance tools for banks.
AI trading algorithms demand monitoring, McCormick said. The Securities and Exchange Commission (SEC) reviewed related proposals on April 13, 2026, according to the agency's docket.
Blockchain strengthens audit trails. Chainalysis applies it to AI logs, as McCormick recommended. Finance applications include trade surveillance.
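The audit-trail idea can be sketched as a hash chain, where each AI decision log entry cryptographically commits to the one before it, so any retroactive edit is detectable. This is a minimal illustration of the technique under stated assumptions, not Chainalysis's actual implementation.

```python
import hashlib
import json

def append_entry(chain, entry):
    """Append a log entry whose hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"entry": entry, "prev": prev_hash},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for record in chain:
        expected = hashlib.sha256(
            json.dumps({"entry": record["entry"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev"] != prev:
            return False
        prev = record["hash"]
    return True

log = []
append_entry(log, {"model": "trade-surv-v2", "decision": "flag", "id": 1})
append_entry(log, {"model": "trade-surv-v2", "decision": "clear", "id": 2})
print(verify(log))                      # True on an untampered chain
log[0]["entry"]["decision"] = "clear"   # tamper with history
print(verify(log))                      # now False
```

The same property is what a blockchain-backed audit trail buys a trade-surveillance team: regulators can confirm that logged AI decisions were not rewritten after the fact.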
Markets React to McCormick's AI Policy Remarks
Crypto markets fell after McCormick's comments. Alternative.me's Fear & Greed Index dropped to 16 on April 12, 2026.
Bitcoin fell 3.4% to USD 71,070. Ethereum declined 4.9% to USD 2,198. Nasdaq data from April 13, 2026, show BNB dropped 3.3% to USD 592.19, while USDT stayed at USD 1.00.
The dip reflected concerns over regulation for decentralized AI projects. AI stocks showed mixed results: Nvidia shares rose 2.1% to USD 145.20 on April 13, Nasdaq data indicate, driven by safety tool demand. Regtech provider BigID climbed 4.5%. The S&P 500 AI Index gained 1.2%.
Decentralized networks like Render face scrutiny under the proposed AI policy, McCormick noted.
Tech and Finance Implications of AI Policy
Unchecked AI threatens finance volatility, McCormick warned. Regtech enables real-time monitoring of trading algorithms.
JPMorgan Chase piloted AI governance tools in Q1 2026, cutting error rates by 25%, per the bank's earnings report. Goldman Sachs tested similar systems, company filings confirm.
Google DeepMind accelerated work on safety features after the interview. OpenAI revealed governance upgrades on April 14, 2026.
The EU AI Act entered into force in August 2025. McCormick seeks U.S. alignment to ease transatlantic trade in AI technology.
Economic Outlook Under Proposed AI Policy
PwC's 2026 Global AI Report forecasts AI adding USD 15.7 trillion to global GDP by 2030. "Regulations ensure equitable benefits," McCormick said.
The World Economic Forum projects AI displacing 85 million jobs but creating 97 million by 2030. McCormick backs retraining programs funded by tech firms.
McCormick co-sponsors the AI Safety Act, which would mandate audits of high-risk AI systems. The Senate plans a vote in May 2026.
House Speaker Mike Johnson (R-La.) endorsed similar measures on April 11, 2026. TechNet voiced support for targeted audits. California has required AI disclosures since January 2026, but McCormick prefers federal AI policy standards.
The Brookings Institution analyzed the interview's AI policy impacts on April 13, 2026.