Artificial Intelligence began as an academic dream, a thought experiment about machines that could think. Today, it has become the foundation of economies, militaries, and digital societies. But as AI systems grow in power and autonomy, a fundamental question has emerged: Who controls AI: governments or corporations? The answer defines not only the future of technology, but also the future of power itself.

In the United States, the balance between public authority and private enterprise has always defined technological progress. The Internet was born from government-funded research but commercialized by private firms. Space exploration was once solely the domain of NASA, but today space is shared with private companies like SpaceX and Blue Origin. The same pattern now governs AI: public innovation, private acceleration, and political oversight hand-in-hand with exponential growth.
The American AI ecosystem is dominated by a handful of private corporations: the "Magnificent Seven" (Mag7) tech giants (Microsoft, Apple, Alphabet, Amazon, Meta, Tesla, and NVIDIA), along with emerging research powerhouses like OpenAI and Anthropic. Their resources dwarf those of most nations. Microsoft alone has committed billions to AI infrastructure and model development, NVIDIA's chips are the gold standard for training large models, and Google's DeepMind pushes the frontier of machine reasoning.
Governments, meanwhile, are trying to keep pace not by competing in innovation directly, but by controlling the conditions of AI deployment through executive orders, laws, standards, and ethical guidelines. America leads the way.
For decades, AI operated in a regulatory vacuum. It was treated as just another software discipline until the emergence of generative AI shattered that illusion. When ChatGPT reached 100 million users in two months, policymakers realized that AI was not a distant research field, but rather a mass-market force with immediate social, economic, and political consequences.
Executive Order 13859, signed February 11, 2019 by President Trump, titled "Maintaining American Leadership in Artificial Intelligence," was the first wide-ranging executive order on American AI governance. It was one of the earliest activities directly addressing artificial intelligence technology at the national level, years before the ChatGPT era we're in now. It set the tone for a White House-led approach to AI that continues to this day.
The order, signed during President Donald Trump's first administration, directed federal agencies to prioritize AI research and development and to foster workforce development. It aimed to make federal data and models available for AI development work, instructed agencies to create guidance for the use of AI in the industries they regulate, and called for an action plan to protect the United States' technological advantage in AI.
The Trump Administration's philosophy on AI is driven by a mandate to secure global technological supremacy and foster economic competitiveness. The strategy is characterized by an emphasis on private-sector leadership, minimal federal regulation, and the strategic integration of AI into national security and defense apparatuses. The prevailing belief is that rapid innovation, unimpeded by bureaucratic oversight, is the most effective means to secure American dominance in the 21st-century technological landscape.
The primary driver of the administration's AI policy is the belief that technological leadership is a direct result of economic freedom and the velocity of innovation. The philosophy dictates that the U.S. advantage stems from its technology companies. The government's role is not to dictate development, but to facilitate it by removing roadblocks.
Just as DARPA funding led to the ARPANET and, ultimately, the internet, there is a fundamental belief today in the value of strategic federal investments in research. Federal funding is directed toward specific, high-leverage areas, focusing on fundamental research and the construction of high-performance computing resources critical for training trillion-parameter models. This ensures the foundational infrastructure remains robust and domestically controlled. Here are some of the basic principles:
Talent Retention: Policies emphasize immigration and educational initiatives designed to attract and retain top global AI talent to prevent a "brain drain" to competitor nations.
Approach to Regulation and Ethics: In contrast to the broader regulatory efforts seen internationally (such as the European Union's AI Act), the administration advocates for a light-touch, non-prescriptive regulatory environment.
Skepticism of Broad Rules: The philosophy holds that imposing wide-ranging, technology-specific regulations could stifle innovation and inadvertently create compliance burdens that disproportionately affect smaller companies.
Sector-Specific Oversight: Regulation is preferred only in sectors where AI poses a measurable risk to human safety or financial stability (e.g., specific medical devices or financial lending algorithms). Even here, the focus is on performance standards rather than prescriptive requirements.
Voluntary Standards: The government encourages industry-led, voluntary technical standards and best practices for areas like transparency and data security, rather than mandated federal rules.
National Security and Defense Integration: AI is viewed first and foremost as a critical asset for national defense and intelligence, making its military application a top priority.
Autonomous Systems Development: The administration actively supports the research and rapid deployment of fully autonomous military technology (Lethal Autonomous Weapon Systems). The philosophy accepts the necessity of AI systems running conflict simulations and deploying resources based on pure predictive probability, prioritizing speed and decisive advantage on the battlefield.
Cybersecurity Focus: Significant resources are dedicated to leveraging AI for offensive and defensive cybersecurity to protect critical infrastructure, including the physical AI compute clusters and the high-speed fiber lines of the data center network.
Workforce and Societal Impact: The administration acknowledges the societal shifts caused by automation (the "Silent Disruption" leading to displacement), but frames the response through economic growth.
Emphasis on Growth: The primary solution to job displacement caused by AI agents (like those displacing customer service or accounting roles) is to accelerate economic growth, arguing that new, higher-skilled jobs will emerge and absorb the displaced workforce.
Retraining Focus: While not advocating for structural intervention to slow automation, the policy supports targeted workforce development and retraining initiatives focused on high-demand, technical skills (like network operations and specialized maintenance roles created by the data centers). This ensures American workers are prepared for the new technical economy created by AI.
While governments regulate, corporations build, and they build fast. AI is a competitive business, where speed can mean the difference between dominance and obsolescence. OpenAI's GPT, Anthropic's Claude, Microsoft's Copilot, and Google's Gemini are locked in a race to create increasingly capable and profitable AI systems.
But this rapid innovation comes at a cost: transparency, accountability, and control often lag behind capability. Model weights are guarded as trade secrets. Data provenance is obscure. Alignment mechanisms, the ways in which AIs are taught to behave safely, are proprietary.
Corporations argue that regulation must not stifle progress. They warn of a regulatory overreach that could slow America's innovation and allow China or Europe to take the lead. Their message to Washington is clear: Let us innovate, and we'll self-regulate. President Trump agrees: "AI is far too important to smother it in bureaucracy at this early stage."
Trump has taken aim at state laws regulating AI, threatening to withhold federal funding from states that pass AI laws deemed burdensome to developing the technology. "We also have to have a single federal standard, not 50 different states regulating this industry of the future," Trump said. "We need one common-sense federal standard that supersedes all states; supersedes everybody, so you don't end up in litigation with 43 states at one time."
The portion of Trump's plan targeting states is getting pushback from some in the industry. For example, Anthropic released a post responding to Trump's AI plan. "We share the Administration's concern about overly-prescriptive regulatory approaches creating an inconsistent and burdensome patchwork of laws," the company said, but added, "We continue to oppose proposals aimed at preventing states from enacting measures to protect their citizens from potential harms caused by powerful AI systems, if the federal government fails to act." The key word is "if": should the federal government fail to act, at least one tech company would welcome appropriate state legislation.
Several U.S. states have already passed laws regulating artificial intelligence. Texas, California, Illinois, and Colorado are leading with AI-specific statutes, while others are adapting existing privacy and consumer protection laws. The laws are mainly directed at transparency, discrimination, and consumer protection, though Congress is now debating whether to override state-level AI laws.
California has proposed and enacted bills requiring impact assessments for automated decision systems used in hiring, housing, and lending. The state also has a strong focus on consumer privacy through the California Consumer Privacy Act (CCPA), which applies to AI-driven data use.
Colorado passed a law requiring risk management frameworks for companies deploying high-risk AI systems. The law includes transparency obligations and consumer rights to opt out of certain automated decisions.
Illinois has enacted the AI Video Interview Act, requiring employers to notify applicants when AI is used in hiring and to obtain consent. It has also expanded rules around biometric data through the Biometric Information Privacy Act, which indirectly regulates AI systems.
Texas passed a far-reaching AI law addressing child protection, data privacy, discrimination, and accountability for Big Tech. Lawmakers argue it prevents harmful uses of AI, such as child sexual abuse material and biased algorithms.
Across these state laws, four themes recur (a minimal code sketch after this list shows how one such requirement might look in practice):
Transparency: Informing consumers when AI is used (hiring, lending, healthcare).
Bias and Discrimination: Preventing unfair outcomes in employment, housing, or policing.
Privacy: Protecting biometric and personal data from misuse.
Child Protection: Safeguarding minors from harmful AI-generated content.
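To make the transparency theme concrete, here is a minimal sketch, in Python, of the notice-and-consent gate implied by a law like the Illinois AI Video Interview Act. Every name here (Applicant, can_run_ai_screening, the field names) is a hypothetical illustration, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    notified_of_ai_use: bool   # employer disclosed that AI will evaluate the interview
    consent_given: bool        # applicant explicitly agreed to AI evaluation

def can_run_ai_screening(applicant: Applicant) -> bool:
    """Gate the AI hiring tool behind notice and consent.

    Mirrors the notice-and-consent pattern in laws like the Illinois
    AI Video Interview Act: no disclosure or no consent means no AI.
    """
    return applicant.notified_of_ai_use and applicant.consent_given

# Usage: screen only applicants who were told about the AI and agreed.
applicants = [
    Applicant("Ada", notified_of_ai_use=True, consent_given=True),
    Applicant("Ben", notified_of_ai_use=True, consent_given=False),
]
eligible = [a for a in applicants if can_run_ai_screening(a)]
print([a.name for a in eligible])  # ['Ada']
```

The point of the sketch is that transparency rules ultimately become a boolean check sitting in front of the model, enforced before any inference runs.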
Self-regulation has limits. When private AI systems begin influencing public opinion, national security, and labor markets, the argument that they are "just products" no longer holds. The line between private enterprise and public infrastructure begins to blur. Is it too soon to regulate, to smother AI in bureaucracy at this early stage, as the President suggests? Or are we merely singing a familiar American refrain of innovation versus regulation?
We've sung that song before in American history, from the Early Industrial Era (1800s-early 1900s), to the Progressive Era and New Deal (1900s-1930s), to Post-WWII and the Cold War (1940s-1970s), and the Late 20th Century Tech Boom (1980s-2000s). These epochs produced such laws as the Interstate Commerce Act of 1887 and the Securities Exchange Act of 1934, along with environmental statutes, antitrust cases, and deregulation in industries like the airlines. Time will tell what legislation AI will require in America.
At the core of the AI control debate lies data: who collects it, who owns it, and who benefits from it. AI companies depend on vast datasets scraped from the Internet, often including copyrighted works, personal data, and sensitive information. Governments, aware of the strategic importance of data, are moving to assert data sovereignty.
The U.S. is crafting rules to restrict the export of sensitive data to foreign adversaries, while China enforces strict localization of domestic data within its borders. The European Union, through GDPR and the AI Act, gives individuals legal rights over how their data can train AI models. In this global contest, data is the new oil, and those who refine it into intelligence hold the true power.
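For model builders, individual rights over training data translate into filtering at ingestion time. The following is a minimal sketch, assuming a corpus whose records carry hypothetical consent and jurisdiction fields; real GDPR compliance is far broader (lawful basis, purpose limitation, erasure), so treat this only as an illustration of the idea.

```python
# Hypothetical training corpus: each record tagged with a consent flag
# and a jurisdiction (field names are illustrative assumptions).
records = [
    {"text": "public blog post",  "consent": True,  "jurisdiction": "EU"},
    {"text": "scraped forum bio", "consent": False, "jurisdiction": "EU"},
    {"text": "licensed article",  "consent": True,  "jurisdiction": "US"},
]

def eligible_for_training(record: dict) -> bool:
    """Exclude EU-jurisdiction records that lack explicit consent.

    This models (loosely) the GDPR-style right of individuals to
    control whether their data trains a model; other jurisdictions
    pass through unchanged in this simplified sketch.
    """
    if record["jurisdiction"] == "EU":
        return record["consent"]
    return True

training_set = [r for r in records if eligible_for_training(r)]
print(len(training_set))  # 2 of 3 records survive the filter
```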
Data sovereignty is the idea that data, particularly that belonging to a nation's citizens or government, must be subject to the laws and governance structures of that nation, regardless of where the data is physically stored or processed. To understand this concept, consider the imaginary country of Veridia and its data predicament.
The nation of Veridia had long embraced the global cloud. Its health records, university research, and most government communications were processed and stored efficiently by vast, nameless data centers located thousands of miles away, primarily under the legal jurisdiction of foreign powers. It was cheap, it was fast, and for years, it was convenient.
Then came the crisis known simply as "The Forecast."
Veridia relied heavily on a predictive AI model leased from a large international corporation (we'll call it Big Data) to forecast resource allocation for its crucial agricultural sector. Big Data's AI suddenly recommended drastic, inexplicable cuts to water reserves for a specific region. Perplexed by this sudden move, Veridia's water minister asked for the model's underlying logic and the real-time sensor data that drove the decision.
Big Data denied the request, citing the foreign jurisdiction where the data resided, a jurisdiction with weaker privacy and disclosure laws. The company claimed the underlying data and the model's weights were proprietary and legally inaccessible to Veridia.
The minister realized the grim truth: Veridia had outsourced not just its data storage, but its national decision-making capacity. The data was the collective memory of the nation: its weather patterns, its soil composition, its people's health. It was Veridia's most strategic resource, and Veridia had no control over it. The episode demonstrated the strategic importance of data, especially for AI development, which depends on such data to train trillion-parameter models.
In the wake of The Forecast disaster, Veridia passed the National Data Integrity Act. It was their modern-day equivalent of building a national border wall, except this wall was built of code as much as legal mandate.
The Act didn't ban foreign cloud providers; instead, it established a clear rule: Any data pertaining to Veridian citizens, critical infrastructure, or government operations must be processed and stored exclusively on servers physically located within Veridia's territorial borders.
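Translated into engineering, the Act's rule is essentially a guard at the data-routing layer. Here is a minimal sketch in Python, with hypothetical region names and data categories; a real enforcement stack would also involve contracts, audits, and network-level controls.

```python
# Hypothetical enforcement of a residency rule like Veridia's National
# Data Integrity Act: covered data may only be routed to in-country servers.

ALLOWED_REGION = "veridia-central-1"   # assumed name of the in-country region
COVERED_CATEGORIES = {"citizen", "critical_infrastructure", "government"}

class ResidencyViolation(Exception):
    """Raised when covered data would leave national jurisdiction."""

def check_residency(record_category: str, target_region: str) -> None:
    """Refuse to route covered data to servers outside the national border."""
    if record_category in COVERED_CATEGORIES and target_region != ALLOWED_REGION:
        raise ResidencyViolation(
            f"{record_category} data may not be processed in {target_region}"
        )

# Uncovered data may still use foreign clouds...
check_residency("anonymized_weather", "foreign-west-2")  # passes silently

# ...but covered data must stay home.
try:
    check_residency("citizen", "foreign-west-2")
except ResidencyViolation as err:
    print("blocked:", err)
```

The Act's consequences followed directly from this kind of boundary.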
This action had two profound consequences, demonstrating the assertion of data sovereignty:
Mandated Infrastructure: To continue serving the Veridian market, major international tech companies were forced to spend billions building dedicated AI data center infrastructure within Veridia. This addressed the issue of geographic concentration of AI infrastructure, bringing investment and high-tech jobs to the country.
Legal Jurisdiction: Because the physical data centers now sat on Veridian soil, any legal dispute, audit, or access request was subject immediately and unequivocally to Veridian courts and laws. The data was no longer protected by the legal shields of foreign countries. The nation reclaimed control over its digital destiny.
Veridia learned that in the age of AI, sovereignty wasn't just about controlling physical borders; it was about drawing a clear, undeniable legal boundary around the national data that fuels the new intelligent economy.
A growing movement within academia and civil society argues that AI should not be monopolized by corporate interests. Instead, they advocate for publicly funded AI infrastructure: shared datasets, open models, and community-driven research.
Initiatives like OpenAI's original nonprofit mission, EleutherAI, and Hugging Face's open model hub embody this philosophy: democratizing access to AI so that innovation benefits everyone, not just the few. The U.S. government has also begun to invest in "AI for the public good," funding initiatives in education, healthcare, and climate modeling.
The tension remains, however: can open AI remain safe? Can public research keep up with the billion-dollar budgets of private labs?
The question of control is not merely technical or economic; it is ethical. Who decides what an AI system can or cannot do? Should corporations be the moral arbiters of machines used by billions, or should governments, accountable to the public, set those limits?
Some argue that AI ethics cannot be outsourced. Governments must enforce transparency, fairness, and safety. Others counter that bureaucratic control could choke creativity, replacing innovation with compliance.
The ideal path lies between the extremes: a co-regulatory model, where private innovation thrives within a transparent, accountable framework enforced by public institutions. This model mirrors the structure of the aviation and pharmaceutical industries, sectors where innovation continues, but under strict oversight for safety and reliability.
As AI continues to permeate life and governance, a new legal architecture, a kind of "AI Constitution," is emerging. It will define rights, responsibilities, and limits for intelligent systems, much as earlier generations of law defined them for corporations and citizens.
The U.S., with its blend of free-market dynamism and democratic governance, is uniquely positioned to pioneer this balance. But success will require coordination between Congress, federal agencies, and the very corporations that now lead the field.
Ultimately, the question is not who controls AI, but how AI is controlled: through secrecy and competition, or through openness and shared accountability. The answer will determine whether artificial intelligence remains a tool of progress or becomes an unaccountable power.
AI has blurred the old boundaries between state and market. Governments need the expertise and resources of corporations; corporations need the legitimacy and stability provided by governance. The challenge is to find equilibrium: a partnership where neither dominates and both are accountable.
In the end, the story of AI in America may not be about conquest or control at all. It may be about coexistence: a delicate balance between innovation and regulation, freedom and responsibility, human ambition and collective wisdom.
For as AI grows smarter, the true test will not be whether machines can govern themselves, but whether humanity can.