Following the viral success of ChatGPT in November 2022, 2023 delivered a rapid-fire series of breakthroughs, controversies, and massive investments. It was the year AI moved from a niche research interest to a global cultural and economic phenomenon, going from "cool lab toy" to "global celebrity who can't stay out of trouble." In short, it was the breakout year for modern AI, especially generative AI.

If 2022 was about wonder, then 2023 was about utility. We stopped asking "What is AI?" and started asking "How do I use it at my job?" and "How do we stop it from taking over?" Big software launches included Google's Bard, Meta's LLaMA family, Mistral's open-source models, and a wave of GPT-like systems from U.S., European, and Chinese firms.
As we look back, 2023 was the year the world realized that the relationship between humans and computers had changed forever. Here's how it happened:
The year was defined by the transition from simple text-in, text-out models to multimodal systems that can see, hear, and speak: 2023 was the first year the public got broad access to models that handle text plus images and, experimentally, audio and video.
GPT-4 (March): OpenAI's GPT-4 added image input; users could, for example, upload a fridge photo and ask AI what to cook (see the sketch below). Or call delivery. GPT-4 also set a new benchmark for reasoning, scoring around the 90th percentile on the Uniform Bar Exam and interpreting complex visual data. Now, AI can be your counselor (therapist) and your counselor (attorney).
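For developers, the fridge-photo trick boiled down to a single API call. Here is a minimal sketch assuming OpenAI's Python SDK (v1.x); the model name, image URL, and question are placeholders, and exact parameters have shifted across SDK versions.

```python
# Minimal sketch: asking a vision-capable GPT-4 model about a photo.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the
# environment; the model name and image URL below are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model; "gpt-4-vision-preview" in 2023
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Here is my fridge. What can I cook tonight?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/my-fridge.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```

The key design change from earlier chat APIs is that a message's content can be a list mixing text and image parts instead of a single string.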
In perhaps the greatest hallucination of all time, Google's AI-powered chatbot Bard claimed during its launch demo that the James Webb Space Telescope (JWST) took the very first picture of a planet outside our solar system. Astronomers quickly pointed out that it hadn't (the first exoplanet images came from ground-based telescopes in 2004), highlighting the risks of relying on AI-generated answers for critical facts. Google acknowledged the error and promised a more rigorous testing process. The result? Alphabet's market value dropped roughly $100 billion in one day. And Google fired Bard.
Google Gemini (December): Over a year after ChatGPT, Google responded with Gemini (goodbye Bard), designed from the ground up to be natively multimodal across text, code, audio, image, and video. "We're back, baby!"
Open-Source Surge: Meta's release of Llama 2 democratized high-quality AI, letting developers run powerful models locally and sparking a massive wave of innovation outside Big Tech.
Text-to-image systems like Midjourney, DALL-E, and Stable Diffusion improved fast, while text-to-video moved from research to early products (Pika, Runway Gen-2), signaling a shift in how video content can be created, along with a wave of cat videos (see AI Trains on Cats).
Workflows and productivity: Enterprises and individuals began using chatbots to draft emails, code, marketing copy, and analyses. ChatGPT as office copilot became a standard part of knowledge work in some organizations, while others were trying to figure out what to do with AI.
Creative industries: Generative image, music, and video tools started to transform design, advertising, and entertainment. Adobe's Firefly models integrated gen-AI safely into Photoshop and across the Creative Cloud apps. Canva introduced its virtual design assistant. Tilly Norwood's ready to emerge.
Science and healthcare: Deep learning models improved lung-cancer risk prediction from CT scans. AlphaFold's protein structure predictions continued to reshape biotech. And Neuralink, Elon Musk's brain-implant startup that aims to put AI-powered chips in people's brains, received FDA approval to begin human trials.
Consolidation at Google: In April 2023, DeepMind merged with Google Brain to form a unified division known as Google DeepMind.
Autonomous agentic systems: Projects like Auto-GPT showed early prototypes of LLMs that could plan multi-step tasks and call tools with minimal prompting, sparking broad interest in AI agents (sketched below).
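The core idea behind these early agents is a simple loop: the model proposes an action, the harness executes it as a tool call, and the result is fed back in as the next observation. Here is a hypothetical sketch of that loop, not Auto-GPT's actual code; the `llm()` helper, the prompt format, and the toy tools are all stand-ins.

```python
# Hypothetical sketch of an Auto-GPT-style agent loop: the LLM picks a
# tool, the harness runs it, and the observation is fed back in.
import json

def llm(history: list[dict]) -> str:
    """Placeholder for any real chat-completion API call."""
    raise NotImplementedError

TOOLS = {
    "search": lambda q: f"(pretend search results for {q!r})",
    "write_file": lambda text: "(pretend file written)",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [
        {
            "role": "system",
            "content": (
                "Achieve the goal step by step. Reply with JSON: "
                '{"tool": <name>, "arg": <string>} or '
                '{"tool": "finish", "arg": <final answer>}. '
                f"Available tools: {list(TOOLS)}"
            ),
        },
        {"role": "user", "content": goal},
    ]
    for _ in range(max_steps):
        action = json.loads(llm(history))
        if action["tool"] == "finish":
            return action["arg"]
        observation = TOOLS[action["tool"]](action["arg"])
        history.append({"role": "assistant", "content": json.dumps(action)})
        history.append({"role": "user", "content": f"Observation: {observation}"})
    return "(step budget exhausted)"
```

The step budget is the only real safety rail here, which is roughly why early agents were famous for looping forever and burning API credits.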
Constitutional AI: Anthropic's approach of training models to follow a set of written principles, instead of relying on human feedback alone, became one of the year's most discussed safety innovations.
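In rough terms, the supervised phase of the method has the model critique and rewrite its own draft against each principle, then fine-tunes on the revised answers. A hypothetical sketch of that critique-and-revise step follows; `llm()` is a stand-in for a model call, and the two principles are abbreviated paraphrases, not Anthropic's actual constitution.

```python
# Hypothetical sketch of Constitutional AI's critique-and-revise step.
# `llm()` stands in for a model call; the principles are paraphrases.
def llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    raise NotImplementedError

CONSTITUTION = [
    "Choose the response that is least harmful or dangerous.",
    "Choose the response that is most honest and transparent.",
]

def constitutional_revision(user_prompt: str) -> str:
    draft = llm(user_prompt)
    for principle in CONSTITUTION:
        critique = llm(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Identify any way the response violates the principle."
        )
        draft = llm(
            f"Principle: {principle}\nResponse: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response to fully comply with the principle."
        )
    return draft  # revised answers become fine-tuning targets
```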
Watermarking and detection: Research and policy attention focused on watermarking and other methods for labeling AI-generated text and images to fight plagiarism and misinformation (one detection scheme is sketched below). Schools started tackling AI plagiarism by using detection software and redesigning assignments to focus on in-person, critical-thinking tasks.
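One widely discussed 2023 scheme, the "green list" watermark of Kirchenbauer et al., biases generation toward a pseudorandom subset of the vocabulary seeded by the preceding token, so a detector can later test whether suspiciously many tokens fall in that subset. Below is a simplified sketch of the detection side only; the seeding trick is a stand-in for the paper's hash function, and the parameters are illustrative.

```python
# Simplified sketch of "green list" watermark detection, after
# Kirchenbauer et al. (2023). Each previous token pseudorandomly marks
# a fraction GAMMA of the vocabulary "green"; watermarked text contains
# far more green tokens than chance predicts.
import math
import random

GAMMA = 0.25         # fraction of vocabulary on the green list
VOCAB_SIZE = 50_000  # assumed tokenizer vocabulary size

def is_green(prev_token: int, token: int) -> bool:
    # Stand-in for the paper's hash-seeded vocabulary partition: mark a
    # GAMMA fraction of (prev, next) token pairs as green.
    return random.Random(prev_token * VOCAB_SIZE + token).random() < GAMMA

def detection_z_score(tokens: list[int]) -> float:
    """z-score of the observed green count vs. the unwatermarked null."""
    n = len(tokens) - 1
    assert n > 0, "need at least two tokens"
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    expected = GAMMA * n
    variance = GAMMA * (1 - GAMMA) * n
    return (hits - expected) / math.sqrt(variance)

# A z-score above roughly 4 is strong evidence the text was watermarked.
```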
The industry witnessed dramatic shifts in power and leadership.
The Microsoft-OpenAI Alliance: Microsoft integrated AI into nearly every product under the Copilot brand, embedding it across the Windows and Office ecosystems and solidifying its lead in enterprise AI.
The OpenAI Leadership Crisis (November): In a week that gripped the tech world, CEO Sam Altman was fired and then reinstated following a massive employee revolt, highlighting the internal tensions between effective altruism (safety-first) and effective accelerationism (market-first).
NVIDIA's Trillion-Dollar Moment: As every company scrambled for compute power, NVIDIA became the primary supplier of the AI revolution, with its H100 chips becoming the most sought-after hardware in the world.
As AI entered the creative arts, the human vs. machine debate reached a boiling point.
The Hollywood Strikes: The SAG-AFTRA and WGA strikes were fueled by concerns over digital doubles and AI-generated scripts, eventually resulting in historic protections for human creators.
Copyright Litigation: Leading authors and The New York Times filed landmark lawsuits against AI companies, arguing that their work was used for training without compensation or consent.
Generative Media: Tools like Midjourney v5 and Runway Gen-2 reached a level of photorealism that made deepfakes a mainstream concern, exemplified by the viral (but fake) Pope in a Puffer Jacket image.
2023 marked the end of the unregulated AI era as governments around the world began to realize that AI was moving faster than policy. Public debates intensified over job disruption, deepfakes, bias, and existential risk, with governments from the G7 to national parliaments holding AI hearings and summits and taking action.
The EU AI Act: The European Union reached a provisional agreement on the world's first comprehensive AI law, categorizing AI systems by risk level. The Act set strict rules for high-risk AI and banned certain uses outright, such as most real-time biometric identification in public spaces.
The Bletchley Declaration (November): World leaders gathered in the UK for the first-ever AI Safety Summit, signing a declaration to collaborate on managing the potential risks of frontier AI.
The U.S. Executive Order (October): The White House issued Executive Order 14110 on Safe, Secure, and Trustworthy AI, requiring developers of the most powerful AI systems to share safety test results with the government. The order was rescinded in early 2025 amid a series of new executive orders from the Trump Administration.
Surveys like McKinsey's State of AI 2023 found that a majority of organizations had adopted or were experimenting with generative AI, and that early adopters were already reporting productivity gains and new business models.
Analysts and researchers converged on a similar view: 2023 will be remembered as the year generative, multimodal AI jumped from labs into everyday life, forcing technology, business, and policy to adjust at unprecedented speed.
