AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3
Interactive timeline of every major Large Language Model. Filterable by open/closed source, searchable, 54 organizations tracked.
Capabilities (4 decomposed)
historical llm tracking and visualization
Medium confidence. This capability compiles a timeline of 171 large language models (LLMs), from the introduction of the Transformer architecture in 2017 to the anticipated release of GPT-5.3 in 2026. It uses a structured database to categorize and chronologically arrange models by release date, architecture, and notable features, letting users visualize the evolution of LLMs over time. The timeline is interactive, so users can explore significant milestones and advancements in the field.
The timeline is uniquely structured to provide a chronological and visual representation of LLMs, making it easier to grasp the progression of technology at a glance.
More comprehensive and visually engaging than static lists or articles on LLMs, providing an interactive experience.
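The site's actual data model is not published; the sketch below shows one way such a timeline could be stored and chronologically ordered. All field names (`name`, `organization`, `release_year`, `open_source`) are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    # Hypothetical schema; the real dataset's fields are not published.
    name: str
    organization: str
    release_year: int
    open_source: bool

def timeline(entries: list[ModelEntry]) -> list[ModelEntry]:
    """Order entries chronologically, as a timeline view would render them."""
    return sorted(entries, key=lambda e: e.release_year)

entries = [
    ModelEntry("LLaMA", "Meta", 2023, True),
    ModelEntry("Transformer", "Google", 2017, True),
    ModelEntry("GPT-2", "OpenAI", 2019, True),
]
ordered = timeline(entries)
# ordered[0].name == "Transformer"
```

A stable sort on release year is all the chronological view needs; richer renderings (grouping by organization, coloring by license) layer on top of the same ordering.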
model feature comparison
Medium confidence. This capability lets users compare features of different LLMs side by side, drawing on a structured dataset that includes model size, architecture type, training data, and performance metrics. The comparative view makes it easy to identify strengths and weaknesses among models and supports informed decisions when selecting a model for a specific application.
Utilizes a structured dataset that allows for detailed side-by-side comparisons, which is more dynamic than traditional text-based comparisons.
Offers a more granular and visual comparison than typical articles or tables, enhancing user understanding.
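A side-by-side view of this kind reduces to a field-by-model pivot over the dataset. The sketch below uses assumed field names (`params_b`, `architecture`, `year`), not the site's real schema:

```python
# Hypothetical records with illustrative values (largest published variants).
MODELS = {
    "GPT-2": {"params_b": 1.5, "architecture": "decoder-only", "year": 2019},
    "LLaMA": {"params_b": 65.0, "architecture": "decoder-only", "year": 2023},
}

def compare(models: dict, fields: list[str]) -> dict:
    """Build a side-by-side table: one row per field, one column per model."""
    return {f: {name: rec[f] for name, rec in models.items()} for f in fields}

table = compare(MODELS, ["params_b", "year"])
# table["params_b"] == {"GPT-2": 1.5, "LLaMA": 65.0}
```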
interactive model exploration
Medium confidence. This capability provides an interactive interface for exploring LLMs, with detailed information about each model's architecture, training data, and use cases. Users can filter and search models by criteria such as release year or architecture type, making it quick to find relevant models.
The interactive exploration feature allows for dynamic filtering and searching, which is more engaging than static lists or documents.
Provides a more intuitive and user-friendly experience compared to traditional databases or spreadsheets.
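Filtering and search like this amounts to predicate checks over the model records. A minimal sketch, again with assumed field names:

```python
# Illustrative records; the real dataset's contents are not published.
MODELS = [
    {"name": "BERT", "architecture": "encoder-only", "year": 2018},
    {"name": "GPT-3", "architecture": "decoder-only", "year": 2020},
    {"name": "T5", "architecture": "encoder-decoder", "year": 2019},
]

def filter_models(models, year=None, architecture=None, query=None):
    """Keep records matching every criterion that was supplied."""
    out = []
    for m in models:
        if year is not None and m["year"] != year:
            continue
        if architecture is not None and m["architecture"] != architecture:
            continue
        if query is not None and query.lower() not in m["name"].lower():
            continue
        out.append(m)
    return out
```

Combining criteria is an AND across the supplied filters, which matches how faceted search on listing pages usually behaves.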
milestone highlighting
Medium confidence. This capability highlights significant milestones in LLM development, such as the introduction of new architectures or breakthroughs in training techniques. Milestones are marked on the timeline with contextual information and links to relevant research papers or articles, enriching the historical context of each event.
Provides a curated selection of milestones with contextual information, making it easier to understand their significance in the timeline of LLMs.
More focused and informative than generic timelines or lists, offering deeper insights into each event.
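Milestone highlighting can be modeled as curated events, each with an optional reference link, attached to years on the timeline. In the sketch below the arXiv link is the real "Attention Is All You Need" paper; the rest of the structure is an illustrative assumption:

```python
# Curated milestones; only the 2017 arXiv link is a real reference,
# the record structure itself is hypothetical.
MILESTONES = [
    {"year": 2017, "event": "Transformer architecture introduced",
     "ref": "https://arxiv.org/abs/1706.03762"},
    {"year": 2022, "event": "Instruction-tuned chat models popularized",
     "ref": None},
]

def milestones_for(year: int) -> list[dict]:
    """Return the milestone entries to highlight for a given timeline year."""
    return [m for m in MILESTONES if m["year"] == year]
```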
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3, ranked by overlap. Discovered automatically through the match graph.
How LLMs Work – Interactive visual guide based on Karpathy's lecture
All content is based on Andrej Karpathy's "Intro to Large Language Models" lecture (youtube.com/watch?v=7xTGNNLPyMI). I downloaded the transcript and used Claude Code to generate the entire interactive site from it — single HTML file. I find it useful to revisit this content time
Ape
Revolutionize LLM prompts with advanced tracing and automated...
DeepChecks
Automates and monitors LLMs for quality, compliance, and...
Open LLM Leaderboard
Hugging Face open-source LLM leaderboard — standardized benchmarks, automatic evaluation.
Jeremy Howard’s Fast.ai & Data Institute Certificates
The in-person certificate courses are not free, but all of the content is available on Fast.ai as MOOCs.
Baserun
LLM testing and monitoring with tracing and automated evals.
Best For
- ✓ AI researchers analyzing trends in language model development
- ✓ Developers selecting LLMs for specific applications
- ✓ AI developers and researchers looking for specific model characteristics
- ✓ Students and professionals studying AI history
Known Limitations
- ⚠ Data is limited to publicly available information and may not include proprietary models.
- ⚠ May not include subjective performance evaluations or user experiences.
- ⚠ Limited to the models included in the database; may not cover all recent developments.
- ⚠ Milestones are subjective and may vary based on different perspectives.
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Show HN: AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3 (2026)