Gemini Ultra 1.0

Gemini Ultra 1.0 was Google DeepMind's first frontier-class multimodal model, announced in December 2023 and made generally available with Gemini Advanced in February 2024. It was the first LLM to score above the human-expert baseline on MMLU (≈90% against a ≈89.8% baseline) and introduced native image, audio, and video reasoning, laying the groundwork for the Pro and Flash families that followed.

Model specs

Vendor
Google
Family
Gemini 1.0
Released
2024-02
Context window
32,768 tokens
Modalities
text, vision, audio, video
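
The 32,768-token context window listed above is the main capacity constraint when sizing prompts for a 1.0-era model. A minimal pre-flight sketch, assuming the common rough heuristic of ~4 characters per token (the exact count depends on the model's own tokenizer, so a production check should use the provider's token-counting API instead):

```python
# Rough pre-flight check against Gemini 1.0's 32,768-token context window.
# The ~4 chars/token ratio is an assumption (a common rule of thumb for
# English text), not the model's real tokenizer.

CONTEXT_WINDOW = 32_768  # Gemini 1.0 context window, in tokens

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate from character count."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(prompt: str, reserved_for_output: int = 2_048) -> bool:
    """True if the prompt plus an output budget fits in the window."""
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("Summarise this paragraph."))   # short prompt fits
print(fits_in_context("x" * 200_000))                 # ~50K tokens: too big
```

The `reserved_for_output` budget matters because input and output share the same window on 1.0-era models; later Gemini Pro models relax this with a 1M-token window.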

Strengths

  • First LLM to beat the human-expert baseline on MMLU
  • Natively multimodal across image, audio, and video
  • Served as the reasoning engine for Gemini Advanced at launch

Limitations

  • Superseded by Gemini 1.5 Pro, 2.0, and 2.5; now mainly of historical interest
  • 32K-token context window is small next to the 1M-token window of later Gemini Pro models
  • No longer the default model behind Gemini Advanced

Use cases

  • Early Google Workspace Gemini Advanced integrations
  • Baseline for Gemini 1.5/2.x benchmark comparisons
  • Legacy multimodal research experiments

Benchmarks

Benchmark    Score     As of
MMLU         ≈90.0%    2024-02
HumanEval    ≈74%      2024-02
GSM8K        ≈94%      2024-02

Frequently asked questions

What is Gemini Ultra 1.0?

Gemini Ultra 1.0 is Google DeepMind's original top-tier multimodal LLM, launched in February 2024 as the flagship of the Gemini 1.0 family and powering Gemini Advanced at its debut.

Is Gemini Ultra 1.0 still available?

Google has transitioned Gemini Advanced to the Gemini 1.5, 2.0, and 2.5 Pro families. Gemini Ultra 1.0 is mostly a historical benchmark reference today.

What is Gemini Ultra 1.0 best remembered for?

It is best remembered for being the first LLM to beat the ~89.8% human-expert baseline on MMLU, and for popularising truly multimodal frontier models capable of reasoning over text, image, audio, and video.
