Reka Core

Reka Core is the flagship multimodal model from Reka AI, a San Francisco lab with deep research roots (ex-DeepMind, Meta, Google Brain). Launched in April 2024, it was an unusually ambitious debut — combining text, image, audio, and video understanding with a 128k context window, and positioning Reka as a frontier-tier player alongside OpenAI, Anthropic, and Google.

Model specs

Vendor: Reka AI
Family: Reka
Released: 2024-04
Context window: 128,000 tokens
Modalities: text, vision, audio, video
Input price: $3 / M tokens
Output price: $15 / M tokens
Pricing as of: 2026-04-20
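At the listed rates, per-request cost is simple arithmetic over token counts. A minimal sketch, assuming only the $3 / M input and $15 / M output prices above (the function name and example token counts are illustrative, not part of any Reka API):

```python
# Rates taken from the spec block above (USD per million tokens).
INPUT_PER_M = 3.00
OUTPUT_PER_M = 15.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 100k-token prompt (near the 128k window) with a 2k-token reply:
print(f"${estimate_cost(100_000, 2_000):.2f}")  # prints "$0.33"
```

Note that output tokens cost 5x input tokens here, so long generations dominate the bill even when the prompt fills most of the context window.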

Strengths

  • Native support for video and audio in one endpoint
  • 128k context alongside full multimodality
  • Enterprise-friendly deployment options (VPC, private cloud)

Limitations

  • Smaller ecosystem than OpenAI / Google / Anthropic
  • Superseded on some tasks by GPT-4o and Gemini 2.5 Pro
  • Less public third-party benchmarking than peer frontier models

Use cases

  • Multimodal agents that need image, audio, and video in one model
  • Enterprise-hosted alternatives to GPT-4o / Gemini
  • Research on non-big-three frontier models
  • Video question answering over short clips

Benchmarks

Benchmark        Score  As of
MMMU             ~56%   2026-04
MMLU             ~83%   2026-04
Perception Test  ~60%   2026-04

Frequently asked questions

What is Reka Core?

Reka Core is Reka AI's flagship multimodal frontier model — able to process text, images, audio, and video with a 128k context window. It launched in April 2024 as Reka's most capable public model.

Who builds Reka Core?

Reka AI is a San Francisco lab founded in 2022 by former DeepMind, Meta, and Google Brain researchers. The team ships frontier multimodal models with a focus on enterprise deployments.

Sources

  1. Reka AI homepage — accessed 2026-04-20
  2. Reka Core technical report (arXiv) — accessed 2026-04-20