<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>Vivekananda School of Engineering &amp; Technology — VIPS Learn</title><description>AI education by VSET — models, MCPs, agents, and engineering practice</description><link>https://learn.engineering.vips.edu/</link><language>en-IN</language><item><title>Yi-Large</title><link>https://learn.engineering.vips.edu/ai-models/01-ai-yi-large/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/01-ai-yi-large/</guid><description>01.AI&apos;s Yi-Large is Kai-Fu Lee&apos;s flagship Chinese/English LLM, a closed-weights 2024 release optimised for reasoning, multilingual chat, and enterprise RAG.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Adobe Firefly Image 3</title><link>https://learn.engineering.vips.edu/ai-models/adobe-firefly-3/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/adobe-firefly-3/</guid><description>Firefly Image 3 is Adobe&apos;s commercially safe generative image model, trained on licensed Adobe Stock content and deeply integrated into Photoshop, Illustrator, and Express.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>GTE-Qwen2 7B Instruct</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-gte-qwen2-7b-instruct/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-gte-qwen2-7b-instruct/</guid><description>GTE-Qwen2 7B Instruct is Alibaba DAMO&apos;s 7B-parameter open text-embedding model — topped the MTEB leaderboard at release, built on the Qwen 2 backbone for 3584-dim dense retrieval.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Jamba 1.5 Large</title><link>https://learn.engineering.vips.edu/ai-models/ai21-jamba-1-5-large/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/ai21-jamba-1-5-large/</guid><description>Jamba 1.5 Large is AI21 Labs&apos; open-weights
hybrid SSM-Transformer model — a 398B total / 94B active MoE combining Mamba and attention layers with 256K context.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Marco-o1</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-marco-o1/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-marco-o1/</guid><description>Alibaba&apos;s Marco-o1 is an open-weight reasoning LLM that applies o1-style chain-of-thought search using Monte Carlo Tree Search over reasoning trajectories.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Qwen 2.5 72B Instruct</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-5-72b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-5-72b/</guid><description>Qwen 2.5 72B Instruct is Alibaba&apos;s 2024 open-weights flagship dense model — released under the Qwen licence, matching Llama 3.1 405B on many benchmarks at a 72B footprint.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Qwen2.5-VL 72B</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-5-vl-72b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-5-vl-72b/</guid><description>Qwen2.5-VL 72B is Alibaba&apos;s top-tier open-weights vision-language model — a 72B transformer with agentic UI grounding, long-video understanding, and precise document OCR.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Qwen 2.5 Coder 32B</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-5-coder-32b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-5-coder-32b/</guid><description>Qwen 2.5 Coder 32B is Alibaba&apos;s open-weights coding flagship — a 32B dense model that matched GPT-4o on HumanEval at release and runs on a single H100.</description><pubDate>Mon, 20 Apr
2026 00:00:00 GMT</pubDate></item><item><title>Qwen2.5-Math 72B</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-5-math-72b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-5-math-72b/</guid><description>Qwen2.5-Math 72B is Alibaba&apos;s open-weights math specialist — a 72-billion-parameter Qwen2.5 fine-tune with tool-augmented (Python) reasoning for Olympiad-class problems.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Qwen2-Audio 7B</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-audio-7b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-audio-7b/</guid><description>Qwen2-Audio 7B is Alibaba&apos;s open-weights audio-language model — a 7B transformer that accepts speech, music, and environmental sounds and responds in natural-language text.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Qwen2-VL 72B</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-vl-72b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-qwen-2-vl-72b/</guid><description>Qwen2-VL 72B is Alibaba&apos;s flagship open vision-language model with dynamic-resolution visual encoding, strong OCR, and 20-minute video understanding on the Qwen 2 backbone.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Qwen QwQ 32B</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-qwq-32b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-qwq-32b/</guid><description>Qwen QwQ 32B is Alibaba&apos;s open-weights reasoning model — a 32B dense variant trained with reinforcement learning that competes with DeepSeek R1 at a much smaller footprint.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Qwen 
3</title><link>https://learn.engineering.vips.edu/ai-models/alibaba-qwen-3/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/alibaba-qwen-3/</guid><description>Qwen 3 is Alibaba&apos;s 2025 flagship open-weights family — dense and MoE variants from 0.6B to 235B, Apache 2.0 licensed, with strong multilingual and reasoning behavior.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude 3.5 Haiku</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-5-haiku/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-5-haiku/</guid><description>Claude 3.5 Haiku is Anthropic&apos;s November 2024 small model — fast, cheap, and the first Haiku to match or beat Claude 3 Opus on several coding and reasoning benchmarks.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude 2.1</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-2-1/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-2-1/</guid><description>Claude 2.1 is Anthropic&apos;s late-2023 flagship — introduced the 200K-token context window and improved refusal behaviour. 
Now a legacy model referenced mostly for benchmark comparisons.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude 3.7 Sonnet</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-7-sonnet/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-7-sonnet/</guid><description>Claude 3.7 Sonnet is Anthropic&apos;s February 2025 hybrid reasoning model — the first Claude with extended thinking, mixing fast responses and long chain-of-thought in one model.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude 3 Haiku</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-haiku/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-haiku/</guid><description>Claude 3 Haiku is Anthropic&apos;s original March 2024 small, fast, cheap model — the first Haiku tier, still widely deployed in legacy pipelines despite being surpassed by Haiku 3.5 and 4.5.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude 3 Opus</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-opus/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-opus/</guid><description>Claude 3 Opus is Anthropic&apos;s March 2024 flagship — the original Opus tier that established Claude as a GPT-4-class frontier model with strong long-context and reasoning performance.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude 3.5 Sonnet</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-5-sonnet/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-5-sonnet/</guid><description>Claude 3.5 Sonnet is the June 2024 model that made Claude famous for coding — state-of-the-art SWE-bench at launch, tool use, vision, and the first 
computer-use preview.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude 3 Sonnet</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-sonnet/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-3-sonnet/</guid><description>Claude 3 Sonnet is Anthropic&apos;s March 2024 mid-tier model — the original Sonnet that balanced cost and quality in the Claude 3 launch before 3.5 Sonnet redefined the tier.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude Code</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-code/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-code/</guid><description>Claude Code is Anthropic&apos;s official agentic command-line product — a terminal-first coding agent built on the Claude models, with native tool use, file editing, and git integration.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude Haiku 4.5</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-haiku-4-5/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-haiku-4-5/</guid><description>Claude Haiku 4.5 is Anthropic&apos;s fast, low-cost 2025 model — matches Sonnet 4 on many tasks at about one-third the price and double the speed, ideal for sub-tasks and real-time UX.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude Opus 4.7</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-opus-4-7/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-opus-4-7/</guid><description>Claude Opus 4.7 is Anthropic&apos;s top-tier model for long-context reasoning, code generation, and agentic workflows. 
1M context, native tool use, strong on SWE-bench.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude Sonnet 4.5</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-sonnet-4-5/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-sonnet-4-5/</guid><description>Claude Sonnet 4.5 is Anthropic&apos;s September 2025 Sonnet refresh — a best-in-class coding model at the time with 200K context, extended thinking, and strong agent behaviour.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude Instant 1.2</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-instant-1-2/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-instant-1-2/</guid><description>Claude Instant 1.2 is Anthropic&apos;s 2023 low-latency chat model — the cheap, fast sibling of Claude 1. Deprecated in favour of the Haiku line but still referenced in many legacy apps.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Claude Sonnet 4.6</title><link>https://learn.engineering.vips.edu/ai-models/anthropic-claude-sonnet-4-6/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/anthropic-claude-sonnet-4-6/</guid><description>Claude Sonnet 4.6 is Anthropic&apos;s everyday-workhorse model — balances quality and cost, 1M context, strong coding and tool use, and powers most Claude-based production apps in 2026.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>OpenELM 3B</title><link>https://learn.engineering.vips.edu/ai-models/apple-openelm-3b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/apple-openelm-3b/</guid><description>Apple&apos;s OpenELM 3B is an open, on-device-friendly LLM using layer-wise scaling, released with full training recipe and CoreML export in 2024.</description><pubDate>Mon, 20 Apr 2026 
00:00:00 GMT</pubDate></item><item><title>AssemblyAI Universal-2</title><link>https://learn.engineering.vips.edu/ai-models/assemblyai-universal-2/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/assemblyai-universal-2/</guid><description>AssemblyAI Universal-2 is a batch-first speech-to-text model with state-of-the-art English WER and built-in LeMUR LLM features for summaries, chapters, and Q&amp;A.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Baichuan 4</title><link>https://learn.engineering.vips.edu/ai-models/baichuan-baichuan-4/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/baichuan-baichuan-4/</guid><description>Baichuan Intelligent&apos;s Baichuan 4 is a closed Chinese LLM with 192K context, strong reasoning, and bilingual performance, widely used in Chinese enterprise deployments.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>BAAI BGE-M3</title><link>https://learn.engineering.vips.edu/ai-models/bge-m3/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/bge-m3/</guid><description>BGE-M3 is BAAI&apos;s open-weight multilingual embedding model — one backbone producing dense, sparse, and multi-vector representations for retrieval across 100+ languages with 8K context.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Cartesia Sonic</title><link>https://learn.engineering.vips.edu/ai-models/cartesia-sonic/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/cartesia-sonic/</guid><description>Sonic is Cartesia&apos;s low-latency text-to-speech model built on state-space-model (Mamba-style) architectures — sub-90 ms time-to-first-audio for real-time voice agents.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>BAAI BGE Reranker v2-M3</title><link>https://learn.engineering.vips.edu/ai-models/bge-reranker-v2-m3/</link><guid
isPermaLink="true">https://learn.engineering.vips.edu/ai-models/bge-reranker-v2-m3/</guid><description>BGE Reranker v2-M3 is BAAI&apos;s open-weight multilingual cross-encoder reranker — pairs naturally with BGE-M3 embeddings for a fully open-source RAG pipeline.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Black Forest Labs FLUX.1 [pro]</title><link>https://learn.engineering.vips.edu/ai-models/black-forest-flux-1-pro/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/black-forest-flux-1-pro/</guid><description>FLUX.1 [pro] is Black Forest Labs&apos; flagship closed text-to-image model — state-of-the-art prompt adherence and photorealism, served via bfl.ai and partner APIs.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Aya 23 35B</title><link>https://learn.engineering.vips.edu/ai-models/cohere-aya-23-35b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/cohere-aya-23-35b/</guid><description>Aya 23 35B is Cohere For AI&apos;s 2024 open-weights multilingual model — a 35-billion-parameter decoder built on Command R, tuned across 23 languages.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>BloombergGPT</title><link>https://learn.engineering.vips.edu/ai-models/bloomberg-bloomberg-gpt/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/bloomberg-bloomberg-gpt/</guid><description>BloombergGPT is a 50-billion-parameter finance-specialised LLM trained on Bloomberg&apos;s proprietary financial corpus — a landmark domain model for finance NLP.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Black Forest Labs FLUX.1 [dev]</title><link>https://learn.engineering.vips.edu/ai-models/black-forest-flux-1-dev/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/black-forest-flux-1-dev/</guid><description>FLUX.1 [dev] is Black Forest Labs&apos; 
open-weight 12B diffusion transformer — near-[pro] quality for research and non-commercial use, with a growing LoRA ecosystem.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Aya Expanse 32B</title><link>https://learn.engineering.vips.edu/ai-models/cohere-aya-expanse-32b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/cohere-aya-expanse-32b/</guid><description>Aya Expanse 32B is Cohere For AI&apos;s follow-up multilingual open-weights model — a 32B Command-family decoder covering 23 languages with state-of-the-art per-language quality.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Command R</title><link>https://learn.engineering.vips.edu/ai-models/cohere-command-r/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/cohere-command-r/</guid><description>Command R is Cohere&apos;s RAG-first production LLM — a mid-size model tuned for grounded answers with citations, tool use, and multilingual enterprise deployments.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Command R+</title><link>https://learn.engineering.vips.edu/ai-models/cohere-command-r-plus/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/cohere-command-r-plus/</guid><description>Command R+ is Cohere&apos;s 104B open-weights model purpose-built for RAG and tool-use — strong citation quality and multilingual support under the CC-BY-NC research license.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Cohere Embed v3</title><link>https://learn.engineering.vips.edu/ai-models/cohere-embed-v3/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/cohere-embed-v3/</guid><description>Cohere Embed v3 is a multilingual retrieval embedding model with input-type prompts (search_document, search_query) and strong BEIR scores for enterprise RAG.</description><pubDate>Mon, 20 Apr 2026 
00:00:00 GMT</pubDate></item><item><title>Cohere Rerank 3 (Multilingual)</title><link>https://learn.engineering.vips.edu/ai-models/cohere-rerank-multilingual-v3/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/cohere-rerank-multilingual-v3/</guid><description>Cohere Rerank 3 Multilingual is a cross-encoder reranking model over 100+ languages — reorders retrieval hits by query relevance for RAG and search at low latency.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Cohere Rerank 3</title><link>https://learn.engineering.vips.edu/ai-models/cohere-rerank-3/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/cohere-rerank-3/</guid><description>Cohere Rerank 3 is a cross-encoder reranker for RAG — it scores (query, document) pairs to boost top-k relevance after a first-stage embedding retrieval.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Deepgram Nova-3</title><link>https://learn.engineering.vips.edu/ai-models/deepgram-nova-3/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/deepgram-nova-3/</guid><description>Deepgram Nova-3 is a streaming-first speech-to-text model — sub-300 ms real-time transcription with diarisation, keyterm prompting, and strong accented-English WER.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>DeepMind AlphaProof</title><link>https://learn.engineering.vips.edu/ai-models/deepmind-alphaproof/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/deepmind-alphaproof/</guid><description>AlphaProof is Google DeepMind&apos;s AI math-proof system that achieved silver-medal IMO performance — a Gemini-based model trained with reinforcement learning over Lean 4 theorem-proving environments.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>DBRX
Instruct</title><link>https://learn.engineering.vips.edu/ai-models/databricks-dbrx-instruct/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/databricks-dbrx-instruct/</guid><description>Databricks DBRX Instruct is a 132B-parameter open-weight MoE model (36B active) trained on 12T tokens, optimised for enterprise data and lakehouse RAG.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Google DeepMind AlphaFold 3</title><link>https://learn.engineering.vips.edu/ai-models/deepmind-alphafold-3/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/deepmind-alphafold-3/</guid><description>AlphaFold 3 is Google DeepMind&apos;s biology model that predicts joint structures of proteins, DNA, RNA, ligands, and ions — a step-change for drug-discovery workflows.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>DeepSeek Coder 33B Instruct</title><link>https://learn.engineering.vips.edu/ai-models/deepseek-coder-33b-instruct/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/deepseek-coder-33b-instruct/</guid><description>DeepSeek Coder 33B Instruct is DeepSeek AI&apos;s 2023 open-weights coding LLM — a 33B dense decoder trained on 2T tokens of code, fluent in 80+ programming languages.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Janus Pro 7B</title><link>https://learn.engineering.vips.edu/ai-models/deepseek-janus-pro-7b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/deepseek-janus-pro-7b/</guid><description>Janus Pro 7B is DeepSeek AI&apos;s open-weights unified multimodal model — a 7B transformer that both understands and generates images through decoupled visual encoders.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>DeepSeek Coder V2</title><link>https://learn.engineering.vips.edu/ai-models/deepseek-coder-v2/</link><guid 
isPermaLink="true">https://learn.engineering.vips.edu/ai-models/deepseek-coder-v2/</guid><description>DeepSeek Coder V2 is DeepSeek AI&apos;s open-weights coding flagship — a 236B-parameter MoE (21B active) that matched closed frontier coding models on HumanEval and LiveCodeBench at release.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item><item><title>DeepSeek LLM 67B</title><link>https://learn.engineering.vips.edu/ai-models/deepseek-llm-67b/</link><guid isPermaLink="true">https://learn.engineering.vips.edu/ai-models/deepseek-llm-67b/</guid><description>DeepSeek LLM 67B is DeepSeek AI&apos;s 2023 general-purpose open-weights model — a 67-billion-parameter dense decoder that served as the bilingual Chinese/English foundation for later DeepSeek releases.</description><pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate></item></channel></rss>