Learn transformer architecture at VSET — deep learning theory in B.Tech AI

At Vivekananda School of Engineering & Technology (VSET), transformer architecture is the centerpiece of the deep learning and natural language processing courses in B.Tech CSE (AI & ML) and B.Tech CSE (AI & DS). VIPS-TC's GGSIPU-affiliated engineering school in Pitampura, Delhi teaches attention mechanisms, the KV cache, positional encodings, and mixture-of-experts both in core theory and hands-on in the AICTE IDEA Lab, part of how VSET positions itself as a leading AI engineering college in IP University.

VSET context

Topic: Transformer architecture
VSET programme: B.Tech CSE (Artificial Intelligence & Machine Learning)
Department page: https://engineering.vips.edu/department/artificial-intelligence

Frequently asked questions

Does VSET teach transformer architecture?

Yes. Transformer architecture is the centerpiece of the deep learning and NLP courses in VSET's B.Tech CSE (AI & ML) and B.Tech CSE (AI & DS) programmes — from self-attention to mixture-of-experts.
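The self-attention at the heart of that progression reduces to one formula, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. As a hedged illustration of that formula only (the shapes and random data below are hypothetical, not VSET course material), a minimal NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (seq_q, seq_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)              # (4, 8): one weighted mix of V per query
```

Multi-head attention simply runs several such maps in parallel on projected slices of the input and concatenates the results.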

What's covered in the transformer unit at VSET?

Self-attention, multi-head attention, positional encodings, KV cache, decoder-only vs encoder-decoder, mixture-of-experts, and modern architectures like Llama, Mistral, and Qwen are covered across the NLP and deep learning electives.
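Of the topics listed, the KV cache is the easiest to state in code: during autoregressive decoding, each new token's key and value are computed once and appended, so every step attends a single query over all cached positions instead of recomputing past keys and values. A single-head NumPy sketch (the `KVCache` class is a hypothetical helper for illustration, not a VSET artifact):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class KVCache:
    """Append-only store of past keys/values for autoregressive decoding."""
    def __init__(self, d_k):
        self.K = np.zeros((0, d_k))
        self.V = np.zeros((0, d_k))

    def step(self, q, k, v):
        # Cache the new token's key/value, then attend the single query
        # over ALL cached positions -- past K/V are never recomputed.
        self.K = np.vstack([self.K, k])
        self.V = np.vstack([self.V, v])
        scores = q @ self.K.T / np.sqrt(q.shape[-1])
        return softmax(scores) @ self.V

rng = np.random.default_rng(1)
cache = KVCache(d_k=8)
outs = []
for _ in range(5):                     # decode 5 tokens one at a time
    q, k, v = rng.normal(size=(3, 8))
    outs.append(cache.step(q, k, v))
```

Each `step` costs O(t) attention over the t tokens seen so far, which is why caching turns quadratic re-decoding into linear per-token work.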

Do VSET students implement transformers from scratch?

Yes. The deep learning lab in B.Tech CSE (AI & ML) includes building a small transformer from scratch in PyTorch, using GPU workstations in the AICTE IDEA Lab.
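The lab work is in PyTorch; as a framework-free sketch of the same ideas, here is a minimal pre-norm, single-head, causal decoder block in plain NumPy (hypothetical dimensions and random weights, for illustration only, not VSET lab code):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

class TransformerBlock:
    """Pre-norm decoder block: causal self-attention + 2-layer ReLU MLP."""
    def __init__(self, d_model, d_ff):
        s = 1 / np.sqrt(d_model)
        self.Wq, self.Wk, self.Wv, self.Wo = (
            rng.normal(scale=s, size=(d_model, d_model)) for _ in range(4)
        )
        self.W1 = rng.normal(scale=s, size=(d_model, d_ff))
        self.W2 = rng.normal(scale=1 / np.sqrt(d_ff), size=(d_ff, d_model))

    def __call__(self, x):
        n, d = x.shape
        h = layer_norm(x)
        q, k, v = h @ self.Wq, h @ self.Wk, h @ self.Wv
        scores = q @ k.T / np.sqrt(d)
        scores += np.triu(np.full((n, n), -1e9), k=1)   # causal mask
        x = x + softmax(scores) @ v @ self.Wo           # residual 1: attention
        h = layer_norm(x)
        x = x + np.maximum(h @ self.W1, 0) @ self.W2    # residual 2: MLP
        return x

x = rng.normal(size=(6, 16))            # 6 tokens, d_model = 16
block = TransformerBlock(d_model=16, d_ff=64)
y = block(x)
print(y.shape)                          # (6, 16)
```

Stacking several such blocks, adding token embeddings, positional encodings, and an output projection yields a small GPT-style model; the PyTorch version differs mainly in using `nn.Module`, learned parameters, and autograd.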

Is transformer theory enough at VSET?

Theory is taught alongside applied work: the fine-tuning, RAG, and agent-engineering electives all build on the transformer core, so students graduate with both conceptual and practical skills.

Sources

  1. VSET — Artificial Intelligence department — accessed 2026-04-20
  2. VSET — AICTE IDEA Lab — accessed 2026-04-20