Labs & Sandbox
Hands-on coding environments with GPU access, 500+ datasets, and weekly challenges.
Guided Notebooks
Step-by-step interactive notebooks with instructions and checkpoints.
120 available
Free Sandbox
Open coding environment with GPU access. Build whatever you want.
Challenge Labs
Timed problem-solving challenges. Test your skills under pressure.
85 available
Pair Coding
Real-time collaboration with a peer. Learn together, build together.
API Playground
Compare Claude, GPT, Llama, and more side-by-side. Test prompts across models.
Competition Arena
Kaggle-style ML competitions with leaderboards and prizes.
12 available
Browser-Based IDE
GPU: A10G
Python 3.11
# Cell [1] - Build Multi-Head Attention
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.n_heads = n_heads
        self.d_k = d_model // n_heads  # per-head dimension
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
Shift+Enter
Output
MultiHeadAttention(
  (W_q): Linear(in_features=512, out_features=512, bias=True)
  (W_k): Linear(in_features=512, out_features=512, bias=True)
  (W_v): Linear(in_features=512, out_features=512, bias=True)
)
Parameters: 787,968
GPU Memory: 3.2 MB
AI Tutor
Great implementation! Your attention dimensions look correct. Next, implement the forward() method with scaled dot-product attention.
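The tutor's suggested next step can be sketched as follows. This is a minimal, hedged completion of the cell above: the head-splitting shapes and the `W_o` output projection are assumptions (the notebook only defines `W_q`/`W_k`/`W_v`), not the platform's reference solution.

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_k = d_model // n_heads  # per-head dimension
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)  # assumed output projection

    def forward(self, x):
        B, T, d_model = x.shape
        # Project, then split into heads: (B, T, d_model) -> (B, n_heads, T, d_k)
        q = self.W_q(x).view(B, T, self.n_heads, self.d_k).transpose(1, 2)
        k = self.W_k(x).view(B, T, self.n_heads, self.d_k).transpose(1, 2)
        v = self.W_v(x).view(B, T, self.n_heads, self.d_k).transpose(1, 2)
        # Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        attn = scores.softmax(dim=-1)
        out = attn @ v
        # Merge heads back: (B, n_heads, T, d_k) -> (B, T, d_model)
        out = out.transpose(1, 2).contiguous().view(B, T, d_model)
        return self.W_o(out)
```

The output keeps the input shape, so a quick shape check (`x` of shape `(batch, seq_len, d_model)` in, same shape out) is a reasonable first test before the challenge timer starts.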
Active Challenges
Updated weekly
Build a Better Chatbot
RAG
LLMs
NLP
847 joined
3 months Pro
Image Classification Sprint
CNN
Vision
PyTorch
1,203 joined
GPU credits
Optimize the Transformer
Optimization
Transformers
CUDA
342 joined
Mentor session
Dataset Library
500+ datasets
ImageNet Subset
Vision
2.1 GB
100K images
Common Crawl NLP
NLP
850 MB
1.2M documents
Financial Fraud
Tabular
420 MB
6.3M transactions
Medical Imaging
Vision
3.4 GB
50K scans
Customer Reviews
NLP
180 MB
500K reviews
IoT Sensor Data
Time Series
1.1 GB
10M readings
Speech Commands
Audio
2.3 GB
105K clips
E-Commerce Events
Tabular
670 MB
4.5M events