✨ Production-Ready Deep Learning Framework

Build Neural Networks From Scratch

A lightweight, educational deep learning framework built with pure Python and NumPy. No PyTorch. No TensorFlow. Just clean, understandable implementations of neural networks, automatic differentiation, and optimization algorithms.

~3K lines of code · 1 dependency (NumPy) · 100% pure Python
Python 3.8+ · MIT License · Production Ready

Why MicroGrad-Plus?

Everything you need to understand deep learning from the ground up

Automatic Differentiation

Full computational graph with reverse-mode autodiff. See exactly how backpropagation works under the hood.
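
As a taste of what that looks like at the Tensor level, here is a minimal sketch. Only the Tensor import and the backward() call are confirmed by the Quick Start below; constructing a Tensor from a plain float and reading gradients from a .grad attribute are assumptions about the API.

from micrograd import Tensor

# Forward pass builds the computational graph: z = x * y + x
x = Tensor(2.0)     # Tensor-from-float construction is assumed
y = Tensor(3.0)
z = x * y + x

# Reverse-mode pass walks the graph backwards and accumulates gradients
z.backward()

print(x.grad)       # dz/dx = y + 1 = 4.0  (the .grad attribute is assumed)
print(y.grad)       # dz/dy = x     = 2.0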

Neural Network Layers

Linear, Conv2D, BatchNorm, Dropout, and more. Build complex architectures with ease.
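
As a sketch of how those blocks compose, the model below strings them together in the same Sequential style as the Quick Start. Only Linear(in_features, out_features) and ReLU() are confirmed by this page; the import location and constructor arguments of Conv2D, BatchNorm, Flatten, and Dropout are assumptions.

from micrograd.nn import Sequential, Conv2D, BatchNorm, ReLU, Flatten, Dropout, Linear

# A small convolutional classifier for 1x28x28 inputs.
model = Sequential(
    Conv2D(1, 8, kernel_size=3),   # 1 input channel -> 8 feature maps (signature assumed)
    BatchNorm(8),                  # normalize each feature map (signature assumed)
    ReLU(),
    Flatten(),                     # collapse feature maps into a flat vector
    Dropout(0.5),                  # randomly drop half the units during training (assumed)
    Linear(8 * 26 * 26, 10),       # a 3x3 conv with no padding turns 28x28 into 26x26
)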

Advanced Optimizers

SGD with momentum, Adam, RMSprop, AdaGrad. All implemented from scratch with clear math.
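
Because every optimizer is built from model.parameters() and a learning rate, as the Quick Start shows for Adam, swapping one for another is a one-line change. In this sketch the momentum keyword on SGD and the exact constructors of RMSprop and AdaGrad are assumptions.

from micrograd.nn import Sequential, Linear, ReLU
from micrograd.optim import SGD, Adam, RMSprop, AdaGrad

model = Sequential(Linear(2, 16), ReLU(), Linear(16, 1))

optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)   # momentum kwarg is assumed
# optimizer = Adam(model.parameters(), lr=0.01)             # as used in the Quick Start
# optimizer = RMSprop(model.parameters(), lr=0.001)         # lr values are illustrative
# optimizer = AdaGrad(model.parameters(), lr=0.01)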

Clean, Readable Code

Production-quality Python with extensive documentation, type hints, and meaningful variable names.

Educational Focus

Learn by reading. Every implementation is designed to be understood, not obfuscated.

Fully Tested

Comprehensive unit tests verify correctness. All gradients validated against known results.
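
A finite-difference check is the standard way to validate autograd output, and such a test could look roughly like the sketch below. It assumes a Tensor can wrap a plain float and exposes .data and .grad attributes, none of which this page confirms.

from micrograd import Tensor

def f(t):
    # Any expression built from Tensor arithmetic works here.
    return t * t + t

x = Tensor(3.0)
f(x).backward()                      # analytic gradient via reverse-mode autodiff

eps = 1e-6                           # central difference: (f(x+eps) - f(x-eps)) / (2*eps)
numeric = (f(Tensor(3.0 + eps)).data - f(Tensor(3.0 - eps)).data) / (2 * eps)

assert abs(x.grad - numeric) < 1e-4  # df/dx = 2x + 1 = 7; both estimates should agree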

Quick Start

Get up and running in minutes

Step 1: Clone the Repository

git clone https://github.com/yourusername/micrograd-plus.git
cd micrograd-plus
pip install numpy
Step 2: Build Your First Model

from micrograd import Tensor
from micrograd.nn import Linear, Sequential, ReLU
from micrograd.optim import Adam
from micrograd.nn.losses import MSELoss

# Create a simple neural network
model = Sequential(
    Linear(2, 16),
    ReLU(),
    Linear(16, 1)
)

# Setup training
optimizer = Adam(model.parameters(), lr=0.01)
criterion = MSELoss()
Step 3: Train Your Model
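
The loop below expects X and y to already be Tensors holding the training inputs and targets. Here is a minimal sketch of building a toy regression dataset for it, assuming the Tensor constructor accepts a NumPy array (that constructor isn't shown on this page).

import numpy as np
from micrograd import Tensor

# Toy regression data: y = 3*x1 - 2*x2 + noise
X_np = np.random.randn(100, 2)
y_np = X_np @ np.array([[3.0], [-2.0]]) + 0.1 * np.random.randn(100, 1)

X = Tensor(X_np)   # Tensor-from-ndarray construction is assumed
y = Tensor(y_np)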

# Training loop
for epoch in range(100):
    # Forward pass
    predictions = model(X)
    loss = criterion(predictions, y)

    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    print(f"Epoch {epoch}, Loss: {loss.item():.4f}")

Working Examples

Real implementations you can run right now

Linear Regression

Learn the basics with a simple regression problem. Demonstrates tensor operations, model training, and gradient descent.

Beginner · Regression
python examples/simple_regression.py

XOR Problem

Classic neural network challenge. Shows non-linear decision boundaries, multi-layer networks, and binary classification.

Intermediate · Classification
python examples/xor_problem.py
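
The example file is the authoritative version; for a feel of what it involves, here is a hedged inline sketch. The Sigmoid and BCELoss import paths are modeled on the Quick Start imports but are assumptions, as is the Tensor-from-ndarray constructor.

import numpy as np
from micrograd import Tensor
from micrograd.nn import Sequential, Linear, ReLU, Sigmoid   # Sigmoid location assumed
from micrograd.nn.losses import BCELoss                      # BCELoss location assumed
from micrograd.optim import Adam

# The four XOR input/target pairs
X = Tensor(np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float))
y = Tensor(np.array([[0], [1], [1], [0]], dtype=float))

model = Sequential(Linear(2, 8), ReLU(), Linear(8, 1), Sigmoid())
optimizer = Adam(model.parameters(), lr=0.01)
criterion = BCELoss()

for epoch in range(2000):
    loss = criterion(model(X), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()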

MNIST Classifier

Digit recognition with dropout regularization. Multi-class classification using the Adam optimizer and batch training.

Advanced · Computer Vision
python examples/mnist_classifier.py

Architecture Overview

Clean, modular design built for learning

Tensor (516 lines)

Multi-dimensional arrays with automatic differentiation. The core building block for all operations.

Arithmetic ops · Matrix ops · Autograd · Broadcasting
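
A sketch of what those four tags mean in practice follows; the Tensor(ndarray) constructor, the @ operator, and the .sum() reduction are assumptions not shown elsewhere on this page.

import numpy as np
from micrograd import Tensor

a = Tensor(np.ones((4, 3)))           # Tensor-from-ndarray construction is assumed
b = Tensor(np.array([1.0, 2.0, 3.0]))

c = a * b + 1.0                       # element-wise arithmetic with (4,3) * (3,) broadcasting
d = c @ Tensor(np.ones((3, 2)))       # matrix multiply (assumes @ is supported)

loss = d.sum()                        # reduce to a scalar (assumes a .sum() method)
loss.backward()                       # autograd: gradients flow back through every op above
print(a.grad.shape)                   # expected (4, 3), assuming .grad mirrors the data shape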

Neural Network Layers (460 lines)

Building blocks for neural networks: Linear, Conv2D, BatchNorm, Dropout, Flatten, Sequential.

Linear · Conv2D · BatchNorm · Dropout

Activation Functions (255 lines)

Non-linear transformations: ReLU, LeakyReLU, ELU, Sigmoid, Tanh, Softmax.

ReLU · Sigmoid · Tanh · Softmax

Loss Functions (307 lines)

Training objectives: MSELoss, CrossEntropyLoss, BCELoss, BCEWithLogitsLoss.

MSE · Cross Entropy · BCE
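
For classification, one of the listed losses takes the place of MSELoss from the Quick Start. This sketch assumes CrossEntropyLoss lives alongside MSELoss in micrograd.nn.losses and accepts raw logits with integer class labels; neither detail is confirmed here.

import numpy as np
from micrograd import Tensor
from micrograd.nn.losses import CrossEntropyLoss

criterion = CrossEntropyLoss()

logits = Tensor(np.random.randn(8, 10))                # batch of 8 samples, 10 classes
targets = Tensor(np.array([3, 1, 0, 7, 2, 9, 4, 4]))   # integer labels (target format assumed)

loss = criterion(logits, targets)
loss.backward()
print(loss.item())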

Optimizers (377 lines)

Gradient-based optimization: SGD, Adam, RMSprop, AdaGrad with learning rate scheduling.

SGD · Adam · RMSprop · AdaGrad
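
The learning rate scheduling mentioned above isn't demonstrated anywhere on this page, so the sketch below falls back to manual step decay and assumes the optimizer exposes a mutable lr attribute; the framework's own scheduling helpers, whatever they are named, would be the proper tool.

from micrograd.nn import Sequential, Linear, ReLU
from micrograd.optim import SGD

model = Sequential(Linear(2, 16), ReLU(), Linear(16, 1))
optimizer = SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    # Step decay: halve the learning rate every 30 epochs.
    if epoch > 0 and epoch % 30 == 0:
        optimizer.lr *= 0.5          # the lr attribute name is an assumption
    # ... forward pass, loss.backward(), optimizer.step() as in the Quick Start ...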

Utilities (380 lines)

Helper functions: DataLoader, train_test_split, early stopping, gradient clipping, model save/load.

DataLoader · Early Stopping · Save/Load
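
A sketch of how the listed helpers might wire together; the micrograd.utils import path and every signature below are assumptions based only on the names in this card.

import numpy as np
from micrograd.utils import DataLoader, train_test_split   # module path is assumed

X, y = np.random.randn(200, 2), np.random.randn(200, 1)

# Hold out 20% for evaluation, then iterate the rest in shuffled mini-batches.
# Argument names (test_size, batch_size, shuffle) are assumptions.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
loader = DataLoader(X_train, y_train, batch_size=32, shuffle=True)

for xb, yb in loader:
    ...   # forward pass, loss, backward, optimizer.step() as in the Quick Start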

Documentation

Everything you need to get started

Ready to Dive Deep?

Start building neural networks from scratch and truly understand how deep learning works