The Minimalist Builder

Phase 3: Building your first real-world API with Flask. Learn the power of micro-frameworks—where you control every line of code, adding only what you need.

PRO TIP: THE ANTI-BLOAT PRINCIPLE

In Machine Learning, your server resources (RAM/GPU) are expensive. You don't want a "heavy" framework that consumes hundreds of megabytes of RAM just to start. Flask starts in milliseconds and uses only a few megabytes of memory, leaving all the power for your AI models.

The Minimalist Mindset

BUILDER LOGIC

Building a learning path means starting with the atomic units. In Phase 3, we skip the "magical" features and focus on **Routes**, **Requests**, and **Responses**. If you understand these three, you can build any web service.

Why Prototyping is 80% of ML

Data science is experimental. You might try 10 different models in a day. You need a way to wrap those models in an API *instantly*. Flask is the "Scaffold." It is so simple that you can write a functional ML API in 7 lines of code. This allows you to test your ideas with real users before spending weeks on a "Production" architecture like FastAPI.

The "Custom Framework" Challenge: Before we use Flask, we must understand what it's doing behind the scenes. A web framework is just a "Router." It looks at the URL (e.g., /predict) and decides which Python function to call. In this module, we will build a basic router using just a Python Dictionary to demystify the "Magic."
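Here is that challenge as a minimal sketch: a "router" built from nothing but a Python dictionary mapping paths to handler functions. The route paths and handler names are illustrative, not Flask's API.

```python
# A toy router: the core idea behind every web framework.

def show_home():
    return "Welcome home!"

def predict():
    return "prediction ready"

# The "routing table": URL path -> Python function
ROUTES = {
    "/": show_home,
    "/predict": predict,
}

def handle_request(path):
    handler = ROUTES.get(path)  # a plain dictionary lookup
    if handler is None:
        return "404 Not Found"  # no matching route
    return handler()
```

Call `handle_request("/predict")` and the dictionary dispatches to the right function. Everything Flask adds (dynamic URL segments, HTTP methods, request parsing) is layered on top of this one idea.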

How Routing Works (Internals)

TECHNICAL DEEP-DIVE

Flask uses **Decorators** (from Phase 1) to map URLs to functions. When the server receives a request, it consults its "Routing Table" to find a match. For a fixed path like /home this is essentially a dictionary lookup; for dynamic paths like /users/&lt;id&gt;, Flask's underlying library (Werkzeug) matches URL patterns and extracts the parameters for you.

[ INCOMING REQUEST ] ----> [ PATH: "/users/42" ]
                                    |
                           [ ROUTING TABLE ]
                            +-- "/home"        --> show_home()
                            +-- "/users/<id>"  --> get_user(42)   <--- [ MATCH FOUND ]
                                    |
                              [ EXECUTION ]
                                    |
                              [ RESPONSE ] <--- "Hello, User 42!"
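The decorator side of this is worth demystifying too. A sketch of how a `@route(...)` decorator can fill the routing table at import time (this is the general pattern, not Flask's actual internals; names are illustrative):

```python
# Minimal sketch of decorator-based route registration.
routes = {}

def route(path):
    def decorator(func):
        routes[path] = func  # register the handler when the module is imported
        return func          # return the function unchanged
    return decorator

@route("/home")
def show_home():
    return "Hello, Home!"
```

By the time your server starts handling traffic, `routes` already maps "/home" to `show_home`, because the decorator ran when Python imported the module.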

Flask Core API

PRACTICAL MASTERY

Mastering Flask means mastering the request object. It contains everything about the incoming visitor: their IP, their browser type, and most importantly, the JSON data they sent to your AI model.

app.py
from flask import Flask, request, jsonify

app = Flask(__name__)

# Basic ML Mock Endpoint
@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json(silent=True)  # Parsed JSON body, or None if missing/invalid

    # Validate the input before passing it to the model
    if data is None or "features" not in data:
        return jsonify({"error": "Missing features"}), 400

    return jsonify({
        "status": "success",
        "prediction": [0.12, 0.45, 0.88]  # Mock output; a real app would call model.predict()
    })

if __name__ == "__main__":
    app.run(debug=True)  # Development server only -- never enable debug mode in production

Model Saving (Joblib & Pickle)

ML PRODUCTION

In your Jupyter Notebook, your model is a variable in RAM. When you close the notebook, the model is gone. To use it in a Flask API, we must **Serialize** it—save it as a binary file on the disk.

Serialization Architecture

**Pickle** is Python's built-in tool for saving objects. However, for ML models (like Scikit-Learn), we typically use **Joblib**. Why? Joblib is optimized for large NumPy arrays (the weights of your model): it serializes them efficiently and can memory-map them at load time, so your model loads faster when your Flask server starts.
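The workflow looks like this. A sketch using the built-in pickle module, with a plain dictionary standing in for a fitted model (with a real Scikit-Learn estimator, `joblib.dump` / `joblib.load` follow the same dump-then-load pattern):

```python
import os
import pickle
import tempfile

# Stand-in for a trained model (in real code: a fitted scikit-learn estimator)
model = {"weights": [0.1, 0.2, 0.7], "bias": 0.05}

# Serialize the model to a binary file on disk
path = os.path.join(tempfile.gettempdir(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# Later, at Flask startup: load the model ONCE, before handling requests
with open(path, "rb") as f:
    loaded_model = pickle.load(f)
```

Load the model at startup, not inside the request handler: deserialization is slow, and you want every /predict call to reuse the same in-memory object.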

Phase 3 Mastery Verified

The Micro-Builder is Ready

You have built your first prototype. You know how to route requests, handle JSON, and load saved ML models. Now, it's time to move toward the high-performance, asynchronous future.

ENTER PHASE 4: THE MODERN AI BACKEND →