MODULE 6.3

Coding with Claude API

LLM APIs: These allow your backend to "think." You send a prompt, and Claude sends back text, structured data, or code. We use the official anthropic Python library.

```python
import anthropic

# Reads the API key from the ANTHROPIC_API_KEY environment variable.
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)
print(message.content[0].text)
```

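The response's `content` field is a list of blocks, each with a `type` and (for text blocks) a `text` attribute. A minimal helper to join all text blocks, sketched here against a stand-in class so it runs without an API key (the `TextBlock` dataclass below mimics the SDK's shape and is ours, not the library's):

```python
from dataclasses import dataclass

@dataclass
class TextBlock:
    """Stand-in for the SDK's text content block (has .type and .text)."""
    type: str
    text: str

def extract_text(content):
    """Join the text of every text block in a response's content list."""
    return "".join(block.text for block in content if block.type == "text")

# Example with a stand-in response:
content = [TextBlock(type="text", text="Hello! How can I help?")]
print(extract_text(content))  # → Hello! How can I help?
```

In real code you would pass `message.content` from the SDK response straight into `extract_text`.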
MODULE 6.4

Async ML Processing

CRITICAL: ML inference (NumPy/PyTorch) is CPU-intensive. If you run it directly inside an async def endpoint, it blocks the event loop and stalls every other request. Always offload it with run_in_executor or hand it to a background worker like Celery.
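A minimal sketch of the run_in_executor pattern, with `run_inference` standing in for a real model call (NumPy/PyTorch kernels release the GIL, so the default thread pool is usually enough; for pure-Python CPU work you would use a ProcessPoolExecutor instead):

```python
import asyncio

def run_inference(n: int) -> int:
    # Stand-in for a CPU-bound model call (e.g. a forward pass).
    return sum(i * i for i in range(n))

async def handle_request(n: int) -> int:
    loop = asyncio.get_running_loop()
    # Offload the blocking work to the default thread pool so the
    # event loop stays free to serve other requests meanwhile.
    return await loop.run_in_executor(None, run_inference, n)

print(asyncio.run(handle_request(1_000)))  # → 332833500
```

For long-running jobs (seconds or more), a queue-backed worker such as Celery is still the better fit, since executor threads are tied to the web process's lifetime.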