Technology Apr 23, 2026 · 29 min read

Cursor Rules for Django: The Complete Guide to AI-Assisted Django Development


DEV Community
by Olivia Craft

Django is the framework that lets you ship a working CRUD app in a weekend and a smoldering production fire in a month. The first regression is almost always a query: a templated list page that prints {{ order.customer.name }} for every row in a queryset that did not select_related("customer"), generating one SELECT per row, and then someone scales the page from twenty rows to two thousand and the request times out at the load balancer. The second is a Model.save() override that calls requests.post() to a third-party webhook synchronously inside a transaction — every signup blocks for the duration of someone else's API. The third is a migration that adds a NOT NULL column to a fifty-million-row table during a deploy, because nobody noticed Django would run it as a single one-shot ALTER TABLE instead of a chunk-batched RunPython backfill.

Then you add an AI assistant.

Cursor and Claude Code were trained on Django code that spans two decades — Django 1.0 tutorials, function-based views from Django 1.4, pre-2.0 objects.get_or_create idioms, REST framework patterns from before viewsets, signals abused as a "loose coupling" hammer for synchronous cross-module calls, and settings.py as a single 600-line module with secrets pasted next to DEBUG = True. Ask for "an endpoint that creates an order with line items," and you get a function-based view with request.POST["customer_id"] parsing, Order.objects.create(...) followed by a Python loop calling LineItem.objects.create(...) per item (no bulk_create, no transaction), a try/except: pass around the whole thing, and a return of HttpResponse("ok") with status 200 even when the database raised an IntegrityError. It runs. It is not the Django you should ship in 2026.

The fix is .cursorrules — one file in the repo that tells the AI what idiomatic modern Django looks like. Eight rules follow, each with the failure mode, the rule text, and a before/after example; a copy-pasteable .cursorrules closes the post.

How Cursor Rules Work for Django Projects

Cursor reads project rules from two locations: .cursorrules (a single file at the repo root, still supported) and .cursor/rules/*.mdc (modular files with frontmatter, recommended for anything bigger than a tutorial blog). For Django I recommend modular rules so the admin app's conventions don't bleed into the public API and so per-app teams can own their own slice:

.cursor/rules/
  django-models.mdc        # fat models, manager methods, query discipline
  django-views.mdc         # CBV preference, DRF viewsets, permissions
  django-migrations.mdc    # zero-downtime patterns, RunPython rules
  django-settings.mdc      # layered config, secrets via env, never DEBUG=True in prod
  django-testing.mdc       # pytest-django, factories, no .objects.create in tests
  django-celery.mdc        # idempotent tasks, retries, no DB session in tasks

Frontmatter controls activation: globs: ["**/*.py", "**/templates/**/*.html"] with alwaysApply: false. Now the rules.
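A minimal `.mdc` header, as a sketch — the description text, glob values, and rule lines here are illustrative, not a prescribed format beyond the frontmatter keys Cursor documents:

```
---
description: "Model-layer conventions: fat models, managers, query discipline"
globs: ["**/models.py", "**/managers.py", "**/querysets.py"]
alwaysApply: false
---

- Domain logic lives on the model, a manager, or app/services.py — never in views.
- Every queryset that crosses a relation declares select_related / prefetch_related.
```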

Rule 1: Fat Models, Thin Views — Domain Logic Lives on the Model or in a Service, Never in the View

The most common AI failure in Django is "the view is the application." Cursor returns a class-based or function-based view that reads request data, computes business logic inline (taxes, discounts, eligibility checks, side effects), writes to four different models, and then renders a response. Six months later, the same logic needs to run from a Celery task or a management command, and the only way to reuse it is to call the view function with a fake Request object. The Django convention since 2010 has been "fat models, thin views," extended in modern projects to "thin views, fat services, dumb models that hold state and validate themselves."

The rule:

Views own one job: parse input, call a service or model method, return a response.
A view longer than 30 lines is a smell.

Domain operations live in one of:
  - A model method (when the operation is on a single instance and is
    inherently about that record's state: `order.cancel()`, `user.deactivate()`)
  - A custom Manager / QuerySet method (when the operation is a query:
    `Order.objects.outstanding()`, `Order.objects.for_customer(c)`)
  - A service function in `app/services.py` (when it spans multiple
    models or external systems: `services.checkout(cart, payment_method)`)

Models validate their own state in `clean()` and via field-level validators.
`save()` calls `full_clean()` before persisting in any path the user can hit.
Never put `requests.post(...)` or any blocking I/O in `save()`.

Form / serializer validation is for shape; domain validation is on the
model. The two layers do not duplicate each other.

Views never reach across apps' internals. App A imports `apps.b.services`,
not `apps.b.models.SomeModel.objects.filter(...)`.

Before — view does everything, no reuse path:

def create_order(request):
    customer_id = request.POST.get("customer_id")
    items = request.POST.getlist("items")
    customer = Customer.objects.get(id=customer_id)
    total = 0
    for item_id in items:
        product = Product.objects.get(id=item_id)
        total += product.price
    if total > customer.credit_limit:
        return HttpResponse("over limit", status=400)
    order = Order.objects.create(customer=customer, total=total)
    for item_id in items:
        LineItem.objects.create(order=order, product_id=item_id)
    requests.post("https://hooks.example.com/order-created", json={"id": order.id})
    return HttpResponse(f"created {order.id}")

Eligibility check, totals math, persistence, and a webhook all in the view. None of it is callable from anywhere else.

After — view delegates to a service, model owns its lifecycle:

# orders/services.py
@transaction.atomic
def place_order(*, customer: Customer, product_ids: list[int]) -> Order:
    products = Product.objects.filter(id__in=product_ids).only("id", "price")
    total = sum(p.price for p in products)
    if not customer.can_charge(total):
        raise OverCreditLimit(customer=customer, attempted=total)
    order = Order.objects.create(customer=customer, total=total)
    LineItem.objects.bulk_create(
        [LineItem(order=order, product_id=p.id, price=p.price) for p in products]
    )
    transaction.on_commit(lambda: notify_order_created.delay(order.id))
    return order

# orders/views.py
class CreateOrderView(LoginRequiredMixin, View):
    def post(self, request: HttpRequest) -> JsonResponse:
        form = OrderForm(request.POST)
        if not form.is_valid():
            return JsonResponse({"errors": form.errors}, status=400)
        try:
            order = place_order(customer=request.user.customer, product_ids=form.cleaned_data["items"])
        except OverCreditLimit:
            return JsonResponse({"error": "over_credit_limit"}, status=400)
        return JsonResponse({"id": order.id}, status=201)

Service is unit-testable without a request, callable from a Celery task, callable from a management command. Webhook is dispatched after commit, never inside the transaction.

Rule 2: Query Discipline — select_related, prefetch_related, only, and iterator()

The N+1 query is the Django bug that ships to production more than any other. Cursor writes Order.objects.all(), templates iterate {{ order.customer.name }}{{ order.shipping_address.city }}, and that is two extra SELECTs per row — hundreds of queries to render a hundred orders, thousands once line items are traversed too. The fix is mechanical: every queryset that crosses a relation must declare it. Lists that exceed a few thousand rows must use iterator(). Big queries that only need a column or two must use only() or values(). The rule below codifies the discipline.

The rule:

Every queryset whose results cross a ForeignKey / OneToOneField in
templates, serializers, or downstream code MUST declare:
  - `select_related("customer", "shipping_address")` for FK / OneToOne
  - `prefetch_related("items", Prefetch("items", queryset=...))` for M2M
    and reverse FK

`Model.objects.all()` followed by a Python loop over more than 1,000 rows
must use `.iterator(chunk_size=...)`. Materializing to a list is forbidden
on unbounded queries.

When only a few fields are needed, use `.only("id", "status")` (returns
model instances) or `.values("id", "status")` (returns dicts) — never
fetch every column and discard 90% of them.

Counts use `.count()`, not `len(queryset)`. Existence uses `.exists()`,
not `bool(queryset)`. First-row fetches use `.first()` with a None check,
not `[0]` with a `try/except IndexError`.

Aggregations live in the database: `.aggregate(Sum("total"))`, not a
Python `sum()` over a queryset. The same for `Avg`, `Count`, `Max`.

Bulk writes use `.bulk_create(..., batch_size=500, ignore_conflicts=...)`
and `.bulk_update(..., fields=[...], batch_size=500)`. A loop of `.save()`
calls in a `for` over more than ~20 items is a code-review reject.

`assertNumQueries` is mandatory in tests for any view that lists data.

Before — N+1 in a list view, full Python aggregation, full-row fetch:

def order_list(request):
    orders = Order.objects.all()  # N+1 in template on customer + items
    total_revenue = sum(o.total for o in orders)  # Python sum over potentially millions
    return render(request, "orders/list.html", {"orders": orders, "total": total_revenue})

After — relations declared, aggregate in DB, only the needed fields:

class OrderListView(LoginRequiredMixin, ListView):
    template_name = "orders/list.html"
    paginate_by = 50
    context_object_name = "orders"

    def get_queryset(self) -> QuerySet[Order]:
        return (
            Order.objects
            .select_related("customer", "shipping_address")
            .prefetch_related(Prefetch("items", queryset=LineItem.objects.select_related("product")))
            .only("id", "status", "total", "customer__name", "shipping_address__city")
            .order_by("-created_at")
        )

    def get_context_data(self, **kwargs):
        ctx = super().get_context_data(**kwargs)
        ctx["total_revenue"] = Order.objects.aggregate(total=Sum("total"))["total"] or 0
        return ctx

# orders/tests/test_views.py
def test_order_list_uses_at_most_5_queries(client, django_assert_num_queries):
    OrderFactory.create_batch(50)
    with django_assert_num_queries(5):
        response = client.get("/orders/")
    assert response.status_code == 200

The list page is constant-query regardless of row count. The aggregate is one SQL SUM. A regression that adds an N+1 fails the test on the next CI run.
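The batch_size discipline in the rule reduces to a fixed-size chunker feeding one bulk call per chunk. A framework-free sketch — `dirty_orders` and the commented-out ORM call are hypothetical usage, and the 500 mirrors the rule's figure:

```python
from itertools import islice
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")

def batched(items: Iterable[T], batch_size: int = 500) -> Iterator[list[T]]:
    """Yield successive lists of at most batch_size items."""
    it = iter(items)
    while chunk := list(islice(it, batch_size)):
        yield chunk

# Hypothetical usage: one bulk_update per chunk instead of one save() per row.
# for chunk in batched(dirty_orders, 500):
#     Order.objects.bulk_update(chunk, fields=["status"], batch_size=500)
```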

Rule 3: Class-Based Views, DRF ViewSets, and Serializers Done Right

Cursor's default DRF code is a @api_view(["GET", "POST"]) function with Response(data, status=...) and inline serializer.is_valid() calls — fine for a one-off, terrible for a real API. The framework's ViewSet + Router + Serializer + Permission trio is the long-term path. The rule covers the structure and the parts that AI gets wrong: read vs. write serializers, get_queryset filtering by request, get_serializer_class per action, validators on the serializer (not the view), and ModelSerializer.create/update overrides only when justified.

The rule:

Every API resource is a `ModelViewSet` (or `ReadOnlyModelViewSet` /
`GenericViewSet` mixed with explicit mixins) wired through a `DefaultRouter`.
Function-based `@api_view` is reserved for genuinely-not-a-resource endpoints
(health checks, login, webhook receivers).

Two serializers per resource:
  - {Resource}WriteSerializer: input validation, used for create/update
  - {Resource}ReadSerializer: output shape, used for list/retrieve and
    as the response after a write

Switch via `get_serializer_class`:
  def get_serializer_class(self):
      if self.action in ("list", "retrieve"):
          return OrderReadSerializer
      return OrderWriteSerializer

`get_queryset` always scopes by the authenticated principal:
  def get_queryset(self):
      return Order.objects.filter(customer=self.request.user.customer)
Never return `Model.objects.all()` from a tenant-scoped resource.

Permissions are classes registered on the viewset, not `if request.user...`
checks inside the action. `IsAuthenticated`, custom `IsOwner`, and
object-level `has_object_permission`.

Filtering uses `django-filter`'s `FilterSet` or DRF's `OrderingFilter` /
`SearchFilter` — never hand-rolled `request.query_params.get("...")`.

Pagination is configured at `DEFAULT_PAGINATION_CLASS` (CursorPagination
for time-ordered, PageNumberPagination for everything else). Never raw
slicing with `[offset:limit]`.

`Serializer.create / update` is only overridden when the write touches
multiple models or external systems; otherwise let the framework do it.

Error responses are RFC-7807-shaped via a custom exception handler.
Bare `Response({"error": "..."}, status=...)` is forbidden in viewsets.

Before — function view, one serializer for everything, manual auth, no pagination:

@api_view(["GET", "POST"])
def orders(request):
    if request.user.is_anonymous:
        return Response({"error": "auth"}, status=401)
    if request.method == "GET":
        qs = Order.objects.all()
        return Response(OrderSerializer(qs, many=True).data)
    serializer = OrderSerializer(data=request.data)
    if not serializer.is_valid():
        return Response(serializer.errors, status=400)
    serializer.save()
    return Response(serializer.data, status=201)

After — viewset, split serializers, scoped queryset, configured pagination:

class OrderViewSet(viewsets.ModelViewSet):
    permission_classes = [IsAuthenticated, IsCustomerOwner]
    filter_backends = [DjangoFilterBackend, OrderingFilter]
    filterset_class = OrderFilter
    ordering_fields = ["created_at", "total"]
    ordering = ["-created_at"]

    def get_queryset(self) -> QuerySet[Order]:
        return (
            Order.objects
            .filter(customer=self.request.user.customer)
            .select_related("customer", "shipping_address")
            .prefetch_related("items__product")
        )

    def get_serializer_class(self):
        return OrderReadSerializer if self.action in ("list", "retrieve") else OrderWriteSerializer

    def perform_create(self, serializer: OrderWriteSerializer) -> None:
        order = orders_services.place_order(
            customer=self.request.user.customer,
            **serializer.validated_data,
        )
        serializer.instance = order

# urls.py
router = DefaultRouter()
router.register("orders", OrderViewSet, basename="order")
urlpatterns = [path("api/v1/", include(router.urls))]

Response shapes are consistent on the wire. Tenant scoping is enforced in one place. Permissions are declarative. Routing — and OpenAPI schema generation, if enabled — comes for free from the router.
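The RFC 7807 shape the rule asks the custom exception handler to emit is a small dict contract. A framework-free sketch of the body builder — field names come from the RFC; the wrapping into a DRF Response with `application/problem+json` is described in the comment, not implemented:

```python
def problem_detail(*, status: int, title: str, detail: str = "",
                   type_uri: str = "about:blank") -> dict:
    """Build an RFC 7807 'problem details' body.

    A custom DRF EXCEPTION_HANDLER would wrap this dict in a Response
    with content type application/problem+json.
    """
    body = {"type": type_uri, "title": title, "status": status}
    if detail:  # "detail" is optional per the RFC; omit rather than send ""
        body["detail"] = detail
    return body

err = problem_detail(status=400, title="Over credit limit",
                     detail="Attempted charge exceeds the customer's limit.")
```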

Rule 4: Migrations Are Code — Reversible, Idempotent, and Zero-Downtime by Default

Migrations are where AI confidently emits the change that will lock your largest table for an hour. Cursor will write models.CharField(max_length=200, null=False, default="") and add it to a fifty-million-row table without batching the backfill, without checking that PostgreSQL will rewrite the table, without separating the schema change from the data fill from the constraint enforcement. The rule below codifies the "expand-migrate-contract" pattern that every production Django shop has learned the hard way.

The rule:

Every migration is reviewed against three questions, in order:
  1. Does PostgreSQL rewrite the table or take an ACCESS EXCLUSIVE lock
     for more than a few hundred ms?
  2. Is the migration safe to deploy with the OLD code still running
     (forward compatibility)?
  3. Is it reversible, or — if not — is the irreversibility documented
     in the migration's Meta and the PR description?

Adding a NOT NULL column to a non-empty table is THREE migrations:
  1. Add the column nullable, with a server-side default if the new
     code reads it.
  2. Backfill in batches via `RunPython` (or out-of-band via a
     management command) — chunk by primary key range, COMMIT per
     chunk, never a single UPDATE over the whole table.
  3. Once the backfill is verified, alter the column to NOT NULL.
Never combine the three into one.

`RunPython` operations always pair forward + reverse. If reverse is a
no-op, write `migrations.RunPython.noop` explicitly with a comment.

Never import models directly (`from app.models import Model`) inside
`RunPython` — fetch the historical model via `apps.get_model("app", "Model")`
so the migration runs against the schema as it was when it was written.

Index creation on big tables uses `AddIndexConcurrently` (PostgreSQL).
Constraint additions use `AddConstraint` only after verifying no
existing rows violate it.

`makemigrations --check --dry-run` runs in CI and fails the build if
models drift from migrations.

Squash migrations once an app's history exceeds ~30 files; never edit
historical migrations in place after they have been deployed.

`atomic = False` on the migration class for any operation that cannot
run inside a transaction (CONCURRENTLY indexes, certain ALTER TYPEs).

Before — one-shot NOT NULL on a big table, no batching, no reverse:

class Migration(migrations.Migration):
    dependencies = [("orders", "0014_auto")]
    operations = [
        migrations.AddField(
            model_name="order",
            name="region",
            field=models.CharField(max_length=8, default="US"),
        ),
    ]

On PostgreSQL before 11 this rewrites every row of a 50M-row table under an ACCESS EXCLUSIVE lock and times out the deploy; newer versions store the default as metadata, but bundling the schema change, the backfill, and the constraint into one deploy still breaks forward compatibility — old code is serving traffic while the migration assumes the new world.

After — three separate migrations, batched backfill, reversible:

# 0015_add_region_nullable.py
class Migration(migrations.Migration):
    dependencies = [("orders", "0014_auto")]
    operations = [
        migrations.AddField(
            model_name="order",
            name="region",
            field=models.CharField(max_length=8, null=True),
        ),
    ]

# 0016_backfill_region.py
from more_itertools import chunked  # any fixed-size batcher works here

def forward(apps, schema_editor):
    Order = apps.get_model("orders", "Order")
    qs = Order.objects.filter(region__isnull=True).only("id", "shipping_address_id")
    for batch in chunked(qs.iterator(chunk_size=2000), 2000):
        ids = [o.id for o in batch]
        Order.objects.filter(id__in=ids).update(region=Subquery(_region_sql(ids)))

def reverse(apps, schema_editor):
    Order = apps.get_model("orders", "Order")
    Order.objects.update(region=None)

class Migration(migrations.Migration):
    atomic = False
    dependencies = [("orders", "0015_add_region_nullable")]
    operations = [migrations.RunPython(forward, reverse)]

# 0017_region_not_null.py
class Migration(migrations.Migration):
    dependencies = [("orders", "0016_backfill_region")]
    operations = [
        migrations.AlterField(
            model_name="order",
            name="region",
            field=models.CharField(max_length=8, null=False, default="US"),
        ),
    ]

Each step is small, reversible, and ships independently. Old code reading the table sees nullable values; new code writing the column sets them.
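The "chunk by primary key range, commit per chunk" discipline in step 2 is keyset pagination in a loop. A framework-free sketch of the driver — `fetch_page` and `apply` are hypothetical stand-ins for the ORM's filtered SELECT and batched UPDATE:

```python
from typing import Callable, Sequence

def backfill_in_chunks(
    fetch_page: Callable[[int, int], Sequence[int]],  # (after_id, limit) -> ids
    apply: Callable[[Sequence[int]], None],           # UPDATE ... WHERE id IN ids
    chunk_size: int = 2000,
) -> int:
    """Walk the table by ascending id, one bounded page per iteration.

    Each apply() call maps to one short transaction, so no single
    statement holds locks across the whole table.
    """
    done, last_id = 0, 0
    while ids := list(fetch_page(last_id, chunk_size)):
        apply(ids)
        done += len(ids)
        last_id = ids[-1]  # keyset cursor: resume after the last seen id
    return done
```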

Rule 5: Settings Are Layered, Secrets Live in the Environment, and DEBUG = True Is a CI Failure

Every long-lived Django project I've inherited has had one settings.py with DEBUG = True if DEBUG else False, a hard-coded SECRET_KEY, and a database password literally in the file. Cursor will reproduce this faithfully if you let it. The only sustainable layout is layered: settings/base.py, settings/local.py, settings/production.py, with DJANGO_SETTINGS_MODULE set per-environment and every secret loaded from the environment via django-environ or pydantic-settings. The rule below is the minimum.

The rule:

Settings layout:
  config/settings/
    base.py         # everything shared, no secrets, no env-specific
    local.py        # DEBUG=True, console email, dev DB, Django Debug Toolbar
    test.py         # SQLite in-memory or test PG, no Celery, eager mode
    production.py   # DEBUG=False, structured logging, real cache, real email

`base.py` reads the environment via `environ.Env()` (or `pydantic-settings`).
SECRET_KEY, DATABASE_URL, EMAIL_HOST_PASSWORD, third-party API keys —
all `env("...")`. Never a literal string.

`production.py`:
  - `DEBUG = False` (CI fails the build if any settings module sets True
    while `ENV=production`)
  - `ALLOWED_HOSTS` from env, comma-split
  - `SECURE_PROXY_SSL_HEADER`, `SECURE_HSTS_SECONDS`, `SESSION_COOKIE_SECURE`,
    `CSRF_COOKIE_SECURE`, `SECURE_SSL_REDIRECT` all set
  - `LOGGING` configured with structured JSON (python-json-logger or
    structlog) — no print(), and never just Django's default logging config

Never commit a `.env` file. Commit `.env.example` with placeholder
values and a comment per variable.

Database `OPTIONS={"connect_timeout": 5}` and `CONN_MAX_AGE` set
explicitly — never the framework default.

`django-storages` for media to S3 / GCS in production. `MEDIA_ROOT` is
local-only.

Cache backend: Redis in production, locmem in tests, never the dummy
backend in production by mistake.

Sentry / Honeybadger / equivalent wired in `production.py` via
`sentry_sdk.init()` (the legacy `raven` client is long deprecated) —
required, not optional.

Before — single settings, secrets in source, debug always on:

# settings.py
DEBUG = True
SECRET_KEY = "django-insecure-abcd1234"
DATABASES = {"default": {"ENGINE": "...", "NAME": "myapp", "USER": "root", "PASSWORD": "hunter2"}}
ALLOWED_HOSTS = ["*"]
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"

After — layered, env-driven, hardened production:

# config/settings/base.py
import environ
env = environ.Env(DEBUG=(bool, False))
environ.Env.read_env(BASE_DIR / ".env")

SECRET_KEY = env("SECRET_KEY")
DEBUG = env("DEBUG")
ALLOWED_HOSTS = env.list("ALLOWED_HOSTS", default=[])
DATABASES = {"default": env.db("DATABASE_URL")}
DATABASES["default"]["CONN_MAX_AGE"] = 60
DATABASES["default"]["OPTIONS"] = {"connect_timeout": 5}
CACHES = {"default": env.cache("REDIS_URL", backend="django_redis.cache.RedisCache")}

# config/settings/production.py
from .base import *
DEBUG = False
SECURE_SSL_REDIRECT = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_HSTS_SECONDS = 31536000
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
# DEFAULT_FILE_STORAGE was deprecated in Django 4.2 and removed in 5.1
STORAGES = {
    "default": {"BACKEND": "storages.backends.s3boto3.S3Boto3Storage"},
    "staticfiles": {"BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage"},
}
AWS_STORAGE_BUCKET_NAME = env("AWS_STORAGE_BUCKET_NAME")

import sentry_sdk
sentry_sdk.init(dsn=env("SENTRY_DSN"), traces_sample_rate=0.1, send_default_pii=False)

LOGGING = {
    "version": 1, "disable_existing_loggers": False,
    "formatters": {"json": {"()": "pythonjsonlogger.jsonlogger.JsonFormatter"}},
    "handlers": {"console": {"class": "logging.StreamHandler", "formatter": "json"}},
    "root": {"level": "INFO", "handlers": ["console"]},
}

Production has no secret in source. DEBUG=True cannot ship by accident. Logs are structured. Errors page Sentry, not stdout.
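The `.env.example` the rule demands can be this small — variable names mirror the settings above; every value is a placeholder:

```
# .env.example — placeholders only; copy to .env and fill in per environment
SECRET_KEY=change-me                        # signing key, generate per environment
DEBUG=False                                 # never True outside local
ALLOWED_HOSTS=app.example.com               # comma-separated
DATABASE_URL=postgres://user:pass@db:5432/myapp
REDIS_URL=redis://redis:6379/0              # cache backend
SENTRY_DSN=                                 # leave empty to disable locally
AWS_STORAGE_BUCKET_NAME=my-bucket
```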

Rule 6: Signals Are a Last Resort, transaction.on_commit Is the Right Hammer

Django signals are the framework's most-misused feature. Cursor will reach for post_save to "decouple" two pieces of code that should call each other directly, then six months later you cannot trace the call graph and a single User.save() triggers four signal handlers, two of which open new database connections and one of which sends an email mid-transaction. The rule is: signals are reserved for genuine cross-cutting concerns (audit logs, third-party app hooks). For "after I save this, do X," transaction.on_commit(...) is the right primitive.

The rule:

`post_save` / `pre_save` / `post_delete` signals are forbidden for
intra-app coupling. If module A wants to do something when B saves,
B's save method or B's service function calls A directly.

Signals are allowed only for:
  - Cross-cutting audit logs (record every change to a tracked model)
  - Third-party-app hooks (allauth's `user_signed_up`, etc.)
  - Reusable Django apps that genuinely cannot import their consumer

When a signal IS used, the receiver is in `signals.py`, registered in
`AppConfig.ready()`, and decorated `@receiver(post_save, sender=Model,
dispatch_uid="unique_id")` to prevent duplicate registration.

Signal receivers do NOT do I/O directly — they enqueue Celery tasks
or schedule `transaction.on_commit` callbacks. A receiver that calls
`requests.post` synchronously is a code-review reject.

`transaction.on_commit(callback)` is the canonical "after this write
sticks, do X" primitive. Use it for every webhook dispatch, every
email send, every cache invalidation that follows a write.

Bulk operations (`bulk_create`, `update`) DO NOT fire signals — never
rely on signal-driven side effects for anything that might be batched.

Before — signal does I/O, fires inside the transaction, untraceable coupling:

@receiver(post_save, sender=Order)
def order_saved(sender, instance, created, **kwargs):
    if created:
        requests.post("https://hooks.example.com/order", json={"id": instance.id})
        send_mail("Order placed", f"#{instance.id}", "ops@x", [instance.customer.email])

If the surrounding transaction rolls back, the webhook and email already fired. The grep for "what happens when an Order is saved?" finds nothing in views.py.

After — explicit on_commit dispatch from the service, signal eliminated:

# orders/services.py
def place_order(*, customer, product_ids):
    with transaction.atomic():
        order = _create_order(customer, product_ids)
        transaction.on_commit(lambda: notify_order_created.delay(order.id))
        transaction.on_commit(lambda: send_order_email.delay(order.id))
    return order

# orders/tasks.py
@shared_task(bind=True, autoretry_for=(requests.RequestException,), retry_backoff=True, max_retries=5)
def notify_order_created(self, order_id: int) -> None:
    order = Order.objects.get(id=order_id)
    requests.post("https://hooks.example.com/order", json={"id": order.id}, timeout=5)

Webhook fires only if the transaction commits. Each side effect is a discoverable, retryable Celery task. Searching "where is notify_order_created called?" finds the line.
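The on_commit contract — callbacks fire only after a successful commit, never on rollback — can be simulated framework-free. A toy sketch of the semantics, not Django's implementation:

```python
class FakeTransaction:
    """Toy model of Django's transaction.on_commit semantics."""

    def __init__(self) -> None:
        self._callbacks = []

    def on_commit(self, func) -> None:
        self._callbacks.append(func)   # deferred, not run yet

    def commit(self) -> None:
        callbacks, self._callbacks = self._callbacks, []
        for func in callbacks:         # fire only now that the write stuck
            func()

    def rollback(self) -> None:
        self._callbacks.clear()        # rolled back: side effects never fire

fired = []
txn = FakeTransaction()
txn.on_commit(lambda: fired.append("webhook"))
txn.rollback()          # first callback is dropped, never runs
txn.on_commit(lambda: fired.append("email"))
txn.commit()            # only the callback registered after rollback fires
```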

Rule 7: Celery Tasks Are Idempotent, Bounded, and Survive Restarts

Cursor writes Celery tasks the way it writes background jobs: a function decorated @shared_task that does the work, with no retries, no idempotency key, no timeout, and a database session opened with Order.objects.create(...) on whatever the broker happens to be configured with. Then a deploy restarts the worker mid-task and the task runs twice — emails sent twice, charges applied twice, webhooks delivered twice. The rule below is the pattern that survives a pager rotation.

The rule:

Every Celery task:
  - Takes the smallest possible argument set: IDs, not whole model
    instances. Pickle of a model instance is forbidden.
  - Is idempotent: re-running with the same args produces the same
    end state. Implement via DB unique constraints, "if already done,
    return" guards, or external idempotency keys.
  - Has explicit `bind=True`, `autoretry_for=(...)`, `retry_backoff=True`,
    `max_retries=N`, `soft_time_limit=...`, `time_limit=...`.
  - Acks late: `acks_late=True` on the task decorator AND
    `task_acks_late=True` plus `worker_prefetch_multiplier=1` in
    config — without both, lost workers lose tasks.
  - Logs at start with task name + args, on success with duration,
    on failure with exception. Use the worker's logger, not print.

Tasks NEVER call `transaction.on_commit` to schedule themselves —
they ARE the post-commit work.

Long-running tasks chunk their work and re-enqueue: a "process all
orders for region X" task processes 500 orders, then enqueues itself
with the next cursor. No task runs longer than `soft_time_limit`.

Beat (periodic) tasks live in `celery.py`'s `beat_schedule`, not in
ad-hoc `@periodic_task` decorators (deprecated in Celery 4, removed in 5).
Schedule entries have a stable name so they can be found, audited, and
paused deliberately.

Result backend is configured ONLY if results are actually used — most
"fire and forget" pipelines should set `task_ignore_result=True`.

Workers run with `--concurrency` matching CPU * 2 for I/O-bound,
CPU count for CPU-bound. Never the default of "however many cores
the box has" without thinking about it.

Dead-letter handling: on `MaxRetriesExceededError`, route to a
dead_letter queue, alert via Sentry, never silently drop.

Before — passes model instance, no retries, no timeout, not idempotent:

@shared_task
def send_invoice(order):
    pdf = generate_pdf(order)
    smtp.send(order.customer.email, "Invoice", pdf)

Pickled order may be stale by task time. SMTP failure means lost email forever. A retry sends two invoices.

After — id arg, idempotency, retries, observability:

@shared_task(
    bind=True,
    autoretry_for=(SMTPException, requests.RequestException),
    retry_backoff=True,
    retry_backoff_max=600,
    max_retries=8,
    soft_time_limit=120,
    time_limit=180,
    acks_late=True,
)
def send_invoice(self, order_id: int) -> None:
    order = Order.objects.select_related("customer").get(id=order_id)
    if InvoiceSent.objects.filter(order=order).exists():
        logger.info("send_invoice.skip_duplicate order_id=%s", order_id)
        return
    pdf = generate_pdf(order)
    mailer.send(order.customer.email, "Invoice", pdf)
    InvoiceSent.objects.create(order=order, sent_at=timezone.now())

InvoiceSent is the idempotency record. Retries are exponential. The unique constraint on InvoiceSent(order) makes double-send impossible even under a race.

Rule 8: Testing With pytest-django, factory_boy, and Real Database Fixtures

Cursor's default test is TestCase with Order.objects.create(...) to set up data and a self.client.get("/orders/") assertion on response.status_code. That works for two tests. By the tenth, you have a 200-line setUp per test class, hand-typed model dicts everywhere, and a CI run that takes ten minutes because every test resets the database. pytest-django plus factory_boy (or model_bakery) plus database-reuse plus real Postgres is the path that scales.

The rule:

Test runner: pytest + pytest-django. `TestCase` subclasses are migrated
to plain `def test_...(db, ...):` functions on touch. New tests are pytest
style only.

Database: real Postgres in CI (matches production for JSONB, ArrayField,
window functions, full-text search). SQLite is allowed only for the
fastest unit tests that touch no Postgres-specific feature.

`pytest-django` config: `--reuse-db` locally, `--create-db` only when
migrations changed, `--nomigrations` is forbidden (it bypasses the
expand/contract pattern from Rule 4 and can mask broken migrations).

Test data: `factory_boy` Factories per model. `OrderFactory.create()`,
`OrderFactory.create_batch(50)`, `OrderFactory.build()` for in-memory.
Hand-typed `Order(customer_id=1, total=Decimal("0.00"), ...)` is forbidden
in tests — every required field becomes a maintenance burden.

External services: `responses` for `requests`, `respx` for `httpx`,
`unittest.mock.patch` for boto3 / external SDKs. NEVER mock at the
ORM layer (don't `mock.patch("app.models.Order.objects.filter")`).

`assertNumQueries` (or `django_assert_num_queries` fixture) on every
list-style endpoint test, with a tight upper bound. A test that doesn't
care about query count for a list endpoint is incomplete.

Authentication in tests: `client.force_login(user)` for session auth,
DRF `client.force_authenticate(user)` for token auth. Manual cookie
poking is a smell.

Coverage: >85% on services, >75% on views. Property-based tests
(hypothesis) for any function with non-trivial input space.

Before — Django TestCase, hand-typed model dicts, no query assertion:

class OrderTestCase(TestCase):
    def setUp(self):
        self.customer = Customer.objects.create(name="Ada", email="a@b.c", credit_limit=1000)
        self.product = Product.objects.create(name="Widget", price=10)

    def test_list_orders(self):
        for i in range(50):
            Order.objects.create(customer=self.customer, total=10)
        self.client.force_login(self.customer.user)
        resp = self.client.get("/api/v1/orders/")
        self.assertEqual(resp.status_code, 200)

No query bound. Fifty orders, no items, no realism. A future N+1 regression goes unnoticed.

After — pytest, factories, query bound, scoped queryset:

# orders/factories.py
from decimal import Decimal

from factory import Faker, SubFactory
from factory.django import DjangoModelFactory

from orders.models import Customer, Order

class CustomerFactory(DjangoModelFactory):
    class Meta:
        model = Customer
    name = Faker("name")
    email = Faker("email")
    credit_limit = Decimal("1000.00")
    # the auth user the API tests log in as; assumes a UserFactory exists
    user = SubFactory("accounts.factories.UserFactory")

class OrderFactory(DjangoModelFactory):
    class Meta:
        model = Order
    customer = SubFactory(CustomerFactory)
    total = Decimal("100.00")
    status = "pending"

# orders/tests/test_views.py
import pytest

from orders.factories import CustomerFactory, OrderFactory

# api_client: conftest fixture wrapping rest_framework.test.APIClient
def test_list_orders_returns_owned_only(db, api_client, django_assert_num_queries):
    me = CustomerFactory()
    other = CustomerFactory()
    OrderFactory.create_batch(20, customer=me)
    OrderFactory.create_batch(5, customer=other)
    api_client.force_authenticate(me.user)

    with django_assert_num_queries(5):
        resp = api_client.get("/api/v1/orders/?ordering=-created_at")

    assert resp.status_code == 200
    body = resp.json()
    assert body["count"] == 20
    assert all(o["customer_id"] == me.id for o in body["results"])

@pytest.mark.parametrize("status,expected", [("pending", 200), ("shipped", 400)])
def test_cancel_order_status_transition(db, api_client, status, expected):
    order = OrderFactory(status=status)
    api_client.force_authenticate(order.customer.user)
    resp = api_client.post(f"/api/v1/orders/{order.id}/cancel/")
    assert resp.status_code == expected

Five queries max. Tenant scoping verified. Status-transition matrix tested with one parametrize, not five copy-pasted tests.

The Complete .cursorrules File

Drop this in the repo root. Cursor and Claude Code both pick it up.

# Django — Production Patterns

## Models & Services
- Fat models, thin views. Views <30 lines: parse input, call service,
  return response.
- Domain operations on the model (single instance), Manager/QuerySet
  (queries), or services.py (multi-model / external).
- Models validate via clean() and field validators; save() calls
  full_clean() in user-hit paths.
- Never blocking I/O in save(). Use transaction.on_commit + Celery.
- Cross-app calls go through services, never reach into other apps'
  models.

## Query Discipline
- Every queryset crossing a relation declares select_related /
  prefetch_related.
- >1k rows iterate via .iterator(chunk_size=...). Materializing
  unbounded querysets is forbidden.
- Use .only() / .values() when only a few fields are needed.
- count() not len(qs); exists() not bool(qs); first() not [0].
- Aggregations in DB (Sum/Avg/Count), never Python sum() over qs.
- Bulk writes via bulk_create / bulk_update with batch_size; loops of
  .save() over >20 items are rejected.
- assertNumQueries on every list view test.

## Views & DRF
- ModelViewSet + DefaultRouter. @api_view only for non-resource endpoints.
- Two serializers per resource: Read (output) and Write (input);
  switch via get_serializer_class.
- get_queryset always tenant-scoped. Never Model.objects.all() on a
  scoped resource.
- Permissions as classes; no in-action user checks.
- django-filter for filtering, DRF OrderingFilter / SearchFilter; no
  hand-rolled query_params.get.
- CursorPagination (time-ordered) or PageNumberPagination configured
  globally.
- RFC-7807 error responses via custom exception_handler.

## Migrations
- Three questions per migration: table rewrite? forward-compatible?
  reversible?
- NOT NULL on a non-empty table = three migrations (nullable add,
  batched backfill, NOT NULL alter).
- RunPython operations always have a reverse (or .noop with comment).
- Inside RunPython, fetch the historical model via apps.get_model().
- AddIndexConcurrently for big-table indexes (atomic = False).
- makemigrations --check --dry-run runs in CI.
- Never edit a deployed migration in place.

## Settings
- Layered: settings/base.py, local.py, test.py, production.py.
- All secrets via environ.Env (or pydantic-settings); never literals.
- production.py: DEBUG=False, SECURE_* hardening, structured JSON
  logging, Sentry init.
- DATABASES["default"]["CONN_MAX_AGE"] and OPTIONS["connect_timeout"]
  set explicitly.
- django-storages to S3 in production for media.
- .env never committed; .env.example always committed.

## Signals
- Forbidden for intra-app coupling. Direct calls or services instead.
- Allowed only for cross-cutting (audit) and third-party hooks.
- @receiver decorator with dispatch_uid. Receivers in signals.py,
  registered in AppConfig.ready().
- Receivers enqueue Celery tasks; never call requests.post directly.
- transaction.on_commit() is the canonical "after-write" primitive.
- Bulk ops do NOT fire signals — never rely on signal side effects
  for batched paths.

## Celery
- Tasks take IDs, not model instances. No pickling models.
- Idempotent: unique constraints, "already done" guards, idempotency keys.
- Decorator: bind=True, autoretry_for, retry_backoff, max_retries,
  soft_time_limit, time_limit, acks_late=True.
- Config: task_acks_late=True, worker_prefetch_multiplier=1.
- Long tasks chunk + re-enqueue with cursor; never exceed soft_time_limit.
- Periodic tasks in beat_schedule with stable names.
- task_ignore_result=True unless results are actually consumed.
- DLQ + Sentry alert on MaxRetriesExceededError.

## Testing
- pytest + pytest-django. New tests are pytest-style; TestCase migrated
  on touch.
- Postgres in CI, SQLite only for the fastest no-Postgres-feature tests.
- --reuse-db local, --create-db on migration change. --nomigrations is
  forbidden.
- factory_boy for all test data; hand-typed model dicts forbidden.
- assertNumQueries on every list-endpoint test.
- responses / respx / mock for external; never mock the ORM layer.
- force_login / force_authenticate in tests; no manual cookie poking.
- Coverage >85% services, >75% views; hypothesis on critical input
  spaces.
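
One rule above that benefits from a concrete shape is the RFC-7807 error body. A minimal problem-details builder, pure Python — the DRF `exception_handler` wiring is deliberately omitted, and the problem-type URI is illustrative:

```python
# Minimal RFC 7807 "problem details" payload, as a custom DRF
# exception_handler would return it (handler wiring omitted).

def problem(status: int, title: str, detail: str = "",
            type_uri: str = "about:blank", **extensions) -> dict:
    body = {"type": type_uri, "title": title, "status": status}
    if detail:
        body["detail"] = detail
    body.update(extensions)  # RFC 7807 permits extension members
    return body

# e.g. a 400 for an illegal order-status transition:
resp = problem(
    400,
    "Invalid state transition",
    detail="Order 42 is already shipped and cannot be cancelled.",
    type_uri="https://example.com/problems/invalid-transition",
    order_id=42,
)
```

Every error path then returns the same four-or-five-key shape, which is what makes client-side error handling and alerting tractable.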

End-to-End Example: A Paginated Order List With Filtering and Tenant Scoping

Without rules: function view, full ORM dump, no scoping, N+1, no pagination, no tests.

@api_view(["GET"])
def orders(request):
    q = request.GET.get("q", "")
    qs = Order.objects.all()
    if q:
        qs = qs.filter(customer__name__icontains=q)
    return Response([{"id": o.id, "customer": o.customer.name, "total": o.total} for o in qs])

With rules: viewset, scoped queryset with select_related, declarative filter and pagination, query-bounded test.

# orders/filters.py
from django_filters import CharFilter, ChoiceFilter, FilterSet

from orders.models import Order

class OrderFilter(FilterSet):
    q = CharFilter(field_name="customer__name", lookup_expr="icontains")
    status = ChoiceFilter(choices=Order.STATUS_CHOICES)
    class Meta:
        model = Order
        fields = ["q", "status"]

# orders/views.py
from django_filters.rest_framework import DjangoFilterBackend
from rest_framework import viewsets
from rest_framework.filters import OrderingFilter
from rest_framework.pagination import CursorPagination
from rest_framework.permissions import IsAuthenticated

from orders.filters import OrderFilter
from orders.models import Order
from orders.permissions import IsCustomerOwner
from orders.serializers import OrderReadSerializer, OrderWriteSerializer

class OrderViewSet(viewsets.ModelViewSet):
    permission_classes = [IsAuthenticated, IsCustomerOwner]
    filter_backends = [DjangoFilterBackend, OrderingFilter]
    filterset_class = OrderFilter
    ordering = ["-created_at"]
    pagination_class = CursorPagination

    def get_queryset(self):
        return (
            Order.objects
            .filter(customer=self.request.user.customer)
            .select_related("customer", "shipping_address")
            .prefetch_related("items__product")
            .only("id", "status", "total", "created_at",
                  "customer__name", "shipping_address__city")
        )

    def get_serializer_class(self):
        return OrderReadSerializer if self.action in ("list", "retrieve") else OrderWriteSerializer

# orders/tests/test_orders_api.py
from orders.factories import CustomerFactory, OrderFactory

def test_list_orders_constant_query_count(db, api_client, django_assert_num_queries):
    me = CustomerFactory()
    OrderFactory.create_batch(40, customer=me)
    api_client.force_authenticate(me.user)
    with django_assert_num_queries(5):
        resp = api_client.get("/api/v1/orders/?ordering=-created_at")
    assert resp.status_code == 200

Constant queries regardless of row count. Tenant-scoped. Filter is declarative. The test prevents future N+1 regressions in CI.

Get the Full Pack

These eight rules cover the Django patterns where AI assistants consistently reach for the wrong idiom. Drop them into .cursorrules and the next prompt you write will look different — fat-model, query-disciplined, viewset-shaped, migration-safe, layered-settings, signal-restrained, Celery-idempotent, pytest-tested Django, without having to re-prompt.

If you want the expanded pack — these eight plus rules for Django Channels and websockets, GraphQL with Strawberry, multi-tenant patterns (django-tenants), feature flags (django-waffle), audit logging (django-simple-history), search (Postgres FTS and OpenSearch), caching with cache.get_or_set patterns, OpenTelemetry instrumentation for Django + Celery, and the deploy patterns I use for Django on Kubernetes — it is bundled in Cursor Rules Pack v2 ($27, one payment, lifetime updates). Drop it in your repo, stop fighting your AI, ship Django you would actually merge.

Source

This article was originally published by DEV Community and written by Olivia Craft.
