I'll add a few of my own:
* Set up the project using uv
* I name the django project "project"; so settings are project/settings.py, main urls are project/urls.py, etc
* I always define a custom Django user model even if I don't need anything extra yet; easier to expand later
* settings.py actually conflates project config (Django apps, middleware, etc) and instance/environment config (Database access, storages, email, auth...); I hardcode the project config (since that doesn't change between environments) and use python-dotenv to pull settings from the environment / .env; I document all such configurable vars in .env.example, and the defaults are sane for local/dev setup (such as DEBUG=true, SQLite database, ALLOWED_HOSTS=*, and a randomly-generated SECRET_KEY); oh and I use dj-database-url to use DATABASE_URL (defaults to sqlite:///sqlite.db) (a sketch of this is below)
* I immediately set up ruff, ty, pytest, a pre-commit hook and a GH workflow to run ruff/ty/pytest
Previously I had elaborate scaffolding/skeleton templates; nowadays it's a small shell script and I tell Claude to adapt settings.py as per the above instructions :)
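A minimal sketch of what that settings.py split can look like, assuming python-dotenv and dj-database-url are installed; the variable names and defaults here are illustrative, not a standard:

    # project/settings.py (excerpt)
    import os
    from pathlib import Path

    import dj_database_url
    from dotenv import load_dotenv
    from django.core.management.utils import get_random_secret_key

    BASE_DIR = Path(__file__).resolve().parent.parent
    load_dotenv(BASE_DIR / ".env")  # no-op if the file doesn't exist

    # Instance/environment config: everything below comes from the environment,
    # with defaults that are sane for local development only.
    DEBUG = os.getenv("DEBUG", "true").lower() == "true"
    SECRET_KEY = os.getenv("SECRET_KEY", get_random_secret_key())  # regenerated each start unless set
    ALLOWED_HOSTS = os.getenv("ALLOWED_HOSTS", "*").split(",")
    DATABASES = {
        "default": dj_database_url.config(default="sqlite:///sqlite.db"),
    }

    # Project config: hardcoded, identical in every environment.
    INSTALLED_APPS = [
        "django.contrib.admin",
        "django.contrib.auth",
        "django.contrib.contenttypes",
        "django.contrib.sessions",
        "django.contrib.messages",
        "django.contrib.staticfiles",
    ]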
I've been really enjoying ruff/ty on my non-Django projects. Was there anything special you had to do to make ty play nice with Django? I kind of assumed with how dynamic a lot of its functionality is ty would just throw a type error for every Model.objects.whatever call.
I'll add one: shell_plus. It makes the django shell so much nicer to use, especially on larger projects (mostly because it auto-imports all your models). IIRC, it involves adding ipython and django-extensions as dependencies, and then adding django_extensions (annoyingly, note that the dash changes to an underscore there, which trips me up every time I add it) to your installed apps.
Saying that, I'm sure django-extensions does a lot more than shell_plus but I've never actually explored what those extra features are, so I think I'll do that now
Edit: Turns out you can use bpython, ptpython or none at all with shell_plus, so good to know if you prefer any of them to ipython
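For reference, the wiring is roughly this (a sketch; assumes django-extensions and ipython are already installed as dependencies):

    # settings.py -- note the underscore in the app label,
    # even though the PyPI package is "django-extensions"
    INSTALLED_APPS = [
        # ... the usual django.contrib apps and your own apps ...
        "django_extensions",
    ]

After that, `python manage.py shell_plus` drops you into a shell with all models auto-imported, via ipython if it's installed (or bpython/ptpython/plain, as noted above).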
Django does this by default now. Since 5.0 if I'm remembering it correctly.
In the default shell? I've definitely started new django projects since 2023 and I seem to remember always having to use shell_plus for that, though maybe that's just become something I automatically add without thinking
Edit: Yep, you're right, wow that's pretty big for me
Also, shell_plus has a --print-sql option for easy construction and debugging of ORM queries.
> * settings.py actually conflates project config (Django apps, middleware, etc) and instance/environment config (Database access, storages, email, auth...); I hardcode the project config (since that doesn't change between environments) and use python-dotenv to pull settings from the environment / .env; I document all such configurable vars in .env.example, and the defaults are sane for local/dev setup (such as DEBUG=true, SQLite database, ALLOWED_HOSTS=*, and a randomly-generated SECRET_KEY); oh and I use dj-database-url to use DATABASE_URL (defaults to sqlite:///sqlite.db)
There is a convention to create "foo_settings.py" for different environments next to "settings.py" and start it with "from .settings import *"
You'll still want something else for secrets, but this works well for everything else, including sane defaults with overrides (like DEBUG=False in the base and True in only the appropriate ones).
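Concretely, the convention looks something like this (file names, hosts and the run commands are just examples):

    # project/production_settings.py
    from .settings import *  # noqa: F401,F403 -- pull in the common settings

    DEBUG = False
    ALLOWED_HOSTS = ["example.com"]

    # then select the module per environment, e.g.:
    #   DJANGO_SETTINGS_MODULE=project.production_settings gunicorn project.wsgi
    #   python manage.py runserver --settings=project.settings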
IMO this is an antipattern, because having a Python file per environment means you have bespoke code for each environment that is difficult to test and that easily diverges between environments.
If you use OP's way (I do something similar using pydantic-settings) the only thing that changes is your environment vars, which are much easier to reason about.
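A rough sketch of the pydantic-settings variant (the module and field names here are my own illustration, not a standard layout):

    # project/env.py -- hypothetical module name
    from pydantic_settings import BaseSettings, SettingsConfigDict

    class EnvConfig(BaseSettings):
        model_config = SettingsConfigDict(env_file=".env")

        debug: bool = True
        secret_key: str = "dev-only-not-secret"
        database_url: str = "sqlite:///sqlite.db"
        # list values are read from the environment as JSON, e.g. ALLOWED_HOSTS='["example.com"]'
        allowed_hosts: list[str] = ["*"]

    env = EnvConfig()  # raises a validation error on malformed values at import time

    # project/settings.py then reads from this object:
    #   from .env import env
    #   DEBUG = env.debug
    #   ALLOWED_HOSTS = env.allowed_hosts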
> use python-dotenv to pull settings from environment / .env
I disagree strongly with this one. All you are doing is moving those settings to a different file. You might as well use a local settings file that reads the common settings.
On production keep things like API keys that need to be kept secret elsewhere - as a minimum outside the project directories and owned by a different user.
Sure, that works as well; for example, on some deploys I set the settings in the systemd service file. However, it's more convenient to just have .env right there.
> On production keep things like API keys that need to be kept secret elsewhere - as a minimum outside the project directories and owned by a different user.
Curious what extra protection this gives you, considering the environment variables are, well, in the environment, and can be read by the process. If someone does a remote code execution attack on the server, they can just read the environment.
The only thing I can imagine it does protect is if you mistakenly expose project root folder on the web server.
That's something that python-dotenv enables. It can pull from the environment, which you can wire up from k8s secrets or whatever is the case for your hosting.
Django aside, I think this is a really important point:
Being able to abandon a project for months or years and then come back to it is really important to me (that's how all my projects work!) ...
It's perhaps especially true for a hobbyist situation, but even in a bigger environment, there is a cost to keeping people on hand who understand how XYZ works, getting new people up to speed, etc.
I, too, have found that my interactions with past versions of myself across decades have been a nice way to learn good habits that also benefit me professionally.
This is the main reason I'm extremely disciplined about making sure all of my personal projects have automated tests (configured to run in CI) and decent documentation.
It makes it so much easier to pick them up again in the future when enough time has passed that I've forgotten almost everything about them.
I'm finding that in this build fast and break things culture, it is hard to revisit a project that is more than 3 years old.
I have a couple of Android projects that are four years old. I have the architecture documented, my notes (to self) about some important details that I thought I was liable to forget, and a raft of tests. Now I can't even get them to load inside the new version of Android Studio or to build. There's a ton of indirection between different components spread over properties, XML and Kotlin, but what makes it worse is that any attempt to upgrade is a delicate dance between different versions and working one's way around deprecated APIs. It isn't just the mobile ecosystem.
I have relatively good experience with both Rust and Go here. It still works, and maybe you need to update 2-3 dependencies that released an incompatible version, but it's not all completely falling apart just because you went on a vacation (looking at you, npm)
Build fast and break things works great if you're the consumer, not the dev polishing the dark side of the monolith (helps if you're getting paid well though)
As a consumer, I cannot remember any feature that I was so enamored with having a week earlier than I otherwise would have, at the expense of breaking things.
Totally relate. My main project lately is for my wife, and it's absolutely rock solid from a testing/automation standpoint. The last thing I want to do is accidentally break something and give her a headache when I'm just trying to build her a nice thing that brings her joy.
I have a rule that any commit which changes the implementation has to include the documentation update at the same time.
Most of these documentation updates are a sentence or two, or maybe a paragraph. The overhead of incremental documentation updates like that is tiny enough that I don't really think about assigning extra time for them.
This is also why I write a formal requirements document for all but the smallest throw-away projects. Much easier to know wtf you were thinking 18 months ago if you write down wtf you were thinking at the time.
If you know what you are doing, you can hibernate other kinds of tortoises by placing them in a fridge (as opposed to a freezer). One of my friends does this with their Russian tortoise.
If you need to travel, make sure you have someone reliable who can check on them, in case of a power outage.
Django is objectively the most productive "boring technology" I've ever worked with for developing web applications. They don't regularly add too many bells and whistles on every release, but they keep it stable and reasonably backwards compatible.
Though I must admit that for maybe more complex applications, I feel like aspnetcore also fulfills this definition. I feel like it's easier to create something more complex with aspnetcore while still keeping the code boring and opinionated.
I feel like Django, for bigger apps, falls apart on the "opinionated" side. For "simple" websites, you can't go wrong, but for anything really big, basically everyone invents their own project structure.
But don’t get me wrong, I still love Django for what it is and it’s my first love in web frameworks anyway.
And I’d go further and say that the Django documentation is so awesome, that 15 years ago, it was where I learnt how websites/http/etc… really worked.
As a mostly-Django-dev for the last 15 years, who's been exposed to FastAPI and various ORMs again recently, I should get round to writing a doc about some Django bits.
Django is pretty nice, the changes between versions are small and can be managed by a human.
Part of the reason that you can have the big ecosystem is that there is a central place to register settings and INSTALLED_APPS, middleware etc.
That enables addons to bring their own templates and migrations.
There is a central place a bit further up in manage.py and that enables you to bring commandline extras to Django (and many of the things you install will have them).
Coming to a FastAPI app with alembic and finding a lot of that is build-it-yourself (and easy to break) is a bit of a shock.
The Django ORM at first can seem a little alien "why isn't this sqlalchemy" was my reaction a long time ago, but the API is actually pretty pragmatic and allows easy extension.
You can build up some pretty complex queries, and keep them optimised using the Django-Debug-Toolbar and its query viewer.
The ORM, Templates and other parts of Django pre-date many newer standards which is why they have their own versions. As a Django dev I only just discovered the rest of the world has invented testcontainers, and databases as a solution for a problem Django solved years ago with its test database support.
I quite like the traditional setup where you have settings/common.py and then settings that extend that - e.g. local.py, production.py
If you ever need a CMS in your Django project I strongly recommend Wagtail, it came after the initially most popular django-cms and learned a lot of lessons - feeling much more like a part of Django.
It has the same feeling of being productive as Django does when you first use it.
> As a Django dev I only just discovered the rest of the world has invented testcontainers, and databases as a solution for a problem Django solved years ago with its test database support.
Testing an API with model-bakery + pytest-django is absolutely joyous. As a TDD nerd, the lack of any remotely similar dev ex in FastAPI is the main reason I’ve never switched over.
As an aside, as someone who loves ergonomic testing, test containers are not the way. Dockerized services for testing are fine but their management is best done external to your test code. It is far easier to emulate prod by connecting to a general DB/service url that just happens to be running in a local container than have a special test harness that manages this internally to your test suite.
I believe it's now accurate to even say "decades ago".
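For anyone who hasn't seen the model-bakery + pytest-django combination mentioned above, a test tends to look roughly like this (the Order model and URL are invented for the example):

    # test_orders.py -- requires pytest-django and model-bakery
    import pytest
    from model_bakery import baker

    @pytest.mark.django_db
    def test_order_list_returns_only_open_orders(client):
        open_order = baker.make("shop.Order", status="open")   # baker fills required fields for you
        baker.make("shop.Order", status="closed", _quantity=3)

        response = client.get("/api/orders/?status=open")

        assert response.status_code == 200
        ids = [row["id"] for row in response.json()]
        assert ids == [open_order.id]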
> Coming to a FastAPI app with alembic and finding a lot of that is build-it-yourself (and easily break it) is a bit of a shock.
I briefly played with FastAPI, and after the same shock I discovered Django-Ninja [1]. It's modeled after FastAPI and async-capable (if you are inclined, but warning, there be dragons). It plays nicely with all parts of Django, including the ORM.
[1] https://django-ninja.dev/
>If you ever need a CMS in your Django project I strongly recommend Wagtail, it came after the initially most popular django-cms and learned a lot of lessons - feeling much more like a part of Django.
Nope. I would choose plain Django 100% of the time, especially with LLMs. Wagtail is an antipattern.
It's crazy to me after all these years that django-like migrations aren't in every language. On the one hand they seem so straightforward and powerful, but there must be some underlying complexity to having them autogenerate migrations.
It's always a surprise when I go to Elixir or Rust and the migration story is more complicated and manual compared to just changing a model, generating a migration and committing.
In the pre-LLM world, I was writing ecto files, and it was super repetitive to define large database structures compared to Django.
Going from Django to Phoenix I prefer manual migrations. Despite being a bit tedious and repetitive, by doing a "double pass" on the schema I often catch bugs, typos, missing indexes, etc. that I would have missed with Django. You waste a bit of time on the simple schemas, but you save a ton of time when you are defining more complex ones. I lost count of how many bugs were introduced because someone was careless with Django migrations, and it is also surprising that some Django devs don't know how to translate the migrations to the SQL equivalent.
At least you can opt-in to automated migrations in Elixir if you use Ash.
There are some subtle edge cases in the django migrations where doing all the migrations at once is not the same as doing migrations one by one. This has bitten me on multiple django projects.
There's a pre, do and post phase for the migrations. When you run a single migration, it's: pre, do, post. When you run 2 migrations, it's: pre [1,2], do: [1,2], post: [1,2].
So, if you have a migration that depends on a previous migration's post phase, then it will fail if it is run in a batch with the previous migration.
Where I've run into this is with data migrations, or if you're adding/assigning permissions to groups.
Did you mean migration signals (pre_migrate and post_migrate)? They are only meant to run before and after the whole migration operation, regardless of how many steps are executed. They don't trigger for each individual migration operation.
The only catch is they will run multiple times, once for each app, but that can also be prevented by passing a sender (e.g. `pre_migrate.connect(pre_migrate_signal_handler, sender=self)` if you are registering them in your AppConfig.ready method).
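In code, that registration looks roughly like this (app and handler names are placeholders):

    # myapp/apps.py
    from django.apps import AppConfig
    from django.db.models.signals import pre_migrate

    def pre_migrate_signal_handler(sender, app_config, using, **kwargs):
        # runs once before the whole `migrate` command applies its operations
        ...

    class MyAppConfig(AppConfig):
        default_auto_field = "django.db.models.BigAutoField"
        name = "myapp"

        def ready(self):
            # passing sender=self limits the signal to this app,
            # so the handler doesn't fire once per installed app
            pre_migrate.connect(pre_migrate_signal_handler, sender=self)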
Does that affect the autogenerated migrations at all? The only time I ran into that issue was if I generated a table, created a data migration and then it failed because the table was created in the same transaction. Never had a problem with autogenerated migrations.
Well, in Elixir you can have two schemas for the same table, which could represent different views, for example an admin view and a user view. This is not (necessarily) for security, but it reduces the number of columns fetched in the query to only what you need for the purpose.
There is no way to autogenerate migrations that work in all cases. There are lots of things out there that can generate migrations that work for most simple cases.
They don't need to work in every case. For the past ~15 years, 100% of the autogenerated migrations for creating tables, columns or column renames I have made just worked, and I have made thousands of migrations at this point.
The only things to migrate manually are data migrations from one schema to the other.
I am quite surprised that most languages do not have an ORM and migrations as powerful as Django's. I get that it's Python's dynamic metaprogramming that makes it such a clean API - but I am still surprised that there isn't much that comes close.
I found it very lacking in how to do CD with no downtime.
It requires a particular dance if you ever want to add/delete a field and make sure both new-code and old-code work with both new-schema and old-schema.
The workaround I found was to run tests with new-schema+old-code in CI when I have schema changes, and then `makemigrations` before deploying new-code.
Are there better patterns beyond "oh you can just be careful"?
https://rtpg.co/2021/06/07/changes-checklist.html
I've been meaning to write an interactive version to sort of "prove" that you really can't do much better than this, at least in general cases.
I simplify it this way: I don't delete fields or tables in migrations once an app is in production; I only manually clean them up after they can no longer be used by any production version. I treat the database schema as if it were "append only" - only add new fields. This means you always "roll forward" a database; rollback migrations are "not a thing" to me. I don't rename physical columns in production. If you need an old field and a new field that represent the same datum to run simultaneously, a trigger keeps them in sync.
This is not specific to Django, but to any project using a database. Here's a list of a couple of quite useful resources I used when we had to address this:
* https://github.com/tbicr/django-pg-zero-downtime-migrations
* https://docs.gitlab.com/development/migration_style_guide/
* https://pankrat.github.io/2015/django-migrations-without-dow...
* https://www.caktusgroup.com/blog/2021/05/25/django-migration...
* https://openedx.atlassian.net/wiki/spaces/AC/pages/23003228/...
Generally it's also advisable to set a statement timeout for migrations otherwise you can end up with unintended downtime -- ALTER TABLE operations very often require ACCESS EXCLUSIVE lock, and if you're migrating a table that already has an e.g. very long SELECT operation from a background task on it, all other SELECTs will queue up behind the migration and cause request timeouts.
There are some cases you can work around this limitation by manually composing operations that require less strict locks, but in our case, it was much simpler to just make sure all Celery workers were stopped during migrations.
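On Postgres, one way to apply that advice is to set a lock timeout at the top of the risky migration, so the ALTER TABLE gives up instead of queueing everything behind it (a sketch; the timeout value and the app/migration names are made up):

    # myapp/migrations/00XX_risky_schema_change.py (PostgreSQL-specific sketch)
    from django.db import migrations

    class Migration(migrations.Migration):
        dependencies = [("myapp", "0010_previous")]

        operations = [
            # SET LOCAL applies only for the duration of this migration's transaction
            migrations.RunSQL(
                "SET LOCAL lock_timeout = '5s';",
                reverse_sql=migrations.RunSQL.noop,
            ),
            # ... the actual AlterField / AddIndex operations follow ...
        ]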
1. Make a schema migration that will work both with old and new code
2. Make a code change
3. Clean up schema migration
Example: deleting a field:
1. Schema migration to make the column optional
2. Remove the field in the code
3. Schema migration to remove the column
Yes, it's more complex than creating one schema migration, but that's the price you pay for zero-downtime. If you can relax that to "1s downtime midnight on sunday", you can keep things simpler. And if you do so many schema migrations you need such things often ... I would submit you're holding it wrong :)
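For the field-deletion example above, the intermediate deploy in Django terms is roughly this (app, model and field names invented):

    # deploy 1: keep the column but stop requiring it
    # myapp/migrations/0011_make_legacy_code_nullable.py
    from django.db import migrations, models

    class Migration(migrations.Migration):
        dependencies = [("myapp", "0010_previous")]
        operations = [
            migrations.AlterField(
                model_name="order",
                name="legacy_code",
                field=models.CharField(max_length=32, null=True, blank=True),
            ),
        ]

    # deploy 2: remove the field from models.py; code no longer reads or writes it

    # deploy 3: drop the column for real
    # myapp/migrations/0012_remove_legacy_code.py
    #     operations = [migrations.RemoveField(model_name="order", name="legacy_code")]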
I'm doing all of these and none of it works out of the box.
Adding a field needs a db_default, otherwise old-code fails to `INSERT`. You need to audit all the `create`-like calls otherwise.
Deleting similarly will make old-code fail all `SELECT`s.
For deletion I need a special 3-step dance with managed=False for one deploy. And for all of these I need to run old-tests on new-schema to see if there's some usage any member of our team missed.
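On the add-a-field side, a database-level default is what lets old code keep INSERTing while the new column already exists; in Django 5.0+ that's the db_default argument. A minimal sketch with made-up names:

    # models.py -- field added in deploy N; code from deploy N-1 never mentions it
    from django.db import models

    class Order(models.Model):
        ...
        # db_default (Django 5.0+) puts the default in the database itself,
        # so INSERTs that omit the column still succeed
        priority = models.IntegerField(db_default=0)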
One option is to do multi-stage rollout of your database schema and code, over some time windows. I recall a blog post here (I think) lately from some Big Company (tm) that would run one step from the below plan every week:
1. Create new fields in the DB.
2. Make the code fill in the old fields and the new fields.
3. Make the code read from new fields.
4. Stop the code from filling old fields.
5. Remove the old fields.
Personally, I wouldn't use it until I really need it. But a simpler form is good: do the required schema changes (additive) iteratively, 1 iteration earlier than code changes. Do the destructive changes 1 iteration after your code stops using parts of the schema. There's opposite handling of things like "make non-nullable field nullable" and "make nullable field non-nullable", but that's part of the price of smooth operations.
Deploying on Kubernetes using Helm solves a lot of these cases: migrations are run at the init stage of the pods. If successful, pods of the new version are started one by one, while the pods of the old version are shut down. For a short period, you have pods of both versions running.
When you add new stuff or make benign modifications to the schema (e.g. add an index somewhere), you won't notice a thing.
If the introduced schema changes are not compatible with the old code, you may get a few ProgrammingErrors raised from the old pods, before they are replaced. Which is usually acceptable.
There are still some changes that may require planning for downtime, or some other sort of special handling. E.g. upgrading a SmallIntegerField to an IntegerField in a frequently written table with millions of rows.
A request not being served can happen for a multitude of reasons (many of them totally beyond your control) and the web architecture is designed around that premise.
So, if some of your pods fail a fraction of the requests they receive for a few seconds, this is not considered downtime for 99% of the use cases. The service never really stopped serving requests.
The problem is not unique to Django by any means. If you insist on being a purist, sure count it as downtime. But you will have a hard time even measuring it.
The general approach is to do multiple migrations (add first and make new-code work with both, deploy, remove old-code, then delete old-schema) and this is not specific to Django's ORM in any way, the same goes for any database schema deployment. Take a peek at https://medium.com/@pranavdixit20/zero-downtime-migrations-i... for some ideas.
Oh, the automatic migrations scare the bejesus out of me. I really prefer writing out schemas and migrations like in Elixir/Ecto. Plus I like the option of having two different schemas for the same table (even if I never use it)
You can ask Django to show you what exact SQL will run for a migration using `manage.py sqlmigrate`.
You can run raw SQL in a Django migration. You can even substitute your SQL for otherwise autogenerated operations using `SeparateDatabaseAndState`.
You have a ton of control while not having to deal with boilerplate. Things usually can just happen automatically, and it's easy to find out and intervene when they can't.
https://docs.djangoproject.com/en/6.0/ref/django-admin/#djan...
https://docs.djangoproject.com/en/6.0/ref/migration-operatio...
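For example, to hand-write the SQL for one operation while keeping Django's recorded model state consistent (so later autogenerated migrations still line up), something like this works; all names here are placeholders, and the concurrent index is just one common reason to do it:

    # myapp/migrations/0013_hand_written_index.py -- illustrative names
    from django.db import migrations, models

    class Migration(migrations.Migration):
        atomic = False  # CREATE INDEX CONCURRENTLY can't run inside a transaction
        dependencies = [("myapp", "0012_previous")]

        operations = [
            migrations.SeparateDatabaseAndState(
                # what actually runs against the database
                database_operations=[
                    migrations.RunSQL(
                        "CREATE INDEX CONCURRENTLY idx_order_created ON myapp_order (created_at);",
                        reverse_sql="DROP INDEX idx_order_created;",
                    ),
                ],
                # what Django records in its model state, so makemigrations stays consistent
                state_operations=[
                    migrations.AddIndex(
                        model_name="order",
                        index=models.Index(fields=["created_at"], name="idx_order_created"),
                    ),
                ],
            ),
        ]

And `python manage.py sqlmigrate myapp 0013` prints the SQL that will actually run.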
The nice thing in this case is that Django will meet you where you are with your preferences. Want to go the manual route? Sure. Want it to take a shot at auto-generation and then customize the result? Very doable. Want to let Django take the wheel fully the majority of the time? Sure.
Is this like the "it takes 50 hours to set up a project management tool to work the way you want"? What happens if you onboard a superstar that works with Django some other way?
No. Django is very good at having the autogenerated/default stuff be consistent with what you do if you want to write manually, it's not one of those "if you want to use the magic as-is it all just works, if you want to customize even one tiny piece you have to manually replicate all of the magic parts" frameworks.
Either way the end result is a single file in migrations/ that describes the change, though you do have to write it with Django's API if you want further migrations to work without issues (so no raw SQL, but this low-level API is things like CreateModel() and AddField() - and is what Django generates automatically from the models, so the auto-generated migrations are easily inspectable and won't change).
> what happens if you onboard a superstar that works with django some other way
If you hired a "superstar" that goes out of their way to hand-write migrations in cases where Django can do it by default (the majority of them) you did not in fact get a superstar.
I have yet to see anyone hand-roll migrations on purpose. In fact the problem is usually the opposite: the built-in migration generator works so well that a lot of people have very little expertise in doing manual migrations, because they maybe had to do it like 5 times in their entire career.
I have never done it, but I believe you could set up multiple schemas under the same database - by faking them as different databases and then using a custom router to flip between them as you like.
That sounds like the path to madness, but I do believe it would work out of the box.
It is not much code to set up the router. Now, why you would want to bounce between schemas, I do not have a good rationale, but whatever floats your boat.
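The router really is small; a rough sketch, assuming two database aliases ("default" and "reporting") defined in DATABASES and a "reporting" app whose models live in the second schema:

    # project/routers.py
    class SchemaRouter:
        route_app_labels = {"reporting"}

        def db_for_read(self, model, **hints):
            if model._meta.app_label in self.route_app_labels:
                return "reporting"
            return "default"

        def db_for_write(self, model, **hints):
            return self.db_for_read(model, **hints)

        def allow_migrate(self, db, app_label, model_name=None, **hints):
            if app_label in self.route_app_labels:
                return db == "reporting"
            return db == "default"

    # settings.py
    # DATABASE_ROUTERS = ["project.routers.SchemaRouter"]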
Yeah, some frameworks call these "lenses". There are even crazy people who write lenses on top of Elixir schemas because they don't realize you can just have multiple schemas.
Maybe more concretely: if you have a table with a kajillion columns and you want performant views onto some columns (e.g. "give me the metadata only and don't show me the blob columns") without pulling down the entire jungle in the SQL request, there's that.
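In Django the usual equivalent is defer()/only() or values() on the queryset rather than a second schema; e.g. (model and field names made up):

    # assuming a Document model with a large body_blob column

    # fetch metadata columns only; the blob column is never selected
    Document.objects.only("id", "title", "created_at")

    # or skip the heavy columns explicitly
    Document.objects.defer("body_blob")

    # or drop to plain dicts when model instances aren't needed
    Document.objects.values("id", "title")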
Cool stuff. I just started open sourcing a command-line tool for deploying Django to a server. It handles SSL certs, databases and backups, automatic error emails, and background tasks via celery / redis. The best part? It does not need Docker. It just runs everything on bare metal.
1: https://github.com/mherrmann/djevops
Naively, I would probably just copy the sqlite file. Is that a bad idea?
VACUUM INTO eliminates that risk.
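For SQLite specifically, a consistent snapshot can be taken from Python without copying a file that might be mid-write; a sketch:

    import sqlite3

    # VACUUM INTO (SQLite >= 3.27) writes a consistent snapshot to a new file,
    # taken from a read transaction, so a concurrent writer can't corrupt the copy
    con = sqlite3.connect("sqlite.db")
    con.execute("VACUUM INTO 'backup-sqlite.db'")
    con.close()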
Thanks for this! I wish there were more cross-comparisons like this out there of what it is actually like to use some of these frameworks, the note on Django being a little less magic than Rails makes me genuinely interested in it.
After spending a lot of my time on Django, it's fine for simple to moderately complex things. The ORM is mostly good. DRF is fine for APIs. And the admin is super nice as well.
But once something gets significantly complex, the ORM starts to fall down, and DRF becomes more of a hindrance.
But if you're just doing simple CRUD apps, Django is perfectly serviceable.
Vague arguments like this are categorically useless.
What does significantly complex mean though? You have to make sure you understand the queries made by the ORM, avoid pitfalls like SELECT N+1 queries and so on. If you don't do this, it'll be slow but it's not the ORM's fault - it's that of the programmer.
Significantly complex means when the ORM layer starts to become bigger and bigger and you need multiple threads and more complex processes that run in workers. When you start to run into scaling problems, your solution is within that framework and that becomes a limiting factor, from my experience.
Then as a programmer, you have to find workarounds in Django instead of workarounds with programming.
PS: Dealing with a lot of scaling issues right now with a Django app.
The framework itself is not the limiting factor. The main constraint of performance usually comes from Python itself (really slow). And possibly I/O.
There are well established ways to work around that. In practice, lots of heavy lifting happens in the DB, and you can offload workloads to separate processes as well (whether those are Python, Go, Rust, Java etc).
You need to identify the hotspots, and blindly trusting a framework to "do the job for you" (or for that matter, trusting an LLM to write the code for you without understanding the underlying queries) is not a good idea.
I'm not saying you are doing that, but how often do you use the query planner? Whenever I've heard someone saying Django can't scale, it's not Django's fault.
> When you start to run into scaling problems, your solution is within that framework and that becomes a limiting factor from my experience.
Using Django doesn't mean that everything needs to run inside of it. I am working on an API that needs async perf, and I run separate FastAPI containers while still using Django to maintain the data model + migrations.
Occasionally I will drop down to raw SQL, or materialized views (if you are not using them with Django, you are missing out). And the obvious for any Django dev; select_related, prefetch_related, annotate, etc etc.
> And the obvious for any Django dev; select_related, prefetch_related, annotate
And sometimes not so obvious: I have been bitten by forgetting one select_related while inadvertently joining 5 tables but using only 4 select_related calls: the tests work OK, but the real data has a number of records that cause an N+1. A request that used to take 100ms now hits a "30 seconds timeout" from time to time.
Once we added the missing select_related we went back to sub-second requests, but it was very easy to start blaming Django itself because the number of records to join was getting high.
The cases where we usually step off the Django path are serializations and representations, trying to avoid the creation of intermediate objects when we only need the "values()" return.
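For completeness, the shape of the fix in that situation looks something like this (model and relation names invented):

    # one JOINed query instead of 1 + N queries for the FK hops
    orders = (
        Order.objects
        .select_related("customer", "customer__account", "shipping_address")  # FK / OneToOne
        .prefetch_related("items")                                            # reverse FK / M2M
    )

    # and when only a plain representation is needed, skip model instances entirely
    rows = orders.values("id", "customer__name", "shipping_address__city")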
You may already know this; this is meant for others hitting this issue, frankly.
In Django, you can count the number of queries in a unit test. You don't need 1M objects in the unit test, but maybe 30 in your case.
If the unit code uses more than X queries, then you should assume you have an N+1 bug. Like if you have 3 prefetch related and 2 select related's on 30 objects, but you end up with more than 30 queries, then you have an N+1 someplace.
Even better, that unit test will protect you from hitting that error in the future in that chunk of code accessing that table.
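Django's TestCase supports this directly with assertNumQueries, and pytest-django ships an equivalent fixture; roughly (endpoint and query counts are placeholders):

    # with Django's TestCase
    from django.test import TestCase

    class OrderListQueryCount(TestCase):
        def test_list_is_not_n_plus_1(self):
            # ... create ~30 orders with related rows here ...
            with self.assertNumQueries(4):  # fails loudly if an N+1 sneaks in
                self.client.get("/api/orders/")

    # or the pytest-django equivalent
    import pytest

    @pytest.mark.django_db
    def test_list_is_not_n_plus_1(client, django_assert_num_queries):
        with django_assert_num_queries(4):
            client.get("/api/orders/")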
Yeah, I don't get the issues here. I've led projects that served millions of requests a day, had dozens of apps, and while there are always going to be pain points and bottlenecks, nothing about the framework itself is a hindrance to refactoring. If anything, Django plus good tests made me much braver about what I would try.
> Then as a programmer, you have to find workarounds in Django instead of workarounds with programming.
The mental unlock here is: Django is only a convention, not strictly enforced. It’s just Python. You can change how it works.
See the Instagram playbook. They didn’t reach a point where Django stopped scaling and move away from Django. They started modifying Django because it’s pluggable.
As an example, if you’re dealing with complex background tasks, at some point you need something more architecturally robust, like a message bus feeding a pool of workers. One simple example could be, Django gets a request, you stick a message on Azure Service Bus (or AWS SQS, GCP PubSub, etc), and return HTTP 202 Accepted to the client with a URL they can poll for the result. Then you have a pool of workers in Azure Container Apps (or AWS/GCP thing that runs containers) that can scale to zero, and gets woken up when there’s a message on the service bus. Usually I’d implement the worker as a Django management command, so it can write back results to Django models.
Or if your background tasks have complex workflow dependencies then you need an orchestrator that can run DAGs (directed acyclic graph) like Airflow or Dagster or similar.
These are patterns you’d need to reach for regardless of tech stack, but Django makes it sane to do the plumbing.
The lesson from Instagram is that you don’t have to hit a wall and do a rewrite. You can just keep modifying Django until it’s almost unrecognizable as a Django project. Django just starts you with a good convention that (mostly) prevents you from doing things that you’ll regret later (except for untangling cross-app foreign keys, this part requires curse words and throwing things).
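A stripped-down sketch of that 202-and-poll shape; TaskResult, bus.send/bus.receive and build_report are stand-ins for your own model, whichever queue SDK you use, and the actual work:

    # views.py -- sketch only
    import json
    import uuid

    from django.http import JsonResponse

    from myapp.models import TaskResult
    from myapp import bus  # hypothetical thin wrapper around SQS / Service Bus / Pub/Sub

    def start_report(request):
        task_id = str(uuid.uuid4())
        TaskResult.objects.create(id=task_id, status="pending")
        bus.send(json.dumps({"task_id": task_id, "params": request.GET.dict()}))
        return JsonResponse({"poll": f"/api/reports/{task_id}/"}, status=202)

    # management/commands/run_worker.py (same imports as above, plus:)
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Consume report jobs from the message bus"

        def handle(self, *args, **options):
            for message in bus.receive():  # blocks / long-polls the queue
                payload = json.loads(message)
                result = build_report(payload["params"])  # the actual work, defined elsewhere
                TaskResult.objects.filter(id=payload["task_id"]).update(status="done", result=result)

The worker being a management command is what gives it full access to the Django models for writing results back.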
If you're doing simple CRUD apps, try https://iommi.rocks/ which we built because IMO it's way way too slow and produces too much code to use standard Django to make CRUD stuff.
I am not using the main menu module, but tables and forms work really well.
That's a huge bonus point for Django. It's so prevalent that Claude/Codex are very good at setting it up the right way, using tried and true patterns.
I've been vibe coding some side projects with Claude Code + Django + htmx/tailwind, and when it's time to do some manual work in the codebase I know exactly where things are and what they do; there are way fewer of the weird patterns or hacks Claude tends to produce when it's not as guided
No kidding, it is really good especially with htmx which helps you get some of the advantages of a full SPA without the complexity of a separate frontend.
I've been building a project on the side to help my studies and it usually implements complete new apps from one prompt, working on the first try
Yeah, I've noticed it regularly suggests htmx (and perhaps something light like alpinejs or some vanilla JS glue logic) to build powerful yet simple interfaces in Django. And it seems to get them right - saving you a lot of time.
It is probably good at HTMX for the same reason it is good at Tailwind CSS; HTMX puts the functionality on the elements being reasoned about (e.g. click this button, load the result here).
In hindsight, maybe I should've tried to use Django for my previous project instead of build a lot of custom stuff in Go and React. It was basically an admin interface, but with dozens of models and hundreds if not thousands of individual fields, each with their own validation / constraints. But it was for internal users, so visually it mainly needed to be clear.
I've lobbied to replace our internal tool with a django admin panel. I prototyped it and it showed that it would reduce our code by > 15k lines.
Any internal webapps I need to build like this will 100% be set up with django in the future due to this. I don't need it to be pretty, I just want the UI, database migrations, users, roles, groups, etc for free
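The "UI for free" part is mostly just registering models with the admin; e.g. (model and field names invented):

    # admin.py
    from django.contrib import admin

    from .models import Customer, Invoice

    @admin.register(Invoice)
    class InvoiceAdmin(admin.ModelAdmin):
        list_display = ("number", "customer", "status", "total", "created_at")
        list_filter = ("status",)
        search_fields = ("number", "customer__name")

    admin.site.register(Customer)  # plain registration is enough for simple models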
The author makes a great last point about Settings and it’s something I’ve not considered… ever! I wonder if there’s a feature request for this because having a pre-configured object would be nice for the ability to verify correctness on startup.
I use a project generator tool for a Django project. One of the things it does is generate the settings file using string manipulation. I have been trying to think of a saner way to do this: leverage something like dataclasses or Pydantic models to have the typing information available and render a typed and validated Python object. If Django ever made that possible, it would be amazing for dev ex.
In TypeScript, I use the same validation library (Zod) anywhere I need to validate data. So, I validate my config / environment variables on startup using a Zod schema, I validate my RPC endpoint arguments the same way, etc.
I presume you could do the same thing with Django— use Django’s validation feature to validate everything including your config. It’s a nice pattern that gives uniformity and predictability to all of your validation logic.
Not really, unfortunately. The thing is, if you mistype a configuration key, Django won’t pick it up. It’ll just leave the default value in place. I also don’t think it does any validation on settings values, it’ll just pass them to whatever uses them. That’s the last time I used it anyway.
The situation is worse than that because any plugins usually define their own settings which also don’t validate their contents.
I think something centralised that lets you properly scope and validate settings would be nice. If you mistype a key, you’d want an error that it’s just not valid.
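You can get part of the way there today with Django's system check framework, which runs at startup and on manage.py check; a sketch that flags unknown keys under an assumed prefix (the prefix, app and error id are illustrative):

    # myapp/checks.py
    from django.conf import settings
    from django.core import checks

    KNOWN_MYAPP_SETTINGS = {"MYAPP_API_URL", "MYAPP_TIMEOUT"}

    @checks.register()
    def check_myapp_settings(app_configs, **kwargs):
        errors = []
        for name in dir(settings):
            if name.startswith("MYAPP_") and name not in KNOWN_MYAPP_SETTINGS:
                errors.append(checks.Error(
                    f"Unknown setting {name!r}; did you mistype it?",
                    id="myapp.E001",
                ))
        return errors

    # import this module from AppConfig.ready() so the check gets registered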
After working with Django for 8 years, I find it hard to move on to anything else. It's just the right amount of magic, and just the right amount of flexibility, and it's just such a joy to work with.
Re: Django is OK for simple CRUD, but falls apart on anything complex - this is just untrue. I have worked in a company with a $500M valuation that is backed by a Django monolith. Reporting, recommender systems, file ingestion pipelines, automatic file tagging with LLM agents -- everything lives inside Django apps and interconnects beautifully. Just because it's a Django app doesn't mean you cannot use other libraries and do other stuff besides basic HTTP request processing.
Recently I had the misfortune of doing a contract on a classic SPA project with Flask and sqlalchemy on the backend and React on the frontend, and the amount of code necessary to add a couple of fields to a form is boggling.
> Recently I had the misfortune of doing a contract on a classic SPA project with Flask and sqlalchemy on the backend and React on the frontend, and the amount of code necessary to add a couple of fields to a form is boggling.
Same here, and the reason to do all the Flask + SQLAlchemy + React was to keep things simple, as they are simple tools but Django is a complex tool. In particular the Flask part was juggling plugins for admin, forms and templates that Django already has included. But yeah, I am sure it is easier to code and to maintain because Flask is made for simple sites :/.
> Re: Django is OK for simple CRUD, but falls apart on anything complex
Maybe my experience of working with Django on complex applications has coloured my view on it a bit, but I always think the opposite; it seems overkill for simple CRUD, even if I love using it
How do the apps "interconnect"? In my experience, unless you're careful, a Django monolith quickly becomes a big ball of mud. I've recently started to use Tach to try to combat this.
IMO, type annotations should only be omitted in obvious cases or simple/MVP projects.
IMO Django is a buggy and poorly designed framework that locks you into bad decisions.
It's a combination of things that all suck:
- ORM (sqlalchemy is better in every possible way. django's orm is very poor and can't express a lot of sql constructs at all)
- templates (jinja2 is basically identical except performant and debuggable)
- routing (lots of wsgi routers exist and are lightyears ahead of django)
Don't use Django.
Reference (me saying the same thing 16 years ago): https://news.ycombinator.com/item?id=1490415
You still need clear separation between frontend and backend (react server components notwithstanding), so nothing's stopping you from using Python on the backend if you prefer it.
I also do not see much reason to do more than emit JSON on the server side.
Django with DRF or django-ninja works really nice for that use case.
Well... that's a valid reason. Why should I work with tool B when I prefer tool A?
> I also do not see much reason to do more than emit JSON on the server side.
That's the "SPA over API" mindset we need to reconsider. A lot (and I mean A LOT) of projects are way easier to produce and maintain with server-side rendered views.