Has anyone here tried this in practice? It sounds appealing, but the practical lack of tooling around version control, deployment, logging, and debugging worries me.
The article is a bit basic, using a very limited set of relational database features. There are entire banks and other large systems built entirely with stored procedures. Has anyone tried? Well, everyone, back in the day.
A few years ago, I showed a couple of young engineers how we could ditch some 100k LOC worth of backend code with 12 stored procedures of 20 to 100 LOC each. We exposed them via HTTP using PostgREST. They were very negative, complaining that it was old, useless tech. Only by the end of the week, when they got it done, did they realize what an enormous gain it was.
Edit: I have no idea where the myths about missing version control, logging, and even debugging come from. You naturally have all these things. Perhaps not a step-by-step debugger, although I never looked for one.
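To give a flavour of it, a function along these lines (all names invented, and assuming PostgREST is configured to expose the api schema) becomes a POST /rpc/create_order endpoint with no backend code at all:

    -- Hypothetical sketch: schema, tables, and columns are made up.
    -- PostgREST exposes functions in its configured schema under /rpc/.
    create function api.create_order(p_customer_id bigint, p_items jsonb)
    returns bigint
    language plpgsql
    as $$
    declare
      new_id bigint;
    begin
      insert into orders (customer_id, created_at)
      values (p_customer_id, now())
      returning id into new_id;

      insert into order_items (order_id, product_id, quantity)
      select new_id,
             (item ->> 'product_id')::bigint,
             (item ->> 'quantity')::int
      from jsonb_array_elements(p_items) as item;

      return new_id;
    end;
    $$;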
I’ve seen it used, it’s super old school, and all of the concerns you mention are real and relevant. It results in very tight coupling between business logic/rules and the underlying state/data, and hoo boy that data had better be properly relational or else you’re in for a world of pain. It can be a good idea, in certain contexts, but IME those contexts are few and far between.
I think version control and deployment would be pretty easy to manage with bog-standard migrations.
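Something like this in a plain-SQL migration file (file name, table, and function are all made up) versions and deploys exactly like any other schema change:

    -- 0042_add_order_total.sql -- hypothetical migration file
    create or replace function order_total(p_order_id bigint)
    returns numeric
    language sql
    stable
    as $$
      select coalesce(sum(quantity * unit_price), 0)
      from order_items
      where order_id = p_order_id;
    $$;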
Testing is fairly easy too and there are libraries for directly testing database queries.
Logging and debugging, idk.
The author said they wanted to use Postgres functions because they kept changing their application language, so I think they probably ended up cooling on Postgres functions too.
Lots of successful applications are written in databases, tho. Look up MUMPS / GT.M.
I’ve worked with and built a few relatively heavily programmed Postgres databases. At my current job we stand a GraphQL API up over it with Postgraphile and do everything from access control with row-level security to EEG labeling task management in the database. We build Postgraphile plugins for things not amenable to SQL like JSON tree aggregation. It’s great for us, with a low baseline write load and bursts from scientific dataset transfers; it won’t apply as well to every situation out there although I think it’s generally useful at least to consider.
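For what it's worth, the access-control side can be as small as this (table, column, and setting names are made up; in practice the per-request user id comes from the JWT):

    -- Hypothetical sketch: hide rows that don't belong to the current user.
    alter table eeg_labels enable row level security;

    create policy eeg_labels_owner on eeg_labels
      using (owner_id = current_setting('app.current_user_id', true)::bigint);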
The 1990s style “procs are your CRUD interface” approach is mostly an artifact of the pre-RLS era. It can still be useful if you want to provide a not-quite-CRUD interface to clients: for example, we replace insert with upsert and offer dedicated operations around dataset membership. The critical component to making it work well imo is to keep the “standard” update_x type wrappers simple – they should do just one thing, and data integrity + correctness checks go in constraints not procedural magic. The biggest problems come up when different rules apply to a wrapped invocation than to a DBA-written SQL statement.
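A made-up sketch of what I mean by keeping the wrapper thin and letting constraints carry the integrity rules:

    -- All names invented. The wrapper only performs the upsert; uniqueness,
    -- valid roles, and referential integrity are declared on the table.
    create table datasets (id bigint primary key);
    create table subjects (id bigint primary key);

    create table dataset_members (
      dataset_id bigint not null references datasets (id),
      subject_id bigint not null references subjects (id),
      role       text   not null check (role in ('control', 'patient')),
      primary key (dataset_id, subject_id)
    );

    create function upsert_dataset_member(
      p_dataset_id bigint, p_subject_id bigint, p_role text
    ) returns void
    language sql
    as $$
      insert into dataset_members (dataset_id, subject_id, role)
      values (p_dataset_id, p_subject_id, p_role)
      on conflict (dataset_id, subject_id)
        do update set role = excluded.role;
    $$;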
Tooling-wise, testing with pgTAP is great, there are monitoring solutions like pgbadger out there, and there's even a pldebugger, albeit built into pgAdmin. Commercial databases do better in some areas, but version control integration is the same for everyone – some interesting ideas out there, but managing schema changes is never quite seamless.
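A pgTAP test (reusing the made-up names from the sketch above) is just SQL run inside a rolled-back transaction, typically driven by pg_prove:

    -- Assumes the pgtap extension is installed in the test database.
    begin;
    select plan(2);

    select has_function('upsert_dataset_member', 'wrapper function exists');
    select col_is_pk('dataset_members', array['dataset_id', 'subject_id'],
                     'membership is unique per dataset/subject');

    select * from finish();
    rollback;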
We do this in our shop. The tools haven’t been a problem. However, we have had problems with how “viral” this has been. Once important code was in the database, we started writing more database code to use it. Ultimately, we have built systems into the database that shouldn’t be there.