1. 61

All of the major relational client-server databases (MySQL, PostgreSQL, Oracle, DB2, MS-SQL, etc.), which make up the vast majority of client-server database installations, come with extremely rich access-control mechanisms.

However, in my 20+ years of working with them, the vast majority of the time I’ve only ever seen a single username/password combination with complete access to the DB. Occasionally I’ve seen ACLs on client IPs or the like, and once or twice I’ve seen a distinct read-only user.

Never, not once in all that time, on dozens of projects from small clients to Fortune 500 companies to major government entities, have I ever seen a per-{table,view,row} ACL. Not one time in all of the projects I’ve written myself, inherited from someone else, or been asked to look at, have I seen anything richer.

So, who out there has used these features? What have you used them for? How did that impact your application/environment? How did you decide to use them? Do they provide an expected benefit or are they more trouble than they’re worth?

    1. 22

      I agree lots of people don’t, because they never even bother to learn anything past GRANT ALL…

      So, who out there has used these features?

      We use PG’s row-level security exclusively. It is 100% worth the introduction pain. Every user of our software has their own PG DB login, and that’s how they log in to the application.

      How did that impact your application/environment?

      The application does only one thing with access control: deciding what shows up on the menus (well, that and logging in). 100% of the rest of the app is done via PG RLS, and the app code is a bunch of select * from employees; kind of thing.
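
      To give a flavour of what that looks like, here is a minimal RLS sketch (the table, column, and role names are invented for illustration, not our actual schema):

      ```sql
      -- Each application user is a real PG role; each row records its owner.
      CREATE TABLE employees (
          id       serial PRIMARY KEY,
          username text NOT NULL DEFAULT current_user,
          name     text NOT NULL
      );

      ALTER TABLE employees ENABLE ROW LEVEL SECURITY;

      -- Users see (and can modify) only their own rows, so a plain
      -- "select * from employees" in the app code is already filtered.
      CREATE POLICY employees_self ON employees
          USING (username = current_user);

      GRANT SELECT, INSERT, UPDATE, DELETE ON employees TO app_users;
      ```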

      What have you used them for?

      Everything, always! :) lol. (also see next answer)

      Do they provide an expected benefit or are they more trouble than they’re worth?

      When we got a request to do a bunch of reporting stuff, we connected Excel to PG, had them login with their user/password to the PG DB, and they were off and running. If the user knows SQL, we just hand them the host name of the PG server, and let them go to town, they can’t see anything more than the application gives them anyway.

      When we added Metabase, for even more reporting, we had to work harder: we added a reporting schema, created some views, and let Metabase handle the authorization, which sucks. Metabase overall is great, but it’s really sad there isn’t anything in reporting land that will take advantage of RLS.

      How did you decide to use them?

      When we were designing the application, PG was just getting RLS. We tried it out and were like, holy cow… why create our own when PG did all the work for us!

      Trying to get access control right in an application is miserable.

      Put permissions with the data, you won’t be sorry.

      1. 6

        Doesn’t this require a lot of open connections? IME, postgres starts to struggle past a couple of hundred open connections. Have you run into that at all?

        1. 5

          If you run everything inside of transactions you can do some cleverness to set variables that the RLS checks can refer to, emulating lots of users but without requiring more connections.
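
          A sketch of that trick, assuming an RLS policy keyed off a custom setting (names invented):

          ```sql
          -- The pool authenticates once; each transaction stamps itself with
          -- the acting user's id.  The third argument 'true' makes the
          -- setting transaction-local, so it can't leak to the next request.
          BEGIN;
          SELECT set_config('app.current_user_id', '42', true);

          -- A policy can refer to the variable instead of current_user:
          -- CREATE POLICY by_owner ON widgets
          --     USING (owner_id = current_setting('app.current_user_id')::int);

          SELECT * FROM widgets;  -- now filtered to user 42's rows
          COMMIT;
          ```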

          1. 2

            See my other comment, but you don’t have to work quite that hard, PG has a way now to become another user.

            see: https://www.postgresql.org/docs/12/sql-set-session-authorization.html
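
            In brief, it looks like this (note the docs say it requires having connected as a superuser):

            ```sql
            -- The pooled connection temporarily "becomes" the end user, so
            -- GRANTs and RLS apply as if alice had connected herself.
            SET SESSION AUTHORIZATION alice;
            SELECT * FROM employees;       -- runs with alice's permissions
            RESET SESSION AUTHORIZATION;   -- back to the pool's own identity
            ```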

            1. 2

              How many users does this support? I might be being overly cautious, but we have been told to look at user counts in the millions.

              1. 2

                We are an internal staff application, we max around 100 live open DB connections, so several hundred users. This runs in a stable VM with 32GB ram and < 1TB of data. We will never be a Google or a Facebook.

                One can get really far by throwing hardware at the problem, and PG can run on pretty big hardware, but even then, there is a max point. Generally I recommend not optimizing much at all for being Google size, until you start running into Google sized problems.

                Getting millions of active users out of a single PG node would be hard to do, regardless of anything else.

        2. 2

          In our experience, the struggle is around memory: PG connections each take up some memory, and you have to account for that. I don’t remember the exact amount per connection, though.

          It’s not entirely trivial, but you can re-use connections. You authenticate as a superuser (or equivalent) and send AUTH or something like that after you connect; too lazy to go look up the details.

          We don’t currently go over about 100 or so open active connections and have no issues, but we do use pgbouncer for the web version of our application, where most users live.

          EDIT: it’s not AUTH but almost as easy, see: https://www.postgresql.org/docs/12/sql-set-session-authorization.html

      2. 3

        How do RLS policies impact performance? The Postgres manual describes policies as queries that are evaluated on every returned row. In practice, does that impact performance noticeably? Were there gotchas that you discovered and had to work around?

        1. 3


          It is important to keep your policies as simple as possible. E.g. if you mark your is_admin() as VOLATILE instead of STABLE, PG is going to happily call it for every single row, completely destroying performance. EXPLAIN is your best friend.

          But even then, some queries are performed needlessly. Imagine you use transitive ownership. For example, Users own Boxes, and Boxes contain Widgets. When you want to determine which Widgets a User can manipulate, you usually cache the User-Boxes set at the application server level and query “downwards”. With RLS, you need to establish a link between the Widget and the User, joining over Boxes “upwards”, as there is no cache.
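
          A sketch of both points (the schema here is invented purely for illustration):

          ```sql
          -- STABLE lets the planner evaluate the helper once per query;
          -- VOLATILE would force a call on every single row.
          CREATE FUNCTION is_admin() RETURNS boolean
              LANGUAGE sql STABLE
          AS $$
              SELECT current_setting('app.is_admin', true) = 'on'
          $$;

          -- The transitive case: Widgets are reachable only through Boxes,
          -- so the policy itself has to join "upwards" on every access.
          CREATE POLICY widget_owner ON widgets
              USING (
                  is_admin()
                  OR EXISTS (SELECT 1 FROM boxes b
                             WHERE b.id = widgets.box_id
                               AND b.owner = current_user)
              );
          ```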

          The real problem here is that with a sufficiently large schema, the tools are lacking. It’s really inconvenient to develop within pgAdmin4, away from git, basically within a “live” system with its object dependencies and so on.

          1. 2

            It can. As I mentioned in my other comment in this thread, we have only run into a few instances where performance was an issue we had to do something about.

            As for tools, we use Liquibase[0], and our schemas are in git, just like everything else.

            0: https://www.liquibase.org/

            1. 1

              I’ll check it out.

            2. 1

              How does the Liquibase experience compare to something like Alembic or Django migrations?

              The main difference I see is whether your migrations tool is more tightly coupled to your app layer or persistence layer.

              With Alembic you write migration modules as imperative Python code using the SQLAlchemy API. It can suggest migrations by inspecting your app’s SQLAlchemy metadata and comparing it to the database state, but these suggestions generally need refinement. Liquibase appears to use imperative changesets that do basically the same thing, but in a variety of file formats.

              1. 2

                I’m not very familiar with Alembic or Django migrations. Liquibase (LB) has been around a long time; it was pretty much the only thing doing schema in VCS back when we started using it.

                Your overview tracks with my understanding of those. I agree LB doesn’t really care about the file format, you can pick whatever suits you best.

                The LB workflow is pretty much:

                • Figure out the structure of the change you want in your brain, or via messing around with a development DB.

                • Open your favourite editor, type in that structure change into your preferred file format.

                • Run LB against a test DB to ensure it’s all good, and you didn’t mess anything up.

                • Run LB against your prod DB.

                • Go back to doing whatever you were doing.

        2. 1

          We actually use an OSS extension, veil[0], and while performance can be an issue, like @mordae mentions, if you are careful about your use it’s not too bad. We have only had a few performance issues here and there, but with EXPLAIN and some thinking we have always managed to work around them without much hassle. You absolutely want indexes on the columns you use for checking permissions.

          Veil makes the performance a lot less painful, in our experience.

          0: https://github.com/marcmunro/veil Though note: veil2 is the successor and more relevant for new implementations; we don’t currently use it (and have no experience with it): https://github.com/marcmunro/veil2

          Veil2 talks about performance here in 23.1: https://marcmunro.github.io/veil2/html/ar01s23.html

      3. 3

        Same here, I used row level security everywhere on a project and it was really great!

        1. 2

          One mistake I’ve made is copy-pasting the same permission checks on several pages of an app. Later I tried to define the permissions all in one place, but still in the application code (using “django-rules”). But you still had to remember to check those permissions when appropriate. Also, when rendering the page, you want to hide or gray out buttons if you don’t have permission for that action (not for security, just niceness: I’d rather see a disabled button than click and get a 403).

          With row-level permissions in the DB, is there a way to ask the DB “Would I have permission to do this update?”

          1. 2

            Spitballing, but maybe you could try running the query in a transaction and then rolling it back?

            Would be a bit costly because you’d have to run the query twice, once to check permissions and then again to execute, but it might be the simplest solution.

          2. 1

            Maybe what you need is to define a view that selects the things you can update and use that view to define the RLS. Then you can check whether the thing you want to update is visible through the view.
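
            Something like this (the ownership column is assumed for illustration):

            ```sql
            -- A view over exactly the rows the current user may update.
            CREATE VIEW my_updatable_widgets AS
                SELECT w.* FROM widgets w
                WHERE w.owner = current_user;

            -- The UI can then ask "would this update be allowed?" cheaply:
            SELECT EXISTS (SELECT 1 FROM my_updatable_widgets WHERE id = 42);
            ```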

          3. 1

            With row-level permissions in the DB, is there a way to ask the DB “Would I have permission to do this update?”

            Yes, the permissions are just entries in the DB, so you can query/update whatever access you want (provided you have the access to view/edit those tables).

            I’m writing this from memory, so I might be wrong in the details… but what we do is have a canAccess() function that takes a row ID and returns the permissions that user has for that record. So on the view/edit screens/pages/etc., we get the permissions returned to us as well, so it’s no big deal to handle.

      4. 1

        Follow-up question: how do you handle customers (even accidentally) writing expensive SQL queries?

        1. 2

          We would admonish them appropriately :) Mostly the issue is making sure they know about the WHERE clause. It hasn’t been much of an issue so far. We have _ui table views that probably do 90% of what they are wanting anyway, and they know to just use those most of the time. The _ui views flatten the schema out to make the UI code easier, and use proper WHERE clauses and FK joins to minimize resource usage.

          If our SQL user count grew enough that we couldn’t handle it off-hand like this, we would probably just spin up a read-only replica, let them battle each other over resources, and otherwise ignore the problem until we got enough complaints to upgrade resources again.

    2. 12

      I haven’t used it in production, but PostgREST uses the database-level access control features:

      PostgREST handles authentication (via JSON Web Tokens) and delegates authorization to the role information defined in the database. This ensures there is a single declarative source of truth for security. When dealing with the database the server assumes the identity of the currently authenticated user, and for the duration of the connection cannot do anything the user themselves couldn’t.


      1. 1

        I use it in a side project, and it’s great so far!

    3. 10

      At one gig, we used and dynamically managed user roles for database access (shared across multiple apps).

      At another, we did some weird and cool stuff with row-level security.

      I find the more I learn about these capabilities the more I find uses for them.

    4. 8

      I first ran across this concept around two years ago in Hasura’s ACL* docs, which describe table- and row-based ACLs. At the time I was working on a Rails app and I was surprised by how much we were doing in the application code that could actually be pushed to the database level.

      I recently interviewed at quite a few companies and only one of them had any interest in discussing using DB ACL* features. There’s no bias against it, just not a lot of knowledge about how to set it up and where it becomes difficult.

      After spending some time reading about setting up database level ACLs and thinking about it, I think the issue that holds back a lot of these use cases is the opacity of seeing which options are set. Unlike with source-code where you can read the code without running it, you need to access an up-to-date instance of the database in order to find out how it’s configured.

      This is just a bit too different from how we’re used to working and that small change probably makes people investing in building startups reluctant to embrace these changes, because they’re perceived as an eventual bottleneck to scaling.

      For FAANG and Fortune 500 companies, I think the main reason they don’t implement controls at the DB level is that they’ve already solved the problem in application code, and there’s a natural level of inertia against re-writing a solved problem.

      * ACL stands for access-control list

      1. 4

        After spending some time reading about setting up database level ACLs and thinking about it, I think the issue that holds back a lot of these use cases is the opacity of seeing which options are set. Unlike with source-code where you can read the code without running it, you need to access an up-to-date instance of the database in order to find out how it’s configured.

        Do you think it would help if the DB forced changes to these options to go through the config file? It would be like nginx: to change a setting at runtime, you edit the config and reload.

        I wonder how far you could take this idea. Would it make sense for the DB to require all DDL to go through the config file?

        1. 1

          That’s a great idea! I think it fits well with the broader movement to “infrastructure-as-code” and for me at least, the database is much more infrastructure than code. I think this is a bit how Prisma’s data modeling works.

          One of the problems I’ve had reasoning about databases in the past is that often I find myself reading through migration logs, which is the equivalent of reading through git .patch files to find the current state of your codebase.

          One of the biggest benefits I found from using Hasura for a couple projects was that it gave me a UI to view my current database schema at a glance.

        2. 1

          “Developers need to know the DB schema of production” is not a super hard problem to solve in a secure way, but most solutions in use are either insecure or subject to drift.

          A nightly job that dumps the schema somewhere accessible might fail a particularly tight security audit, but I suspect most would let it pass after giving it a close look.

    5. 4

      Currently trying to move the current gig to a read-only/read-write role split and then, eventually, hopefully to row-level access where possible. But that gets complicated when you have, e.g., user A looking at user B’s profile.

      But yeah, I’ve rarely seen anything more complicated than distinct ro/rw users in 20+ years.

    6. 4

      The problem with some of those advanced features is they occasionally clash with application-level management of those features. The biggest thing I see used frequently is essentially having 4 roles:

      • data read only
      • data read/write
      • super-user who can add/drop/alter tables
      • some ad-hoc permissions based on ip/table to keep third party reporting from getting too frisky
      1. 5

        too frisky

        That is now my new technical term for attempting to effect data disclosure.

    7. 4

      I’ve used them in the US healthcare data space for the past 16 years with Oracle, DB2 and Teradata, where it’s required to tightly control access to data. They’ve been used for both row-level security (row selection) and column-level security (masking), in addition to preventing write access to the underlying data.

      This was implemented using differing sets of views against the underlying tables, with synonyms/aliases presenting a uniform environment for the application to work against. The user/password then provided the access to one set of views or another.
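
      As a rough Oracle-flavoured sketch of the pattern (schema and column names are invented; the real sets of views were of course far larger):

      ```sql
      -- Two schemas expose different views of the same underlying table.
      CREATE VIEW full_access.claims_v AS
          SELECT * FROM core.claims;

      CREATE VIEW restricted.claims_v AS
          SELECT claim_id,
                 SUBSTR(patient_ssn, -4) AS patient_ssn  -- column masked
          FROM core.claims
          WHERE region = 'NE';                           -- rows filtered

      -- A synonym in each account points at its own view, so the
      -- application queries CLAIMS regardless of who connected.
      CREATE SYNONYM claims FOR restricted.claims_v;
      ```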

      This back-end mechanism allows the application to be security-agnostic in a sense, since all of that kind of security is contained in the database and not in the application. Using database views allows for easy debugging, too, compared with “more advanced” techniques such as Oracle Virtual Private Database features.

    8. 4

      One of the first serious “production” databases I worked on was a Postgres instance that tracked billable print jobs for a small college campus where students were charged a few cents per page sent to a network laser printer. It had been built by someone with a good handle on security engineering fundamentals but not a ton of knowledge of accounting practices, so while there was no double-entry accounting in evidence, there was a very solid separation between the privileges needed to submit print jobs, generate and view billing reports, and modify or fix recorded jobs before bills went out.

      That access control was primarily accomplished by putting the standard write path behind stored procedures that could run as a “print spooler” user, effectively implementing row-level security (long before Postgres supported it natively) in the procedure for user-specific SELECT and INSERT operations while completely disallowing UPDATE and DELETE. (This actually went a level beyond strict RLS in that it could introspect query payloads to make decisions about access, not just the login info.)

      Our web UI and monthly reporting happened from another “application” DB account with broad SELECT privileges, while admins had to log in via a user-specific elevated-privilege account[^1] to do manual diagnostic, repair, and migration tasks.

      TBH it worked quite well for a surprising number of years[^2], and made me a lot less nervous about some curious student coming across the IP address to the database server and guessing their way into a fun little excursion.

      In a world where I had (say) mTLS for my database clients and a centralized authorization model so I could keep my secrets and roles managed consistently across layers of my application I’d happily go back to having granular DB credentials + ACLs. Ditto if I were supporting a small, slowly-changing group of students, scientists, or analysts who need direct, low-level access to a database without intervening APIs or client applications slowing things down or limiting the tools they can use to work through a problem.

      In a modern commercial environment that needs a consistent view of identity, access privileges, and durability of data across many services and teams, though, a purely database-internal view of security will only get you so far. Today I would probably look first at the feasibility of forking the actual data (so e.g. analytics or billing happened against a sanitized, trustworthy replica with its own access control, anonymization, and retention rules) before trying to bake granular access into a big, shared DB and somehow keep that in sync with checks in application code.

      In theory microservices could be a way to do this well, but unless you’ve gone all the way to spinning up an internal CA and baking client certs into your build/test/deploy cycle you’re probably going to end up reusing a few highly-privileged user accounts amongst the services that actually need to use the core database(s). Then you’re stuck running a big fire drill every time one of those logins need to be restricted, rotated, or retired.

      Likewise, putting multiple access credentials into your configuration and switching between them based on the current API method or job being executed doesn’t keep those higher-access credentials from getting exfiltrated and reused, so the security benefits of not running everything under a highly-privileged account all the time basically end once you’ve cut any SQL injection holes in your app. (I.e., if someone owns an app server they’re going to get whatever access corresponds to the most privileged credential available to your app.)

      [^1]: All authentication was actually done via Kerberos, which meant a) there were no passwords sitting around in config files to re-use, and b) admin roles could be checked on the basis of having a <user>/admin@<REALM> principal name, which was roughly equivalent to root in prod and therefore limited to a very small number of trusted folks.

      [^2]: The core database predated my hiring by a couple of years, ticked along happily for the ~five years I worked there, and wasn’t replaced any time soon after my departure.

    9. 3

      One thing that keeps me from using database-level permissions on a per-end-user basis is that I often need to control access to things other than rows in a database. If the application needs its own access management system anyway, I think it’s a reasonable choice to only have one source of truth for all of a user’s permissions.

      1. 2

        All the database-level permissions are in the database for you to poke at to your heart’s desire. There is nothing stopping you from looking up that data to make your other authorization decisions.

        The DB knows if this record is editable, so we just ask it, hey, can I write to this record? It’s what we do for app UI friendliness, i.e. allow editing or not, show a protected field, etc.

    10. 3

      We just implemented this in Supabase - using PostgREST - to make it very easy to get started. We combine it with GoTrue (open source authentication server), to achieve a full auth system.

      PostgREST is a very elegant tool. Beyond creating the best API I’ve ever used, the way it leverages Postgres to bring auth/security is very powerful. Row level security is one of those things that seems confusing to start with, but once you see it in action it’s mind-blowing.

      I hope developers begin to recognise how powerful databases have become. They go beyond just data storage, offering a slew of features which are better than most developers could build themselves.

    11. 3

      I follow the principle of least privilege. Everyone (whether a user or a software component) has only the necessary rights. If someone, e.g., only loads data, they can do INSERT, but they do not need SELECT or even DELETE. Besides that, the more “dangerous” operations are often wrapped in procedures/functions, and the user is then granted permission to execute such routines instead of manipulating records directly.
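
      In PostgreSQL terms, the idea looks roughly like this (table and role names are only examples):

      ```sql
      -- The loader can add rows but never read, change, or delete them.
      GRANT INSERT ON measurements TO loader;

      -- A "dangerous" operation is wrapped in a definer-rights routine;
      -- callers get EXECUTE on the routine, not DELETE on the table.
      CREATE FUNCTION purge_old_measurements() RETURNS void
          LANGUAGE sql SECURITY DEFINER
      AS $$
          DELETE FROM measurements WHERE taken_at < now() - interval '5 years'
      $$;

      REVOKE ALL ON FUNCTION purge_old_measurements() FROM PUBLIC;
      GRANT EXECUTE ON FUNCTION purge_old_measurements() TO maintenance;
      ```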

      On testing environments, this could be relaxed (if there are no production data). But it is better to test exactly what will run on production (which includes the system of privileges – which affects the deployment process…).

      On the other hand, although I have encountered per-row or per-column permissions on some projects, they are used quite rarely in my experience. If I design something, I usually use procedures/functions + views for such a task.

      There are bugs in software all the time and it includes your or mine. Properly defined permissions can prevent a disaster – both data leaks and data deletions (or malicious modifications). You can gain a lot for really little.

      1. 1

        agreed on principle of least privilege! We use RLS as much as possible and love it.

        Our test and dev instances are copies of production, with the exact same permissions, as they are identical copies, updated nightly for test and whenever someone pushes the button for dev. We don’t let the DB copies onto individual laptops though, you have to work on the DB remotely. Some of us keep local copies of the schema only, with maybe some pretend data locally, but that’s all on them to manage. Though it’s not that difficult since the schema is in git, and we use liquibase to apply it.

    12. 2

      I’ve used “built in” security like you describe a couple of times, for web apps that used an LDAP DIT as primary “database”. While it can be a steep learning curve for devs not used to working with LDAP in general (particularly if the ACLs get complex), I really like this setup.

      I’ve found that for all the hooplah the last few years about “no sql” databases, OpenLDAP has most if not all those bases covered - especially for the non-extreme use cases most people need - but with solid, well understood built in support for schemas, indexes, replication, auditing, etc.

      1. 1

        Upvote for LDAP. It’s quite a beast to tame, but it does the job immensely well.

    13. 2

      Yes we do. We have a Galera MySQL cluster that spans multiple nodes, and of course it makes sense to use such a cluster to host multiple databases. We then grant individual users access to a specific database, as long as they log in from a certain subnet.
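
      In MySQL the host part of the account name carries the subnet restriction, roughly like this (names and subnet invented):

      ```sql
      -- The account only authenticates from 10.1.2.0/24 and only sees team_a_db.
      CREATE USER 'team_a'@'10.1.2.%' IDENTIFIED BY 'changeme';
      GRANT SELECT, INSERT, UPDATE, DELETE ON team_a_db.* TO 'team_a'@'10.1.2.%';
      ```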

      We believe in a you-build-it-you-run-it philosophy so we give developers a certain degree of access to production databases (although not full access of course).

      1. 1

        MySQL gained the ability to do row-level security? That’s news! How painful is it compared to PG’s or Oracle’s, do you know?

        1. 1

          I don’t know, but I wasn’t speaking about row level security. We usually grant permission for certain dml commands on whole databases to certain members of a development team (usually the team leader and the vice-team leader).

    14. 2

      I have used per db read only frequently, as well as per-table read only a few times (we gave a 3rd party vendor read only access to just a subset of tables for some analytics tool the BI folks used).

      I haven’t yet used (or ever encountered) per-view or per-row either, so it is pretty interesting reading other folks’ experiences here.

    15. 2

      We use read only and DDL vs. DML roles to lock down migrations.
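
      A sketch of that split in Postgres terms (schema and role names invented):

      ```sql
      -- Migrations run as the schema owner; the app roles get DML only,
      -- so a compromised app can't ALTER or DROP anything.
      GRANT USAGE ON SCHEMA app TO app_rw, app_ro;
      GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app TO app_rw;
      GRANT SELECT ON ALL TABLES IN SCHEMA app TO app_ro;
      ```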

    16. 2

      At a former gig I worked on a system that used roles in Oracle to give users permissions. They would log in with Active Directory and the system would create them a new DB user and assign the right DB role.

      It worked so well that I basically did the exact same thing using SQL server when writing a small contract tracking app for the land department.

      EDIT: All the permissions were table based. Everyone having their own DB user made change tracking really clean and simple at the DB level.

    17. 1

      We often effectively used multiple schemas with different access levels. It’s a good solution for having multiple components with different quality levels/access patterns close to each other. A simple example: we had one database where we stored automatic uploads through an ftp server in a table, for another process to start picking them up, transforming them, and putting them into our “main” application table.

      We had 3 roles: ftp daemon (could only write to the uploads table), transformer (could only read that table and write the result table) and webapp (could read everything except the uploads table). It’s a very easy setup that neatly prevents attacks on the ftp server from being fatal, and attacks through the transformer (media parsing…) from reading into our main app.
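
      With per-table grants, that split looks roughly like this (table names invented):

      ```sql
      GRANT INSERT ON uploads   TO ftp_daemon;   -- write-only drop box
      GRANT SELECT ON uploads   TO transformer;
      GRANT INSERT ON main_data TO transformer;
      GRANT SELECT ON main_data TO webapp;       -- never sees raw uploads
      ```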

      The whole credential rollout is done in a matter of hours, for huge wins.

    18. 1

      I have mostly the same experience as you but when I was a contract worker in the aerospace sector a few years ago, the place I was in used them extensively. This was on an Oracle database.

      As developers couldn’t change these on their own, I remember needing to file requests to ask for access to certain tables or views for user XYZ. It added some paperwork, but it wasn’t too bad or anything.

    19. 1

      Great question! My experience is similar to yours (if less extensive).

      I’ve been meaning to try out more fine-grained access control, but never got around to it. It seems like a great way to apply defense in depth, and to make sure you can’t accidentally give more access to a user than they’re supposed to have, which happens once in a while due to a logic flaw.

      At work we use read-only access for replication/backup, and to provide db access to our client. Sometimes we directly connect Metabase to our live systems using a read-only user as well because we want to make sure that it can run only SELECT queries.

      1. 2

        What we do with Metabase, is create a reporting schema, and the MB user only gets access to that schema. That schema then has views that expose stuff from the main schema where all your data and tables are.

        This gives you lots of control about what’s exposed to MB, and how it’s exposed. So you can flatten a bunch of tables together, etc. Makes it super easy to handle stuff that way, and your users don’t have to learn much about table joins in the simple cases.
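
        Roughly like this (table and role names invented for the example):

        ```sql
        CREATE SCHEMA reporting;

        -- Flattened view: MB users get pre-joined data, no joins to learn.
        CREATE VIEW reporting.orders_flat AS
            SELECT o.id, c.name AS customer, o.total
            FROM app.orders o
            JOIN app.customers c ON c.id = o.customer_id;

        GRANT USAGE ON SCHEMA reporting TO metabase;
        GRANT SELECT ON ALL TABLES IN SCHEMA reporting TO metabase;
        ```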

        We also use RLS though, so ;)

    20. 1

      I have seen per-table/view ACLs used in governmental databases. I haven’t seen row-based ACLs yet.

    21. 1

      I use it to differentiate between the web interface and what the data collector does. Each of them has a different job.

      Though it’s not very fine grained, they overlap at some places.

    22. 1

      I’ve seen row-level security used to implement a multi-tenant database once with SQL Server, where each table would have the ID of the tenant in it.

      I think for application-focused use cases, most of those ACLs would end up being busywork or part of defence-in-depth, which is more effort on auth than most projects are willing to expend.

    23. 1

      I have worked with a few developers who built apps where the user’s login details were the same credentials the application used to access the database. One of them was a contractor I hired to build out a database for a local football league website: the website itself had SELECT permissions on a number of tables, while the different levels of user had different table access permissions depending on what they could do.

      Incidentally, all of the people I have worked with who use the database as access control came from a largely Java + Oracle background, even though they were working on PHP + MySQL projects with me.

    24. 1

      One team had their continuous metrics dashboard slam the data warehouse with a query per graph per second, so we gave them a separate set of credentials that could only run one query at a time. That left capacity for the actual data team to do data team things.

      (This was a while back, I don’t remember the details and it might have been something else we did to restrict them.)

    25. 1

      The Google book “Building Secure and Reliable Systems” mentions something along these lines, where the web app can only access rows that it has a valid “end-user credential” for, which is enforced by the storage layer.