The title had me concerned, but the article is great. They even seem to reinvent or re-apply (I don't know their level of education in the subject) numerous concepts from high-assurance and data-driven security. Here are a few I saw in action:
“Make it impossible.” In high-security work, you assume the adversary might [mis-]use any code that’s in the system, especially trusted code that can violate the security policy. The first step is removing as much of it as possible through heavy simplification. High-security systems and APIs are often minimalist.
“We simply rely on our data model.” Aside from trusted databases, I’ve seen this applied in PHP applications where access checks were scattered all over the place. Eventually, the developers defined a set of permissible behaviors for certain types of users and wrapped it around the data in a new object or module. Each caller then supplied basic info that the system checked. The whole thing became more consistent and accurate with less code.
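Here's a minimal Python sketch of that pattern, with all names and rules invented for illustration: the checks live in one wrapper around the data instead of being duplicated at every call site.

```python
# Hypothetical sketch: access rules live in one place, wrapped
# around the data model, rather than scattered across callers.

RULES = {
    "reader": {"read"},
    "editor": {"read", "update"},
    "admin":  {"read", "update", "delete"},
}

class GuardedRecord:
    def __init__(self, data):
        self._data = data

    def perform(self, role, action):
        # Every caller goes through this single, consistent check.
        if action not in RULES.get(role, set()):
            raise PermissionError(f"role {role!r} may not {action!r}")
        return f"{action} ok"
```

Callers supply basic info (here, just a role) and the wrapper decides; adding or tightening a rule happens in exactly one place.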
“Privilege levels named after the people…” This is Role-Based Access Control. There’s a lot of info out there on its strengths, weaknesses, and using it well.
“Privilege levels are linear.” This is similar to the first security models, Bell-LaPadula and Biba. Linear models were actually one of the reasons businesses as a whole rejected highly-secure kernels, since they couldn’t handle the structure of business apps and relationships. However, they’re using RBAC as the main route, then selectively applying linear models for simplicity where allowed. They note a new conversation will happen for non-linear structures. So, quite practical.
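A toy illustration of that combination (roles as the main route, with a linear ordering applied where it fits); the role names, levels, and actions here are all made up:

```python
# Hypothetical sketch: roles are mapped onto linear levels, and a
# request is allowed when the caller's level meets or exceeds the
# level the action requires (a Bell-LaPadula/Biba-style ordering).

LEVEL = {"viewer": 1, "member": 2, "manager": 3, "owner": 4}
REQUIRED = {"read": 1, "comment": 2, "edit": 3, "transfer": 4}

def allowed(role, action):
    return LEVEL[role] >= REQUIRED[action]
```

Where the hierarchy really is linear, this collapses a pile of per-role checks into one comparison; non-linear relationships would need the fuller RBAC treatment.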
Note: If your language supports Design-by-Contract or formal specs, there are formal tools that can support 3 and 4 at the code level, where the security policy is implemented in contracts. The value of this depends on the complexity of the app and its interconnections. They’re doing micro-apps, so they may not need it.
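Python doesn't have Design-by-Contract built in, but the flavor of it can be mimicked with a precondition decorator. This is a rough sketch; the `requires` helper, the caller shape, and the policy are all invented for illustration:

```python
from functools import wraps

def requires(pred, message="precondition violated"):
    """Rough stand-in for a Design-by-Contract 'requires' clause:
    check a precondition on the caller before the body runs."""
    def deco(fn):
        @wraps(fn)
        def wrapper(caller, *args, **kwargs):
            assert pred(caller), message
            return fn(caller, *args, **kwargs)
        return wrapper
    return deco

# The security policy is stated as a contract on the operation,
# right next to the code it protects.
@requires(lambda caller: caller.get("role") == "admin",
          "only admins may delete accounts")
def delete_account(caller, account_id):
    return f"deleted account {account_id}"
```

With real DbC or formal-spec tooling, contracts like this can also be checked statically rather than only at runtime.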
REST resources and actions. The basic component of high-assurance security is the reference monitor: an enforcer that makes a decision based on a subject (the acting process/person), an object (the resource), and a security policy about what subjects can do to objects. REST maps very well to this model, so that part will work. When I tried this long ago, I found that much of the code implementing RESTy solutions was complex or likely to have vulnerabilities. My solution was hand-rolling a REST equivalent in a memory-safe language operating right on UDP/IP, since it was a reliable, internal network. The complexity is so minimal that even a safety-critical RTOS could handle it. I don't know whether REST tooling has gotten higher quality over time, since I only messed with it a few times.
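The subject/object/policy triple fits in a few lines. A minimal sketch, assuming made-up subjects, HTTP verbs as actions, and paths as objects:

```python
# Minimal reference monitor: every request is decided by one check
# of (subject, action, object) against an explicit policy.
# Deny by default; allow only what the policy explicitly grants.

POLICY = {
    ("alice", "GET",  "/reports"),
    ("alice", "POST", "/reports"),
    ("bob",   "GET",  "/reports"),
}

def reference_monitor(subject, action, obj):
    return (subject, action, obj) in POLICY
```

The mapping to REST is direct: the authenticated principal is the subject, the HTTP method is the action, and the resource path is the object, which is why the model carries over so cleanly.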
“Controllers and views.” This is similar to how high-assurance databases were done. The first one was probably SeaView, which used views to aggregate data from different security levels and present just the right info at a user’s security level. It was built on the GEMSOS security kernel for isolation. The weakness identified by Schell et al. was that letting the app level produce the view was essentially discretionary access control instead of mandatory: a compromise of the app would let an adversary produce arbitrary views. In this case, that risk would fall on the code or server producing the views, which would both need strong resistance to vulnerabilities. The model is proven, though.
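At its core, the view idea is filtering by level. A toy version (the rows and levels are invented; real SeaView did this inside a trusted DBMS, not app code):

```python
# SeaView-style idea in miniature: a view exposes only the rows at
# or below the requesting user's security level.

ROWS = [
    {"doc": "cafeteria memo",     "level": 1},
    {"doc": "quarterly plan",     "level": 2},
    {"doc": "acquisition target", "level": 3},
]

def view_for(user_level):
    return [r["doc"] for r in ROWS if r["level"] <= user_level]
```

The weakness the comment describes is visible here: whatever component runs `view_for` can be made to return any rows it can read, so that component has to be trustworthy for the filtering to mean anything.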
So, that’s how I see things, looking at it from a background in high-assurance security in terms of prior products and CompSci work experimenting with approaches. Also note that their qualms with heavyweight frameworks seem mostly due to how many of them are bad. There’s work like Aura from Cambridge, or things in Alloy or Prolog, showing that a purpose-built DSL can handle both authorization modeling and checking the consistency of those models with relative ease. It would basically take some CompSci and/or FOSS people thinking hard about how to merge something like that into web or REST frameworks so it’s smooth. Meanwhile, the OP is doing a great job handling their current problems.
Glad to see articles like this. Their rationale makes sense.
Culturally, the expectation that you should “just use current hot auth library for authorization” is harmful. I don’t think one should have to argue against this. The idea that all middleware can be successfully commoditized and meet everyone’s use case is naive. Additionally, popular libraries often buy into a framework’s design philosophy heavily (or lack thereof, cough), and don’t necessarily reflect engineering quality.
Lots of Internet discussion on topics like this get derailed by That One Time There Was A Mess At Work, So Never Again!
A theme I really liked from this is building around your current needs and intentionally avoiding designing for unknown future needs - a worthwhile practice for keeping both code and process maintainable.