“You probably had this feeling already, when you write server code only to proxy client’s demands for the DB and pass the results back. It feels stupid.”
“come from a desire to run the same code in two places. That very goal is wrong.”
I had to pause here. It really doesn’t feel stupid: it’s more like the author hadn’t studied Security 101. Apps should validate data twice, since (a) the server can’t trust the client at all, and (b) validation at the client saves wasted resources on honest mistakes by not even sending a request. Server code proxying requests can restrict clients to select portions of the data. It may also be far better at data integrity and reliability than whatever device the client is using. It can also enforce rate limits to prevent DoS attacks, which might just be buggy clients.
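To make the "validate twice" point concrete, here's a minimal sketch (names and the validation rule are invented for illustration): the client-side check saves a round trip on honest mistakes, while the server-side check is the one that actually matters, because the server can never trust what arrives on the wire.

```python
def valid_quantity(qty) -> bool:
    """Shared rule: an order quantity must be a positive integer <= 100."""
    return isinstance(qty, int) and 0 < qty <= 100

def client_submit(qty):
    # Client side: reject early, without even sending a request.
    if not valid_quantity(qty):
        return "fix your input"     # friendly, immediate feedback
    return server_handle(qty)

def server_handle(qty):
    # Server side: re-validate, since the request may not have come from
    # our client code at all (curl, a buggy client, an attacker...).
    if not valid_quantity(qty):
        return "rejected"
    return "accepted"
```

The duplication is the point: the client check is a UX optimization, the server check is the security boundary.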
“UI apps used to talk directly to the DB.”
Did they? Users’ local apps on their local data, or terminals into their accounts, might have talked to it directly. Yet most apps, from mainframe terminals through client/server to three-tier web, had intermediary functions for RAS or security. The author just went from not knowing common knowledge about server architecture to inventing a fictional, idealized past to support their next point.
“No, eventually DB will talk directly to the browser. Software may not be there yet”
This is possible. Except that it will need to do everything the server software in front of it does today. Which actually happens already with some application servers that have their own databases. That model was deployed in systems as different as the AS/400 with its integrated database and Allegro Common Lisp with its AllegroCache. It brought its own challenges.
I’ll soldier on through the article on the off chance it has something useful about web stacks.
“Web technology stack was created under assumption that people are looking at rarely-changing, mostly static data. “
The author is now in their area of expertise. That looks accurate. The goals look good. They’re the kind of thing native tech can already handle that the web must catch up on. That’s not to say they’re standard for native apps; just that examples exist.
“ First one is a security filter. “
That just makes the first part all the more confusing. The author knew a security filter was needed, but not that it’s one of the main justifications for server code in front of a database. Weird. The part about minimal filtering and immediate push is interesting, though. I’m pretty sure I’ve seen that implemented.
“ I wouldn’t say it was a walk in the park, we had to build a custom infrastructure for that, but it was certainly doable on a scale of 50K RPS and varied, user-specified, non-trivial queries. We just need something open-source like that.”
That at least corroborates that what the author wants to build has been built before, with value delivered. That companies were buying it means maybe the author or someone else can build it with an open-core, paid-extras model. Let the VCs pay for the new web. ;)
“On a server, though, it’s just timeouts, refcounting, periodic cleaning and careful coding. No rocket science here.”
Another reason we’ve been using servers for this stuff instead of clients. The high-availability and security parts are easier, too, with the application being so special-purpose that it needs few privileges. Acceleration via better CPUs, more memory, or special-purpose hardware is another reason I forgot to mention.
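The "timeouts, refcounting, periodic cleaning" the quote describes maps onto something like the sketch below for server-held live-query subscriptions. The names (`Subscription`, `sweep`) and the TTL policy are mine, not from the article; it just shows why this bookkeeping is mundane on a server.

```python
import time

class Subscription:
    def __init__(self, query, ttl=30.0, now=time.monotonic):
        self.query = query
        self.ttl = ttl              # drop after this many idle seconds
        self._now = now             # injectable clock, handy for testing
        self.last_seen = now()
        self.refcount = 1           # how many client views share this query

    def touch(self):
        # Called whenever the client pings or re-requests the query.
        self.last_seen = self._now()

    def expired(self):
        return self.refcount == 0 or self._now() - self.last_seen > self.ttl

def sweep(subs):
    """Periodic cleaning: keep only subscriptions somebody is still watching."""
    return [s for s in subs if not s.expired()]
```

Run `sweep` from a timer and the "careful coding" part is mostly making sure `touch` and refcount updates happen on every client interaction.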
“Anyway, distributed computing is pretty crowded these days, let’s assume we know how to do it well. “
Let’s assume we can put a whole distributed DB in the browser that works with our web apps without slowing anything down. Given the author’s original requirements, it’s something like CockroachDB or FoundationDB, which have strong consistency, running in JavaScript on arbitrary devices. I mean, it could happen. Just noting we went from something done before to a pure thought experiment with a huge prerequisite.
“That’s why we display effects of all user actions immediately”
That should probably be configurable. In some cases, users get pissed if you give them a fake result that turns out not to be true. It’s part of why Amazon doesn’t say “Your order has shipped” the moment you pay, the resort doesn’t tell you to start packing because you were successfully booked, and Facebook doesn’t immediately grant you “in a relationship” with someone on request. Businesses can lose customers by delivering bad news after false good news.
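"Configurable" could be as simple as a per-action policy table: be optimistic where a false positive is cheap, pessimistic where it isn't. A toy sketch (the action names and policy are invented):

```python
# Which actions are safe to show optimistically before the server confirms.
OPTIMISTIC = {"like_post": True, "place_order": False}

def apply_action(ui_state, action, server_confirm):
    """Update the UI for an action.

    server_confirm is a callable standing in for the round trip;
    it returns True if the server accepted the action.
    """
    if OPTIMISTIC.get(action, False):
        ui_state[action] = "done"               # show the effect right away
        if not server_confirm(action):
            ui_state[action] = "failed"         # roll back on rejection
    else:
        # Pessimistic path: wait for the server before claiming success.
        ui_state[action] = "done" if server_confirm(action) else "failed"
    return ui_state
```

A "like" that silently fails costs nothing; an order confirmation that gets retracted costs trust, so it takes the slow path.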
“These days it’s not that grim though:”
Names three technologies that each have pieces of the puzzle, with one having most of them. Well, the web devs know exactly which projects to start contributing to and/or integrating if they want the vision to happen. Get coding!
“I personally see a great opportunity in using Clojure, Datomic and DataScript to build such a system:”
These were the first things that came to mind when I read the problem statement. If Yogthos submitted it, I figured there was no way he hadn’t considered them for this. I don’t know any of those techs, but that part seemed logical. ;)
The author isn’t saying you shouldn’t handle security on the server. He’s saying that writing CRUD queries is tedious. Using something like GraphQL or DataScript provides a generic way to structure queries client-side that can be sent over a single endpoint. The server can still handle concerns such as authentication with this approach.
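A toy version of that "one generic endpoint" idea, to show the shape of it: the client sends a declarative query, the server authenticates and applies a security filter, and there are no per-table CRUD handlers. The query format and data here are invented for illustration, not GraphQL or DataScript syntax.

```python
# A tiny in-memory "database" of rows with an owner column.
DB = [
    {"id": 1, "owner": "alice", "title": "draft"},
    {"id": 2, "owner": "bob",   "title": "notes"},
]

def query_endpoint(user, query):
    """Single endpoint: authenticate first, then run the client's query."""
    if user is None:
        raise PermissionError("not authenticated")
    # Security filter: a user only ever sees their own rows,
    # regardless of what the query asks for.
    visible = [row for row in DB if row["owner"] == user]
    # The client-specified part: simple equality constraints.
    where = query.get("where", {})
    return [r for r in visible
            if all(r.get(k) == v for k, v in where.items())]
```

The point is that auth and the visibility filter live in one place on the server, while the query itself is whatever the client needs.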
I’m guessing the author is referring to desktop apps when mentioning the UI talking to the DB directly. The practice of building fat desktop clients was quite common in the 90s. Meanwhile, the trend of building actual applications using web tech is pretty recent.
I do agree that the idea of the client-side app talking directly to the DB has its own set of trade-offs. Personally, I don’t think this method will replace having servers in a general case, but I can see how it makes sense in some scenarios where the server is mostly proxying the data store.
I think displaying effects of user actions refers to propagating feedback to the view immediately, as opposed to simply showing it optimistically in the UI before the action is complete. One of the apps my team works on is a collaborative editing platform, where we have multiple users working on a shared document. We use WebSockets to push notifications to all the clients as soon as any user performs an action. So, in that context, immediate feedback means keeping multiple clients in sync in real time.
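The broadcast pattern I'm describing boils down to something like this. No real sockets here; each "client" is modeled as a list of received messages, and the class and method names are mine:

```python
class Room:
    """Models one shared document with several connected clients."""

    def __init__(self):
        self.clients = []

    def connect(self):
        # Each connection gets an inbox; a real app would hold a WebSocket.
        inbox = []
        self.clients.append(inbox)
        return inbox

    def broadcast(self, event):
        # In the real app this is one ws.send() per open connection,
        # fired as soon as any user performs an action.
        for inbox in self.clients:
            inbox.append(event)
```

Every client sees the same stream of events in the same order, which is what keeps the views in sync.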
Basically, what the article boils down to is that the author thinks web tech is moving toward the thick-client model, and that the traditional client/server split is not ideal for applications with complex UIs. Since the majority of the logic is handled on the client, the server becomes a thin proxy around the DB that handles details like authentication. I don’t necessarily agree with all the points the author makes, but I did think it was an interesting read overall.