I think people rely on JavaScript too much. With sourcehut I’m trying to set a good example, proving that it’s possible (and not that hard!) to build a useful and competitive web application without JavaScript and with minimal bloat. The average sr.ht page is less than 10 KiB with a cold cache. I’ve been writing a little about why this is important, and in the future I plan to start writing about how it’s done.
In the long term, I hope to move more things out of the web entirely, and I hope that by the time I breathe my last, the web will be obsolete. But it’s going to take a lot of work to get there, and I don’t have the whole plan laid out yet. We’ll just have to see.
I’ve been thinking about this a lot lately. I really don’t like the web from a technological perspective, both as a user and as a developer. It’s completely outgrown its intended use-case, and that growth has brought a ton of compounding issues. The trouble is that the web is usually the lowest-common-denominator platform, because it works on many different systems and devices.
A good website (in the original sense of the word) is a really nice experience, right out of the box. It’s easy for the author to create (especially with a good static site generator), easy for nearly anyone to consume, doesn’t require a lot of resources, and can be made easily compatible with user-provided stylesheets and reader views. The back button works! Scrolling works!
Where that breaks down is with web applications. Are server-rendered pages better than client-rendered pages? That’s a question that’s asked pretty frequently. You get a lot of nice functionality for free with server-side rendering, like a functioning back button. However, the web was intended to be a completely stateless protocol, and web apps (with things like session cookies) are kind of just a hack on top of that. The experience of using a good web app without JavaScript can be a bit of a pain in many use cases (for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page). Security is difficult to get right when the server manages state.
I’ll argue, if we’re trying to avoid the web, that client-side rendering (single-page apps) can be better. They’re more like native programs in that the client manages the state. The backend is simpler (and can be the backend for a mobile app without changing any code). The frontend is way more complex, but it functions similarly to a native app. I’ll concede that a poorly-built SPA is usually a more painful experience than a poorly-built SSR app, but I think SPAs are the only way to bring the web even close to the standard set by real native programs.
Of course, the JavaScript ecosystem can be a mess, and it’s often a breath of fresh air to use a site like Sourcehut instead of ten megs of JS. The jury’s still out as to which approach is better for all parties.
(for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page)
Some of the UI benefits of SPA are really nice tbh.
Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying.
It’s nice when websites can display the current state of things without having to refresh.
I can’t find the video, but the desire to eliminate stale UI (like outdated notifications) at Facebook was one of the reasons React was created in the first place.
There just doesn’t seem to be a way to do things like that with static, js-free pages.
The backend is simpler (and can be the backend for a mobile app without changing any code).
I never thought about that before, but to me that’s a really appealing point to having a full-featured frontend design.
I’ve noticed some projects with the server-client model where the client side was using Vue/React, and they were able to easily make an Android app by just reusing the server.
The jury’s still out as to which approach is better for all parties.
I think as always it depends.
In my mind there are some obvious choices for obvious use cases.
Blogs work great as just static HTML files with some styling.
Anything that really benefits from being dynamic (“reactive” I think is the term webdevs use) gains nice UI/UX benefits from more client-side rendering.
I think the average user probably doesn’t care about the stack and the “bloat”, so it’s probably the case that client-side rendering will remain popular anytime it improves the UI/UX, even if it may not be necessary (plus cargo-culting lol).
One could take it to an extreme and say that you can have something like Facebook without any javascript, but would people enjoy that? I don’t think so.
But you don’t need to have a SPA to have notifications without refresh. You just need a small dynamic part of the page, which will degrade gracefully when JavaScript is disabled.
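A minimal sketch of what that can look like (the /api/notifications endpoint and the 30-second poll are made up for illustration): the count is server-rendered, so it still works with JavaScript disabled, and a small script keeps it fresh.

```html
<!-- Server renders the current count; the plain link works with JS off. -->
<a href="/notifications" id="notif">Notifications (3)</a>

<script>
  // Progressive enhancement: poll a (hypothetical) JSON endpoint and
  // update the badge in place, no page refresh needed.
  async function refreshBadge() {
    const res = await fetch('/api/notifications');
    const { count } = await res.json();
    document.getElementById('notif').textContent = `Notifications (${count})`;
  }
  setInterval(refreshBadge, 30000); // every 30 seconds
</script>
```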
Claim: Most sites are mostly static content. For example, AirBNB or Grubhub. Those sites could be way faster than they are now if they were architected differently. Only when you check out do you need anything resembling an “app”. The browsing and searching is better done with a “document” model IMO.
Ditto for YouTube… I think it used to be more of a document model, but now it’s more like an app. And it’s gotten a lot slower, which I don’t think is a coincidence. Netflix is a more obvious example – it’s crazy slow.
To address the OP: for Sourcehut/Github, I would say everything except the PR review system could use the document model. Navigating code and adding comments is arguably an app.
On the other hand, there are things that are and should be apps: Google Maps, Docs, Sheets.
edit: Yeah now that I check, YouTube does the infinite scroll thing, which is slow and annoying IMO (e.g. breaks bookmarking). Ditto for AirBNB.
I’m glad to see some interesting ideas in the comments about achieving the dynamism without the bloat. A bit of Cunningham’s law in effect ;). It’s probably not easy to get such suggestions elsewhere since all I hear about is the hype of all the fancy frontend frameworks and what they can achieve.
Yeah SPA is a pretty new thing that seems to be taking up a lot of space in the conversation. Here’s another way to think about it.
There are three ways to manage state in a web app:
1. On the server only (what we did in the 90s)
2. On the server and on the client (sometimes called “progressive enhancement”; jQuery)
3. On the client only (SPA: React, Elm)
As you point out, #1 isn’t viable anymore because users need more features, so we’re left with a choice between #2 and #3.
We did #2 for a long time, but #3 became popular in the last few years.
I get why! #2 is legitimately harder – you have to decide where to manage your state, and managing state in two places is asking for bugs. It was never clear if those apps should work offline, etc.
But somehow #3 doesn’t seem to have worked out in practice. Surprisingly, hitting the network can be faster than rendering in the browser, especially when there’s a tower of abstractions on top of the browser. Unfortunately I don’t have references at the moment (help appreciated from other readers :) )
I wonder if we can make a hybrid web framework for #2. I have seen a few efforts in that direction but they don’t seem to be popular.
edit: here are some links, not sure if they are the best references:
https://news.ycombinator.com/item?id=13315444
https://adamsilver.io/articles/the-disadvantages-of-single-page-applications/
Oh yeah I think this is what I was thinking of. Especially on mobile phones, SPA can be slower than hitting the network! The code to render a page is often bigger than the page itself! And it may or may not be amortized depending on the app’s usage pattern.
https://medium.com/@addyosmani/the-cost-of-javascript-in-2018-7d8950fbb5d4
https://news.ycombinator.com/item?id=17682378
https://v8.dev/blog/cost-of-javascript-2019
https://news.ycombinator.com/item?id=20317736
A good example of #2 is Ur/Web. Pages are rendered server-side using templates which look very similar to JSX (but without the custom uppercase components part) and similarly desugar to simple function calls. Then at any point in the page you can add a dyn tag, which takes a function returning a fragment of HTML (using the same language as the server-side part, and in some cases even the same functions!) that will be re-run every time one of the “signals” it subscribes to is triggered. A signal could be triggered from inside an onclick handler, or even by an event happening on the server. This list of demos does a pretty good job of showing what you can do with it.
So most of the page is rendered on the server and will display even with JS off, and only the parts that need to be dynamic will be handled by JS, with almost no plumbing required to pass around the state: you just need to subscribe to a signal inside your dyn tag, and every time the value inside changes it will be re-rendered automatically.
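For readers who haven’t seen Ur/Web, here’s a rough JavaScript analogue of the signal/dyn pattern described above; this is a sketch of the idea, not Ur/Web’s actual API, and all the names are invented.

```js
// A signal holds a value and re-runs its subscribers whenever it changes.
function signal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get: () => value,
    set(next) { value = next; subscribers.forEach(fn => fn(value)); },
    subscribe(fn) { subscribers.add(fn); fn(value); }, // run once immediately
  };
}

// "dyn": bind one DOM node's contents to a signal; the rest of the
// server-rendered page stays static.
function dyn(node, sig, render) {
  sig.subscribe(value => { node.textContent = render(value); });
}

// Usage: only this one node is dynamic.
const votes = signal(0);
dyn(document.getElementById('votes'), votes, v => `${v} upvotes`);
// An onclick handler (or a server push) just calls votes.set(...).
```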
Thanks a lot for all the info, really helpful stuff.
This link may interest you as well: https://medium.com/@cramforce/designing-very-large-javascript-applications-6e013a3291a3
Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying. It’s nice when websites can display the current state of things without having to refresh.
On the other hand, it can be annoying when things update without a refresh, distracting you from what you were reading. Different strokes for different folks. Luckily it’s possible to fulfill both preferences, by degrading gracefully when JS is disabled.
I think the average user probably doesn’t care about the stack and the “bloat”, so it’s probably the case that client-side rendering will remain popular anytime it improves the UI/UX, even if it may not be necessary (plus cargo-culting lol).
The average user does care that browsing the web drains their battery, or that they have to upgrade their computer every few years in order to avoid lag on common websites. I agree that we will continue to see the expansion of heavy client-side rendering, even in cases where it does not benefit the user, because it benefits the companies that control the web.
Some of the UI benefits of SPA are really nice tbh. Reddit for example will have a notification icon that doesn’t update unless you refresh the page, which can be annoying. It’s nice when websites can display the current state of things without having to refresh.
Is this old reddit or new reddit? The new one is sort of SPA and I recall it updating without refresh.
Old reddit definitely has the issue I described, not sure about the newer design. If the new reddit doesn’t have that issue, that aligns with my experience of it being bloated and slow to load.
There are lots of ways to do this. Here’s two:
I would’ve thought the exact opposite. Can you explain?
In the case where you have lots of buttons like that, isn’t loading multiple completely separate DOMs and then reloading one or more of them somewhat worse than just using a tiny bit of JS? I try to use as little as possible, but I think that kind of dynamic interaction is the use case JS was originally made for.
Worse? Well, iframes are faster (marginally), but yes I’d probably use JavaScript too.
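The “tiny bit of JS” version might look something like this (the /upvote/42 endpoint is made up): a normal form that still works with JavaScript disabled, intercepted so the vote happens in the background.

```html
<!-- A plain form, so upvoting still works without JavaScript. -->
<form method="post" action="/upvote/42" class="upvote">
  <button type="submit">▲ <span class="count">17</span></button>
</form>

<script>
  // Intercept the submit and send it in the background instead,
  // so the user never loses their place on the page.
  document.querySelectorAll('form.upvote').forEach(form => {
    form.addEventListener('submit', async (event) => {
      event.preventDefault();
      const res = await fetch(form.action, { method: 'POST' });
      if (res.ok) {
        const count = form.querySelector('.count');
        count.textContent = Number(count.textContent) + 1;
      }
    });
  });
</script>
```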
I think most NoScript users will download tarballs and run ./configure && make -j6 without checking anything, so I’m not sure why anyone wants to turn off JavaScript anyway, except maybe because ad blockers aren’t perfect. That being said, I use NoScript…
I’m not sure if this would work, but an interesting idea would be to use checkboxes that restyle when checked: by loading a background image with a query or fragment part, the server is notified of which story is upvoted.
That’d require using GET, which might make it harder to prevent accidental upvotes. Could possibly devise something though.
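As a sketch of that checkbox idea (the /upvote URL is hypothetical, and as noted it relies on GET, accidental-upvote caveats included): in most browsers the background image is only fetched once the selector matches, so checking the box is what notifies the server.

```html
<style>
  /* Restyle the checked box, and let the rule itself ping the server:
     the background image is requested only when :checked applies. */
  #upvote-42:checked + label {
    color: orange;
    background-image: url("/upvote?story=42");
  }
</style>

<input type="checkbox" id="upvote-42">
<label for="upvote-42">▲ upvote</label>
```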
One thing I really miss with SPAs (when used as apps), aside from performance, is the slightly more consistent UI/UX/HI that you generally get with desktop apps. Most major OS vendors, and most OSS desktop toolkits, at least have some level of uniformity of expectation. Things like: there is a general style for most buttons and menus, there are some common effects (fade, transparency), and scrolling behavior is more uniform.
With SPAs… well, good luck! Not only is it often browser-dependent, but also matrixed with a myriad of JS frameworks and conventions, with render/load performance on top of it. I guess the web is certainly exciting, if nothing else!
I consider the “intended use-case” argument a bit weak, since for the last 20 years web developers, browser architects, and our tech overlords have been working on making the web work for applications (and data collection), and to be honest it does so most of the time. They can easily blame the annoyances like pop-ups and cookie banners on regulations and people who use ad blockers, but from a non-technical perspective, it’s a functional system. Of course, when you take a look underneath, it’s a mess, and we’re inclined to say that these aren’t real websites, when it’s really the incompetence of our operating systems that created the need to off-load these applications to a higher level of abstraction – something had to do it – and the web was just flexible enough to take on that job.
You’re implying it’s Unix’s fault that the web is a mess, but no other OS solved the problem either. Perhaps you would say that Plan 9 attempted to solve part of it, but that would only show that the web being what it is today isn’t solely down to a lack of OS features.
I’d argue that rather than being a mess due to the incompetence of the OS, it’s a mess due to the incremental adoption of different technologies for pragmatic reasons. Sadly it seems to be this way: even if Plan 9 was a better Unix from a purely technological standpoint, Unix was already so widespread that it wasn’t worth the effort to switch to something marginally better.
No, I don’t think Plan 9 would have fixed things. It’s still fundamentally focused on text processing rather than hypertext and universal linkability between objects and systems – i.e. the fundamental abstractions of an OS rather than just its features. Looking at what the web developed tells us which needs were unformulated and ultimately ignored by OS development initiatives, or rather set aside for their own in-group goals (Unix was a research OS, after all). It’s most improbable that anyone could have foreseen what developments would take place, and even more so that anyone will be able to fix them now.
From reading the interviewer’s question, I get the feeling that it’s easy for non-technical users to create a website using WordPress, but adding many plugins most likely leads to a lot of bloated JavaScript and CSS.
I would argue that it’s a good thing that non-technical users can easily create a website, but the tooling isn’t ideal. For many users a WYSIWYG editor that generates a static HTML page would be fine, but such a tool doesn’t seem to exist, or isn’t well known.
So I really see this as a tooling problem, which isn’t for users to solve; it’s for developers to create an excellent WordPress alternative.
I am not affiliated with this in any way, but I know of https://forestry.io/ which looks like what you describe. I find their approach quite interesting.
(for example, upvoting on sites like this: you don’t want to force a page refresh, potentially losing the user’s place on the page)
If a user clicks a particular upvote button, you should know where on that page it is located, and can use a page anchor in your response to send them back to it.
It’s not perfectly seamless, sadly, and it’s possible to set up your reverse proxy incorrectly enough to break applications relying on various http headers to get exactly the right page back.
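A sketch of that anchor trick with Express (recordVote and the routes are invented for illustration); each story on the page would carry a matching id like story-42:

```js
const express = require('express');
const app = express();

// After recording the vote, redirect back to the list with a fragment
// pointing at the story, so the browser scrolls the voter back to it.
app.post('/upvote/:id', (req, res) => {
  recordVote(req.params.id); // hypothetical persistence helper
  res.redirect(303, `/stories#story-${req.params.id}`);
});

app.listen(3000);
```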
I have my disagreements with Drew, but when it comes to hoping that the web will one day be obsolete, we’re on the same page. I’m just afraid we’re imagining different worlds.
I’m actually very curious about what both you and Drew mean by a ‘web-free world’. Personally, as much as I hate the web, I have a hard time imagining a world without it. Could you elaborate on what kind of ‘web-free world’ you hope for in the future?
In my case it’s a world where personal devices play much less of a role in day-to-day life, and people instead use neighbourhood or communal computer-places when they need to send a message or compute something. The goal would be to counteract centralisation by building a distributed multi-user system on a geographical basis. But it’s more of a dream-fantasy than a real goal.
I am not saying that you should not have that dream-fantasy, or that your opinion is incorrect, but to share my own: that would be my nightmare-hell. Personal computing machines (with, or less preferably without, internet) are one of my favorite things in life. I would carry around an abacus and pen/paper if electronic computers were banned 😂
Just because I find the idea interesting: what makes personal computing machines one of your favorite things in life? I obviously have a lot to do with personal computers, and enjoy using them, but I still find the idea of a different relation to computers interesting. In some sense I’d imagine it would be a more social way to compute, since such a computer-place would be run by people you know and who could help you. Just like a cafe (since that idea was mentioned, though I’d like to think of them as hacker spaces performing a public service), there are more regular and less regular customers…
Sorry I am late at getting back to this. I enjoy personal computers because I love tinkering with ideas and learning. A personal computer allows me to immediately explore any idea I am playing with in my head either by writing code that explores that idea or looking up facts relating to that idea. I am also not great at arithmetic or memorization and having a computer on me at all times greatly mitigates my lack of ability.
What about those of us who don’t want to go and use somebody else’s computer in what sounds like an Internet Cafe?
you don’t have to use a computer at all!
Well, you’re the reason why it’s a “dream-fantasy” goal.
But if I wanted to be pragmatic, I’d say it’s not either-or; it’s a movement to make personal computers obsolete by offering an alternative.