You know, one could capture these “colors” in a type system. One could see the different contexts in which the functions run as, I dunno, “side effects” separate from the intended action of the function. If you had these “wrapper types,” you’d be unable to inadvertently run a function that interacts with Server contexts in a Client context. In cases where either context could provide the same functionality – possibly implemented in different ways – you could implement a kind of “class” for those context types and make your functions generic across their “colors.”
They’d need some kind of catchy, short name, though. Something that would definitely not scare people.
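If anyone wants the joke spelled out: a phantom type parameter is enough for a checker to catch the mix-up. The sketch below is entirely mine; the names Server, Client, and Effect are invented for illustration.

```python
from typing import Generic, TypeVar

class Server: ...
class Client: ...

Ctx = TypeVar("Ctx")

class Effect(Generic[Ctx]):
    """A 'colored' computation: the phantom Ctx records where it may run."""
    def __init__(self, run):
        self.run = run

def query_db(sql: str) -> Effect[Server]:
    # Only meaningful in a Server context.
    return Effect(lambda: f"rows for {sql!r}")

def on_click(effect: Effect[Client]) -> None:
    # Only accepts Client-colored effects.
    effect.run()

# on_click(query_db("select 1"))  # a type checker rejects this color mix
```

At runtime nothing stops you, of course; the point is that a static checker now has something to refuse.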
Nice – I hadn’t looked at Unison before. That’s definitely an interesting way to encode and name these mysterious container types! Thanks for the link.
Completely agree. I think you could actually have a wonderful experience with this sort of architecture if you had the type system and ecosystem to back it up.
I’m aware there are a ton of alternative implementations here – I chose it as an example because it’s easy to understand, with a little more complexity than “fizzbuzz.” If it helps, imagine it’s actually weather simulations.
This is exactly what I’m thinking. You are constructing finite lists, so construct the list then operate on it.
let hailstone_count = hailstone(n).length
let hailstone_max = hailstone(n).fold(max)
If you cannot construct the whole list, or reconstruct it, then deal with it as a streaming problem and just combine a bunch of traversals on the stream.
With this, your hailstone function doesn’t have to know anything about how it is going to be consumed, and is therefore simpler and more generic.
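In Python, that construct-then-operate approach might look like the sketch below (the generator itself is my own minimal Collatz implementation, not code from the article):

```python
def hailstone(n):
    """Yield the hailstone (Collatz) sequence from n down to 1."""
    while n != 1:
        yield n
        n = 3 * n + 1 if n % 2 else n // 2
    yield 1

seq = list(hailstone(6))    # construct the list once: [6, 3, 10, 5, 16, 8, 4, 2, 1]
hailstone_count = len(seq)  # then derive each result from it
hailstone_max = max(seq)
```

And because it is a generator, the same function serves the streaming case: consumers can fold over hailstone(n) directly without materializing the list.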
If you’re working in a less strongly-typed language than Rust, it’s a good idea to reach for a higher-order function in these cases
I’m not clear on why higher-order functions are for less strongly-typed languages. Seems like the main benefit strategies have over HOFs is that you can restrict the API to only “strategy functions”, but that can be encoded in the type system too.
My reasoning there is that if you are accepting an object as a strategy in a duck-typed language, you then get the “excitement” of duck typing in addition to needing to null-check the argument. Consider:
def hailstone_numbers(recorder, n):
    ...
    if recorder is not None:
        try:
            recorder.record(n)
        except AttributeError:
            pass
With a HOF, you only need the external null-check:
def hailstone_numbers(record, n):
    ...
    if record is not None:
        record(n)
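For the curious, a runnable version of the HOF flavor could look like this; the elided body is filled in with the standard Collatz step, purely as an assumption for illustration:

```python
def hailstone_numbers(record, n):
    """Walk the hailstone sequence, passing each term to an optional callback."""
    while n != 1:
        if record is not None:
            record(n)
        n = 3 * n + 1 if n % 2 else n // 2
    if record is not None:
        record(n)

seen = []
hailstone_numbers(seen.append, 6)   # seen is now [6, 3, 10, 5, 16, 8, 4, 2, 1]
hailstone_numbers(None, 6)          # the single null-check makes this a no-op
```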
This is an old brain-dump, but I’m posting it in response to https://lobste.rs/s/mxcmxg/abstraction_is_okay_magic_is_not. I should definitely update it, and add whatever “magic” is. I don’t think “magic” is precisely the same as any of my three categories.
It’s a good brain-dump, and I agree with it. I will note that my post (I wrote the ‘Abstraction is Okay’ article) was not exactly aimed at defining what abstraction is. Instead it’s just a musing on a pattern I’ve experienced as a consumer many times and how I consciously avoid producing it myself.
I have concrete examples of “magic,” but I’d rather not share them because they’d tend to be very recognizable to the people who wrote them. Here’s an anonymized example I debated including in the post:
@Wire(WidgetModel)
@NormalPaths(exclude = AsyncPaths)
@VarySerializer(determinant = ClientAuthentication)
class WidgetService extends BasicModelService {
    // No code necessary!
}
What does this do? What does it accept? How do you modify it? It’s the kind of thing that someone presents and says proudly: “Look at how easy it is to implement a service!” but its name quickly becomes a portent of dread the minute they move on to another project or another job.
“Hey, Beth, I was wondering how this thing in MagicalWeb works…”
“Oh, man. Don’t worry about that crap, that’s Chuck’s stuff. He’s been gone for ages, and we’re trying to kill it. Add your path to the if statement in SkipThatCrap.java.”
I haven’t read it yet, but the author of An Elegant Puzzle also has a new book called Staff Engineer: Leadership Beyond the Management Track. Unfortunately, it’s self-published, so it’s probably wise to wait for reviews.
Until there are Linux equivalents, not just “clones,” of the Adobe toolchain in Linux, this is to be expected. You can hire professional designers and let them use the tools they need, or you can say “GIMP and Scribus only!” and turn out terrible looking publications because most designers won’t work with those tools.
Alternately, they could use LaTeX or troff and produce the sorts of documents that those tools are great at producing.
You can hire professional designers and let them use the tools they need…
Never met these professional designers you talk of. Far more often it’s just status signalling jerks who use the most expensive hardware and software because they do the “social media magic” or “glossy brochure secret sauce” everyone is so crazy about lately.
I’ve met several video specialists who did not understand their formats, nor did they have any meaningful video editing skills.
I’ve met UI designers who were unable to explain the white space rules for a UI kit they “designed”.
I’ve met PR staff who did not even know how they’d licensed the illustrations for a book they’d published, only to find that they were - of course - not compliant.
I’d take someone who can use Scribus, GIMP, and Inkscape over anyone who “needs” Adobe products, because the latter only guarantees higher costs, not better results.
Also, the brochure could’ve been set just as effectively in LibreOffice on Linux. There is nothing special about it.
Never met these professional designers you talk of. Far more often it’s just status signalling jerks who use the most expensive hardware and software because they do the “social media magic” or “glossy brochure secret sauce” everyone is so crazy about lately.
You need to expand your social circle. I e-know plenty of people who work in graphic design who are both professional and passionate about their work.
Overall, I think this event is the worst I’ve experienced with Amazon. It took twenty-two hours for full recovery of some systems, and the analysis here indicates that this was at least partially because of some really poor architectural choices, like having every server in the farm run a thread for each other server in the farm, and running so many critical subsystems on the same substrate with no damage-control measures.
This is a good talk, and I think it’s a positive that we’re getting more features in terminals. Far from being bloat, I find that better font handling helps me avoid eye strain; viewing non-ASCII text is actually possible (although sometimes still ugly); and I can eliminate tmux/screen, which I have always found annoying to configure and never perfectly ergonomic. Even frivolous-seeming features like ligatures can help make symbol-heavy languages like Rust more readable.
Unfortunately, this doesn’t seem to conform to any previously-defined standards (JMESPath, JSONiq, XPath) or existing tools (jq, json-query, etc.), so this will go on the pile of Yet Another Data Querying Syntax.
Whenever I encounter a post about addresses, I like to introduce people to the wonders of Hickory, North Carolina, whose baffling streets are an artifact of a benighted attempt to “systematize” addresses within the city; Tokyo, which has many unnamed streets and uses neighborhoods and blocks as its addressing basis; and Ho Chi Minh City, which does what it feels like (e.g. “3-02 14/A Hẻm 46/2,” or a building with an address that doesn’t actually front the road in the address? entirely possible!).
I always felt like SPAs were created to make data on pages load faster, while simplifying for mobile web, but they ended up making development more complicated, introduced a bunch of bloated frameworks, required tooling to trim the bloat, and ultimately tried to unnecessarily rewrite HTTP.
Yeah, we started with “no need to load data for the header twice” and ended up with bloated multi-megabyte javascript blobs with loading times in tens of seconds. :(
I think the focus shifted more from “need to load data faster” to “need to be able to properly architecture out frontend systems”.
Even though I still “just use jQuery like it’s 2010”, I can’t deny there are problems with the ad-hoc DOM manipulation approach. One way to see this is that the DOM is one big global state thing that various functions are mutating, which can lead to problems if you’re not careful.
So some better architecture in front of that doesn’t strike me as a bad thing as such, and sacrificing some load speed for that is probably acceptable too.
That being said, “4.09 MB / 1.04 MB transferred” for the nytimes.com homepage (and that’s the minified version!) is of course fairly excessive and somewhat ridiculous. I’ve always wondered what’s in all of that 🤔
“need to be able to properly architecture out frontend systems”
An absolute shitload of websites are built with React that could be built entirely with server rendered HTML, page navigations, and 100 lines of vanilla JS per page. Not everything is Google Docs.
Recent example: I recently was apartment hunting. All the different communities had SPAs to navigate the floor plans, availability, and application process. Fancy pop up windows when you click a floor plan, loading available apartments only when clicking on “Check Availability” and so on.
But why? The pop up windows just made it incredibly obnoxious to compare floor plans. They were buggy on mobile. The entire list of available units for an apartment building could have been a few kilobytes of server rendered HTML or embedded JSON.
Every single one of those websites would have been better using static layouts, page navigations, and regular HTML forms.
One reason for a lot of that is that people want to build everything against an API so they can re-use it for the web frontend, Android app, iOS app, and perhaps something else. I wrote a comment about this before which I can’t find right now, but a large reason for all of this SPA stuff is because of mobile.
Other than that, yeah, I feel most websites would be better with just server-side rendering of HTML with some JS sprinkled on top. But often it’s not just about the website.
I don’t think an API and server-side rendering have to be incompatible, you could just do internal calls to the API to get the data to render server-side.
That’s what we’re doing. We even make the call to the API without HTTP and JSON serialization, but it’s still a call to an API that will be used by the mobile app.
Having done this, I feel this is the way to go for most apps. Even if the backend doesn’t end up calling a web API, just importing a library or however you want to interface is fine too, if not preferable. I’m a big fan of 1 implementation, multiple interface formats.
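A minimal sketch of that “1 implementation, multiple interface formats” shape, with every name and data value invented for illustration:

```python
import json

def list_units(building_id):
    # The single implementation; in real life this would query a database.
    return [{"unit": "4B", "sqft": 800}, {"unit": "7A", "sqft": 950}]

def api_units(building_id):
    # JSON interface, e.g. for the mobile apps.
    return json.dumps(list_units(building_id))

def page_units(building_id):
    # Server-rendered HTML interface: an internal call, no HTTP round-trip.
    items = "".join(f"<li>{u['unit']} ({u['sqft']} sqft)</li>"
                    for u in list_units(building_id))
    return f"<ul>{items}</ul>"
```

The web route and the JSON endpoint share one data function; only the serialization differs.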
I’ve worked in that industry. A large portion of it is based on the need to sell the sites. Property management companies are pretty bad at anything “technical,” and they will always choose something flashy over functional. A lot of the data is already exposed via APIs and file shipment, too, so AJAX-based data loading with a JavaScript front end comes “naturally.”
I agree. So, to answer the titular question, I would answer: websites and native applications.
Developing “stuff” that feels like a website fitting the HTTP paradigm is mostly straightforward, pleasant, inexpensive, and comparatively unprofitable.
Developing “stuff” that feels like an application fitting the native OS paradigm is relatively straightforward, occasionally pleasant, often expensive, and comparatively unprofitable.
If we’re limiting our scope to a technical discussion, it seems straightforward to answer the question. Of course, for better or worse, we don’t live in a world where tech stack decisions are based on those technical discussions alone; the “comparatively unprofitable” attribute eclipses the other considerations.
That’s how I remember it. I also remember building SPAs multiple years before React was announced, although back then I don’t recall using the term SPA to describe what they were.
Does the Internet ever like multiparadigm languages? I’m trying to think of one that it does. Even with something like OCaml, everyone quickly says, “and don’t use the OO part!”
Does the Internet ever look back fondly on older, dominant languages? There’s a narrative that the older languages must be defective because we’ve all moved on; this belief is necessary to sustain the orthodoxy of progress.
These are meta questions that apply not so much to C++ as to languages in general.
I will freely admit that I prefer opinionated languages over anything-goes languages, and I think multiparadigm languages fall into the anything-goes category. I find that the more paradigms I need to hold in my head, the more likely I am to be frustrated when I’m debugging an issue in production concerning 5-year-old code. I think this is because:
My actual on the job greenfield development experience is close to nil.
I spend 80% of my on the job time reading and detangling old code.
As a result the less cognitive overhead a language forces onto me the better I like it. It’s the same reason why I prefer explicit to implicit code.
I think the core issue is that multiparadigm languages are rarely orthogonal – instead of merging functionality from different paradigms (where it makes sense) they just have them living alongside each other.
That’s why I believe that there is nothing inherently “larger” or “anything goes” about multiparadigm languages; it’s just that many of the existing examples did a poor job.
One trivial example is if-then-else statements, ternary expressions and pattern matching – there is no reason why a language needs to have all three! Just merge them until you have “one way of doing things”.
I don’t know who counts as “the Internet,” but it’s definitely possible to appreciate older languages that are well-designed. For example, Ada is intriguing (I think it’s even had a tiny renaissance as people revisit it) and, of course, Lisp lives on. Even languages like Fortran are, if not loved, then at least respected. C also isn’t hated, although experience has shown us that it has flaws (memory safety, undefined behavior, and so on).
As for the first question, it depends on what you mean by “multiparadigm.” Python is certainly a hybrid language. You could argue that a language like Rust is a multiparadigm language with procedural and functional aspects. In fact, you can use structs in Rust and Go in a way that approximates the hybrid approaches of Python.
The issue with C++ (and some other multiparadigm languages) is that it feels like the implications of the combinations of features aren’t thought through (or, in some cases, even thought of).
My wife and I started studying Mandarin together using a “gamified” app, Super Chinese. It’s a fun little break at the end of the day as we compete to get higher scores on pronunciation and tests. 很有趣!
I use org-mode, or simple markdown files for short and self-contained notes. However, if I have a large project that spans a long time (language learning, history notes, etc.) I actually use a local instance of gitit. It’s nice to have a dedicated interface with search and so on, and you can edit pages in your text editor of choice as well.
I’ve read so many k8s articles in the past years that I feel like I almost have intimate knowledge of this piece of software, even though I have never used it in production. It’s obviously pretty strong at allowing applications to scale up whenever demand is high.
Kubernetes will automatically manage your resources for you. If configured correctly, it can survive a “node” (a server in the cluster) becoming inaccessible, with no input required from the user. Through some service providers, Kubernetes can temporarily scale workloads up to massive levels, which can be incredibly useful if your service becomes very popular and you suddenly gain a lot of traffic.
Running a Kubernetes system means you can reduce your downtime to an insanely small level, and increase your computational capacity to an insanely large level. While this may not seem useful up front, you may consider it essential down the line.
Are there people here who have designed and deployed applications on k8s for non-enterprise use, and have actually gotten to use the most important features of k8s: preventing downtime and scaling up (due to large increases in traffic/capacity)? I’m talking here about blogs, web services, apps, mobile apps, mostly deployed and maintained by a single developer.
I have been working on Kubernetes for my company for about two years. It’s had a lot of benefits for our use cases, but I would never use it for a one-developer project. It’s simply not worth it – the things it solves for you aren’t going to be problems you have. I’d even say things like downtime and scaling up aren’t as big a deal at that level of functionality; I’d think that it’s often better for a blog to fall over than to scale at the level that some of the apps I work on scale, simply due to cost. (Are you okay with paying $900 for one day’s traffic spike?) Some of the things that it does don’t even make sense if you’re not working in a particular type of environment.
As a sole developer, you have near-zero communications overhead; you can just remember or write down the state of play, and everyone who has to maintain it will automatically know. You never (e.g.) accidentally try to run two deploys at the same time, even if you don’t implement a mutex around your deploy process. Implementing k8s can consume all of your team-of-one’s output for weeks.
Adding Kubernetes (or similar kinds of complex system) can be fantastic for big teams. Having a single source of truth re: the status of the app is a tremendous advantage, and it only takes a small fraction of the team’s output to implement. The cost/benefit is totally different.
This is one of those things where I simply don’t understand why people insist that empirical science gives a definitive answer, or why people even care what the science says. I switch freely between light and dark schemes, and I do so based on the evidence of… my eyes. If it’s bright in my office I find dark screens harder and more tiring to read. If it’s dark, I find light screens harder to read. I prefer dark, overall, so I keep my office as dark as possible, usually.
The question of which is “proven” to be superior, in lab conditions not using my eyes, is irrelevant. Having statistics doesn’t make it true.
To some, science is religion, so invoking capital-S science justifies their subjective opinions, as if that’s what it’s for. To the rest of us, it looks quite silly and we can see right through it.
I blame Solarized for even putting forth the notion that color schemes are anything other than preference.
It seems like most people need permission from science to have any beliefs at all these days. I think articles like this shake people out of it a bit, but people carry that need with them to everything that’s not obviously subjective.
I don’t think there’s anything wrong with bringing science into color schemes; in fact, most people should try doing a bit more of it, because programming experience and aesthetic pleasure, while connected, aren’t the same thing. Try using acme’s colors or an otherwise minimal scheme, or turn highlighting off entirely every once in a while. Maybe try rainbowy semantic highlighting if you’re a minimalist already.
It seems like most people need permission from science to have any beliefs at all these days.
I think it’s the opposite, no? Many people hold beliefs despite the science, heck, some chunk of them believe that if something rhymes it has to be true.
The article is trying to answer what is better for some narrow definition of better, one that matches the author’s beliefs. If it were written in e-simple, it would have had a different message. But, as you and many others say, color schemes are a highly subjective area — better is what we claim is better.
some chunk of them believe that if something rhymes it has to be true.
Maybe we’re getting off-topic here, but I’ve never known anyone like that. Even the most religious people I know of would warn you that the devil has catchy songs, maybe even more catchy songs than God does.
This is something I’ve been wondering about, because I’m one of these people who feels the urge to tell others about how light themes are actually better than they think. What’s the impulse?
I think it comes down to culture. Dark editor themes seem to be the norm nowadays, and using a light theme actually draws attention. New colleagues will often make surprised comments when they see my editor—in the same way they’d comment about someone not being able to touch type. They don’t outwardly criticize, but it’s clear it’s not something they expect from an experienced developer.
So, as a result, it gets me thinking a lot about why I’m using a light theme despite it being a bit looked down upon. It’s very possible this is a common motivation for speaking out about the light/dark science!
Paid hosting sign-ups are closed there. Which is too bad; I don’t really want to self-host this, since it only works with Postgres, and I have no other reason to set up, run, and maintain Postgres.
That does tend to be annoying, but my experience with maintaining postgres for my email setup so far has been to 1) set it up, 2) configure backups &c and 3) forget about it. So if you’re willing to put in the up-front work it’s probably pretty painless.
I’d never heard of this, but I just installed it, and it’s cool. It’s light on resources, has a Debian package repo and clear installation instructions, and it feels faster than ttrss. I’ve been using and advocating ttrss for years, but this might be my switch moment. The content parsing is better and it does not require an app on mobile. With ttrss I cannot subscribe to a feed in the iOS app…
The only thing that keeps me from ditching ttrss is that miniflux seems to have no ability to adjust the sorting. I prefer to have the newest articles on top on the first page.
I have written a few microservices in both Go and Rust (including translating some from Go to Rust). There are benefits, particularly over Python (my previous go-to language), but you must recognize that you’re trading a decent amount of maturity for a number of rough edges. In the cases where I am using Rust, that’s fine – but I am also consciously using it for small, self-contained components until the web and database stories “firm up” some. If you’re curious, the frameworks I use are tokio and warp. warp is a higher-level abstraction on top of hyper. For database work I have had some success with sqlx.
Interesting. I would have the exact opposite reaction. Go will make you think about raw pointers and that kind of thing, whereas Rust you can write at a high level, like Python.
Totally agree: use the one you know, unless you have a desire to learn a new one.
If pointers are too much cognitive load then Rust’s lifetimes and ownership juggling is going to be way worse. I’d say that the comparison is more that Python and Go are not particularly functional languages, while Rust obviously is (and that’s the appeal of it to people who like functional languages).
If Rust is faster for a given use case that’s a more like-for-like basis for comparison, but then you might want to use Fortran to go even faster depending on the use case. ;-)
Admittedly I’ve invested time in becoming comfortable with Rust, but I actually concur – after gaining some familiarity, Rust feels much higher level than Go.
Pointers in Go are pretty trivial to work with, and I say this coming to Go from Ruby. Basically your only concern with pointers is “they can be nil”; otherwise you barely need to differentiate between pointers and structs if you’re writing idiomatically (that is a gross oversimplification and there are performance implications at times, but it’s extremely unlike C/C++ in this respect).
They’d need some kind of catchy, short name, though. Something that would definitely not scare people.
Thank you for emphasizing exactly how much the word “monad” has become a fnord. You are describing codensity monads.
Abilities? :-)
https://elm-pages.com/ ?
Hm, I’d say the example is more specific: it is “internal iteration”, rather than general strategy.
It probably is important to note, for pedagogical purposes, that in this case external iteration works fine.
I don’t think finiteness plays a role here. iter::successors would work just fine for an infinite list as well.
Obviously, this is a matter of taste. :)
…you can say “GIMP and Scribus only!” and turn out terrible looking publications because most designers won’t work with those tools.
There are people who produce excellent quality output using those tools, like David Revoy. At least some of those people are even available for hire.
You need to expand your social circle.
Yeah, might be a central Europe thing, because that’s what sits on the other side of the table when we open a public tender or try hiring.
And don’t get me started on their salesmen…
This is honestly fantastic.
(sin(t/i)+tan(i/t))/(x*y)
Unfortunately, this doesn’t seem to conform to any previously-defined standards (JMESPath, JSONiq, XPath) or existing tools (jq, json-query, etc.), so this will go on the pile of Yet Another Data Querying Syntax.
Whenever I encounter a post about addresses, I like to introduce people to the wonders of Hickory, North Carolina, whose baffling streets are an artifact of a benighted attempt to “systematize” addresses within the city; Tokyo, which has many unnamed streets and uses neighborhoods and blocks as its addressing basis; and Ho Chi Minh City, which does what it feels like (e.g. “3-02 14/A Hẻm 46/2,” or a building with an address that doesn’t actually front the road in the address? entirely possible!).
I always felt like SPAs were created to make data on pages load faster, while simplifying for mobile web, but they ended up making development more complicated, introduced a bunch of bloated frameworks, required tooling to trim the bloat, and ultimately tried to unnecessarily rewrite HTTP.
Yeah, we started with “no need to load data for the header twice” and ended up with bloated multi-megabyte javascript blobs with loading times in tens of seconds. :(
I think the focus shifted from “need to load data faster” to “need to be able to properly architect our frontend systems”.
Even though I still “just use jQuery like it’s 2010”, I can’t deny there are problems with the ad-hoc DOM manipulation approach. One way to see this is that the DOM is this big global state thing that various functions are mutating, which can lead to problems if you’re not careful.
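As a toy illustration of that failure mode (a plain object standing in for a DOM node, names made up): two independent handlers both assume they own the same element, and the second silently clobbers the first.

```javascript
// Stand-in for a DOM node: one big piece of global, mutable state.
const statusEl = { textContent: "" };

function showSaving() {
  statusEl.textContent = "Saving…";
}

function showNotifications(count) {
  // A second feature reuses the same element, silently overwriting
  // whatever the first one wrote there.
  statusEl.textContent = `${count} new notifications`;
}

showSaving();
showNotifications(3);
console.log(statusEl.textContent); // "3 new notifications" – the save status is gone
```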
So some better architecture in front of that doesn’t strike me as a bad thing as such, and sacrificing some load speed for that is probably acceptable too.
That being said, “4.09 MB / 1.04 MB transferred” for the nytimes.com homepage (and that’s the minified version!) is of course fairly excessive and somewhat ridiculous. I’ve always wondered what’s in all of that 🤔
An absolute shitload of websites are built with React that could be built entirely with server rendered HTML, page navigations, and 100 lines of vanilla JS per page. Not everything is Google Docs.
Recent example: I was apartment hunting. All the different communities had SPAs to navigate the floor plans, availability, and application process. Fancy pop-up windows when you click a floor plan, loading available apartments only when clicking “Check Availability,” and so on.
But why? The pop up windows just made it incredibly obnoxious to compare floor plans. They were buggy on mobile. The entire list of available units for an apartment building could have been a few kilobytes of server rendered HTML or embedded JSON.
Every single one of those websites would have been better using static layouts, page navigations, and regular HTML forms.
One reason for a lot of that is that people want to build everything against an API so they can re-use it for the web frontend, Android app, iOS app, and perhaps something else. I wrote a comment about this before which I can’t find right now, but a large reason for all of this SPA stuff is mobile.
Other than that, yeah, I feel most websites would be better with just server-side rendering of HTML with some JS sprinkled on top. But often it’s not just about the website.
I don’t think an API and server-side rendering have to be incompatible, you could just do internal calls to the API to get the data to render server-side.
That’s what we’re doing. We even call the API without HTTP and JSON serialization, but it’s still a call to the same API that will be used by the mobile app.
Having done this, I feel this is the way to go for most apps. Even if the backend doesn’t end up calling a web API, just importing a library or however you want to interface is fine too, if not preferable. I’m a big fan of 1 implementation, multiple interface formats.
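As a minimal sketch of that “1 implementation, multiple interface formats” idea (hypothetical names, using Go’s standard library purely for illustration): the JSON handler and the HTML handler both call the same in-process function, so nothing goes over HTTP twice.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Apartment is a hypothetical domain type shared by both interfaces.
type Apartment struct {
	Unit string `json:"unit"`
	Rent int    `json:"rent"`
}

// listApartments is the single implementation; both handlers call it
// directly instead of one making an HTTP round trip to the other.
func listApartments() []Apartment {
	return []Apartment{{Unit: "3B", Rent: 1400}, {Unit: "7A", Rent: 1650}}
}

// apiHandler serves the mobile apps as JSON.
func apiHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(listApartments())
}

// pageHandler renders plain server-side HTML from the same data.
func pageHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "<ul>")
	for _, a := range listApartments() {
		fmt.Fprintf(w, "<li>%s – $%d/mo</li>\n", a.Unit, a.Rent)
	}
	fmt.Fprintln(w, "</ul>")
}

func main() {
	// Wire both interfaces to the shared implementation. In a real
	// server you would then call http.ListenAndServe(":8080", nil).
	http.HandleFunc("/api/apartments", apiHandler)
	http.HandleFunc("/apartments", pageHandler)

	// Exercise the HTML handler in-process to show the shared data path.
	rec := httptest.NewRecorder()
	pageHandler(rec, httptest.NewRequest("GET", "/apartments", nil))
	fmt.Print(rec.Body.String())
}
```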
I’ve worked in that industry. A large portion of it is based on the need to sell the sites. Property management companies are pretty bad at anything “technical,” and they will always choose something flashy over functional. A lot of the data is already exposed via APIs and file shipment, too, so AJAX-based data loading with a JavaScript front end comes “naturally.”
I agree. So, to answer the titular question: websites and native applications.
Developing “stuff” that feels like a website fitting the HTTP paradigm is mostly straightforward, pleasant, inexpensive, and comparatively unprofitable.
Developing “stuff” that feels like an application fitting the native OS paradigm is relatively straightforward, occasionally pleasant, often expensive, and comparatively unprofitable.
If we’re limiting our scope to a technical discussion, it seems straightforward to answer the question. Of course, for better or worse, we don’t live in a world where tech stack decisions are based on those technical discussions alone; the “comparatively unprofitable” attribute eclipses the other considerations.
That’s how I remember it. I also remember building SPAs multiple years before React was announced, although back then I don’t recall using the term SPA to describe what they were.
Does the Internet ever like multiparadigm languages? I’m trying to think of one that it does. Even with something like OCaml, everyone quickly says, “and don’t use the OO part!”
Does the Internet ever look back fondly on older, dominant languages? There’s a narrative that the older languages must be defective because we’ve all moved on; this is a necessary belief for the orthodoxy of progress to be held.
These are meta questions that apply not so much to C++ as to languages in general.
I see love for plain C.
I don’t think you can write off C++ hate as hipsterism.
I will freely admit that I prefer opinionated languages over anything-goes languages, and I think multiparadigm languages fall into the anything-goes category. I find that the more paradigms I need to hold in my head, the more likely I am to be frustrated when I’m debugging an issue in production in five-year-old code.
As a result, the less cognitive overhead a language forces onto me, the better I like it. It’s the same reason I prefer explicit code to implicit code.
I think the core issue is that multiparadigm languages are rarely orthogonal – instead of merging functionality from different paradigms (where it makes sense) they just have them living alongside each other.
That’s why I believe that there is nothing inherently “larger” or “anything goes” about multiparadigm languages, it’s just many of the existing examples did a poor job.
One trivial example is if-then-else statements, ternary expressions and pattern matching – there is no reason why a language needs to have all three! Just merge them until you have “one way of doing things”.
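Rust is one existing example of that kind of merging: `if` is itself an expression, so there’s no separate ternary operator, and `match` subsumes long if/else chains. A small sketch (function and values are made up):

```rust
fn describe(n: i32) -> &'static str {
    // No ternary operator needed: `if` is already an expression.
    let sign = if n < 0 { "negative" } else { "non-negative" };
    println!("{n} is {sign}");

    // Pattern matching covers what would otherwise be an if/else chain.
    match n {
        0 => "zero",
        n if n % 2 == 0 => "even",
        _ => "odd",
    }
}

fn main() {
    println!("{}", describe(0));  // zero
    println!("{}", describe(-4)); // even
    println!("{}", describe(7));  // odd
}
```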
I don’t know who counts as “the Internet,” but it’s definitely possible to appreciate older languages that are well-designed. For example, Ada is intriguing (I think it’s even had a tiny renaissance as people revisit it) and, of course, Lisp lives on. Even languages like Fortran are, if not loved, then at least respected. C also isn’t hated, although experience has shown us that it has flaws (memory safety, undefined behavior, and so on).
As for the first question, it depends on what you mean by “multiparadigm.” Python is certainly a hybrid language. You could argue that a language like Rust is a multiparadigm language with procedural and functional aspects. In fact, you can use structs in Rust and Go in a way that approximates the hybrid approaches of Python.
The issue with C++ (and some other multiparadigm languages) is that it feels like the implications of the combinations of features aren’t thought through (or, in some cases, even thought of).
My wife and I started studying Mandarin together using a “gamified” app, Super Chinese. It’s a fun little break at the end of the day as we compete to get higher scores on pronunciation and tests. 很有趣!
I use org-mode, or simple markdown files for short and self-contained notes. However, if I have a large project that spans a long time (language learning, history notes, etc.) I actually use a local instance of gitit. It’s nice to have a dedicated interface with search and so on, and you can edit pages in your text editor of choice as well.
I’ve read so many k8s articles in the past few years that I feel like I have almost intimate knowledge of this piece of software, even though I have never used it in production. It’s obviously pretty strong at allowing applications to scale up whenever they’re in high demand.
Are there people here who have designed and deployed applications on k8s for non-enterprise use, and have actually gotten to use the most important features of k8s: preventing downtime and scaling up (in response to big increases in traffic/capacity)? I’m talking here about blogs, web services, apps, and mobile apps, mostly deployed and maintained by a single developer.
I have been working on Kubernetes for my company for about two years. It’s had a lot of benefits for our use cases, but I would never use it for a one-developer project. It’s simply not worth it – the things it solves for you aren’t going to be problems you have. I’d even say things like downtime and scaling up aren’t as big a deal at that scale; it’s often better for a blog to fall over than to scale the way some of the apps I work on scale, simply due to cost. (Are you okay with paying $900 for one day’s traffic spike?) Some of the things it does don’t even make sense outside a particular type of environment.
As a sole developer, you have near-zero communication overhead; you can just remember / write down the state of play, and everyone who has to maintain it will automatically know. You never (e.g.) accidentally run two deploys at the same time, even if you don’t implement a mutex around your deploy process. Implementing k8s can consume all of a team-of-one’s output for weeks.
Adding Kubernetes (or similarly complex systems) can be fantastic for big teams. Having a single source of truth regarding the status of the app is a tremendous advantage, and it only takes a small fraction of the team’s output to implement. The cost/benefit is totally different.
This is one of those things where I simply don’t understand why people insist that empirical science gives a definitive answer, or why people even care what the science says. I switch freely between light and dark schemes, and I do so based on the evidence of… my eyes. If it’s bright in my office I find dark screens harder and more tiring to read. If it’s dark, I find light screens harder to read. I prefer dark, overall, so I keep my office as dark as possible, usually.
The question of which is “proven” to be superior, in lab conditions not using my eyes, is irrelevant. Having statistics doesn’t make it true.
Exactly, especially when it is debatable whether the science is even answering the question we find interesting.
To some, science is religion, so invoking capital-S science justifies their subjective opinions, as if that’s what it’s for. To the rest of us, it looks quite silly and we can see right through it.
I blame Solarized for even putting forth the notion that color schemes are anything other than preference.
It seems like most people need permission from science to have any beliefs at all these days. I think articles like this shake people out of it a bit, but people carry that need with them to everything that’s not obviously subjective.
I don’t think there’s anything wrong with bringing science into color schemes; most people should probably try a bit more of it, because programming experience and aesthetic pleasure, while connected, aren’t the same thing. Try using acme’s colors or another minimal scheme, or turn highlighting off entirely every once in a while. Maybe try rainbow-y semantic highlighting if you’re already a minimalist.
I think it’s the opposite, no? Many people hold beliefs despite the science, heck, some chunk of them believe that if something rhymes it has to be true.
The article is trying to answer what is better for some narrow definition of “better” that matches the author’s beliefs. If it were written in E-Prime, it would have had a different message. But, as you and many others say, color schemes are a highly subjective area — better is what we claim is better.
Maybe we’re getting off-topic here, but I’ve never known anyone like that. Even the most religious people I know of would warn you that the devil has catchy songs, maybe even more catchy songs than God does.
I don’t know anyone who claims it at the face value, but there are always subtle psychological effects: https://apoorvupreti.com/if-it-rhymes-it-must-be-true/
This is something I’ve been wondering about, because I’m one of these people who feels the urge to tell others about how light themes are actually better than they think. What’s the impulse?
I think it comes down to culture. Dark editor themes seem to be the norm nowadays, and using a light theme actually draws attention. New colleagues will often make surprised comments when they see my editor—in the same way they’d comment about someone not being able to touch type. They don’t outwardly criticize, but it’s transparent it’s not something they expect from an experienced developer.
So, as a result, it gets me thinking a lot about why I’m using a light theme despite it being a bit looked down upon. It’s very possible this is a common motivation for speaking out about the light/dark science!
One more argument to move to miniflux.
Paid hosting sign-ups are closed there, which is too bad; I don’t really want to self-host this since it only works with Postgres, and I have no other reason to set up/run/maintain Postgres.
That does tend to be annoying, but my experience with maintaining postgres for my email setup so far has been to 1) set it up, 2) configure backups &c and 3) forget about it. So if you’re willing to put in the up-front work it’s probably pretty painless.
Postgres has a pretty easy-to-use docker container, which would probably be adequate for this use.
Except when trying to migrate to another major postgres release, which isn’t all that easy with those containers.
In that case and for this use I’d probably just do a dump and restore. There’s no downtime limitation, so that seems like it should be adequate.
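For a single-user instance like that, the dump-and-restore dance is short. A sketch (container names, volume names, and version numbers here are all made up):

```shell
# Dump every database from the old container (hypothetical name "pg13").
docker exec pg13 pg_dumpall -U postgres > backup.sql

# Start a fresh container on the new major version with its own volume.
docker run -d --name pg16 -e POSTGRES_PASSWORD=secret \
  -v pg16data:/var/lib/postgresql/data postgres:16

# Restore the dump into the new server, then retire the old container.
docker exec -i pg16 psql -U postgres < backup.sql
```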
I’d never heard of this, but I just installed it, and it’s cool: light on resources, a Debian package repo, clear installation instructions, and it feels faster than ttrss. I’ve been using and advocating ttrss for years, but this might be my switching moment. The content parsing is better and it doesn’t require an app on mobile. With ttrss I cannot subscribe to a feed in the iOS app…
Thank you!
The only thing that keeps me from ditching ttrss is that miniflux seems to have no ability to adjust the sorting. I prefer to have the newest articles on top on the first page.
Under Settings there seems to be a sort option. I also want newest on top: https://i.postimg.cc/bN85Jvwn/Selectie-0988.png
Nice thanks!
I have written a few microservices in both Go and Rust (including translating some from Go to Rust). There are benefits, particularly over Python (my previous go-to language), but you must recognize that you’re trading a decent amount of maturity for a number of rough edges. In the cases where I am using Rust, that’s fine – but I am also consciously using it for small, self-contained components until the web and database stories “firm up” some. If you’re curious, the frameworks I use are tokio and warp. warp is a higher-level abstraction on top of hyper. For database work I have had some success with sqlx.
Do you know either language?
Use the one that you know.
If you love working with Python, then I’d suggest Go. Rust would likely be too much cognitive overhead for little benefit.
Interesting – I would have the exact opposite reaction. Go will make you think about raw pointers and that kind of thing, whereas Rust you can write at a high level, like Python.
Totally agree: use the one you know, unless you have a desire to learn a new one.
If pointers are too much cognitive load then Rust’s lifetimes and ownership juggling is going to be way worse. I’d say that the comparison is more that Python and Go are not particularly functional languages, while Rust obviously is (and that’s the appeal of it to people who like functional languages).
If Rust is faster for a given use case that’s a more like-for-like basis for comparison, but then you might want to use Fortran to go even faster depending on the use case. ;-)
Admittedly I’ve invested time in becoming comfortable with Rust, but I actually concur – after gaining some familiarity, Rust feels much higher level than Go.
Rust can definitely operate at a higher level than Go, but it also has a lot more cognitive overhead.
Pointers in Go are pretty trivial to work with, and I say this coming to Go from Ruby. Basically your only concern with pointers is that they can be nil; otherwise you barely need to differentiate between pointers and structs if you’re writing idiomatically (that is a gross oversimplification and there are performance implications at times, but it’s extremely unlike C/C++ in this respect).
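To illustrate both halves of that (types and values are hypothetical): field access works identically through a pointer, and the nil case is the one thing you guard explicitly.

```go
package main

import "fmt"

// Listing is a made-up domain type.
type Listing struct {
	Unit string
	Rent int
}

// Discounted reads a field through a pointer. Go auto-dereferences
// field access, so l.Rent works the same as it would on a value;
// the one real hazard is a nil pointer, guarded explicitly here.
func Discounted(l *Listing) int {
	if l == nil {
		return 0 // nil is the case you actually have to think about
	}
	return l.Rent - 100
}

func main() {
	l := &Listing{Unit: "3B", Rent: 1400}
	fmt.Println(Discounted(l))   // 1300
	fmt.Println(Discounted(nil)) // 0
}
```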