The author hit the nail on the head in his own piece. Most of these job offers are not for webSITE building, they’re for webAPP building, and as a consequence the framework they’re written in matters when hiring new people.
Has anyone tried to define the difference between the two? It always seems like there’s an implicit assumption that it’s easy and obvious to tell what defines a site vs. app.
For me the differentiating factor is the degree of interactivity. A web site is largely concerned with displaying static information, and potentially sending some data back to a server with standard HTML forms. A web app, on the other hand, often has a high degree of interactivity, possibly communicates with a REST API, database, or other collaborating service, and works more like a traditional application that just so happens to run inside a web browser.
This definition is a little circular, though.
I know I’m exaggerating a little :) But I think it’s important to keep implementation out of the definition if these terms are to be useful in this discussion. Because then what do I call applications that are built with technologies like Hotwire? They aspire to handle fairly complex interactivity with very minimal JavaScript.
Every bit of that gorgeous Apple packaging is paid for by toxifying our own habitat, air, and water, by slave labor, and by plant and animal habitat loss, and that’s just the manufacturing. Then add to that transportation to the assembly factory. And then it goes in the landfill.
I enjoyed opening up my iPhones and MacBooks back in the day as much as anyone, but once I realized the above, I stopped perceiving them as beautiful. They just look ugly and horrifying.
I’m not sure that cardboard packaging which is increasingly being made from recycled materials is quite the blight on the environment you’re painting it as here.
That “cardboard” packaging has a lot of plastic in it, and if you add in the glue, ink, coating, plastic insert, and transportation of all these ingredients to the box-making factory, it adds up to quite a footprint.
Incomplete list of things that are not strings:
- Password
This is the least obvious one to me, and I notice it’s the only one for which you didn’t give examples of typed representations. Do you know of any?
I don’t quite agree they’re not strings. They are strings at least from user perspective. However, they would benefit from a type that isn’t a generic string:
In Haskell:
newtype Password = Password String
in other words, it’s simply a different type with an identical representation, String.
Why does that matter? In my opinion, you should treat passwords as mostly opaque identifiers. One possible design thought experiment is “Should Password support length operations?”
Both answers feel reasonable; they’re just slightly different styles. There are other possible paths here too, like “No, Password should only support entropy evaluations”. But in any case, we can discuss how String and Password differ.
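To make the thought experiment concrete, here’s a minimal Python analogue of that Haskell newtype. The Password name comes from the thread; the narrow interface (length only for policy checks, value hidden from repr) is just one illustrative design choice, not the only reasonable one:

```python
# A password wrapper with the same representation as str but a
# deliberately narrow interface: no concatenation, no slicing, and the
# raw value is kept out of repr() so it doesn't leak into logs.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Password:
    _value: str = field(repr=False)

    def length(self) -> int:
        # One possible answer to "should Password support length
        # operations?": yes, but only for policy checks like this one.
        return len(self._value)

    def meets_policy(self, minimum: int = 8) -> bool:
        return self.length() >= minimum

p = Password("hunter2")
print(p)                  # Password() -- the raw value isn't shown
print(p.meets_policy())   # False (only 7 characters)
```

The point mirrors the Haskell version: the representation is identical to String, but the type system stops you from accidentally treating a password as ordinary text.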
Yeah, I was slightly confused by this one too. My best guess is that passwords are subject to restrictions (length, requiring non-alphanumeric characters, etc.) that a plain string isn’t.
Passwords cannot be safely compared for equality using string functions; you can run into timing attacks if you do.
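A quick sketch of that point (the plaintext comparison is purely illustrative; real systems compare password hashes, not raw passwords). Python’s `hmac.compare_digest` takes time independent of where the first mismatch occurs, while `==` can bail out at the first differing character:

```python
# == short-circuits at the first mismatching character, so an attacker
# who can measure response times learns how long a correct prefix is.
# hmac.compare_digest examines every byte regardless of mismatches.
import hmac

stored = "correct horse battery staple"

def check_insecure(guess: str) -> bool:
    return guess == stored  # comparison time leaks the matching prefix

def check_constant_time(guess: str) -> bool:
    return hmac.compare_digest(guess.encode(), stored.encode())

print(check_constant_time("correct horse battery staple"))  # True
print(check_constant_time("hunter2"))                       # False
```

This is one concrete way a Password type can differ from String: its equality operation can be constant-time by construction.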
Yeah, the way he quickly brushes over support for things like games and video editing applications seemed weird to me. While the situation has improved slowly over the past decade or two, the lack of access to industry-standard video and audio editing software seems like a compelling argument for why Linux on the desktop is infeasible for so many people. Likewise, the set of games available on Linux is a strict subset of those on Windows, and said games typically run better and are better supported on Windows. If either of these categories of applications is important to you, Linux is a poor choice.
There’s a lot of discussion of games being the factor for desktop Linux, but I don’t see it; at least not as anything more than a value-add. You can live without games, but you can’t live without the tools for work, whatever it might be. (Programmers like us have a luckier break here.) I think a lot of that discussion is because of how big the overlap is between sites like Reddit and people who eat, live, and sleep PC gaming.
You can live without games, but you can’t live without the tools for work, whatever it might be.
The home desktop computer is a dying breed. Its use case is slowly being usurped by laptops (which are still mostly desktops), tablets, and phones. However, one use case which is not going away is gaming. Desktop computers offer one of the best gaming experiences out there. Many users stick to Windows primarily because their favorite game only runs there.
lack of access to industry standard video and audio editing software seems like a compelling argument for why Linux on the desktop is infeasible for so many people
Do many people use this kind of software? I would imagine it’s fairly specialized?
(Lack of) games is probably more important, especially because there’s a social component to that as well: if your friends are playing a game then you’d like to join in on that specific game. When the whole corona thing started, some of my friends were playing Soldat, but I couldn’t get it running on my Linux machine, so I missed out 🙁 (Wine errored out; it was open sourced yesterday though, with Linux support, so I need to look again).
I have helped around a dozen or so ‘regular people’ over the years, who did not want to pay the Apple tax and whose Windows laptops had become totally unstable, move over to Linux desktops. This is the list of apps they care about:
One time I had to help someone get some weird Java app for their schoolwork installed. Most people are content consumers, not creators.
The games situation is pretty good, in no small way thanks to Valve. Nearly half of my large Steam library is Linux-native, and most of the remaining games work with Proton, without hassle.
However, the situation with video and audio is more of a joke. All the open video editors are terrible. At least we can play videos now; I remember the pre-mplayer era, and now we have mpv, which is fantastic.
As for audio, laypeople suffer from PulseAudio, which is still irritatingly bad. The production side of audio is much better thanks to JACK, but only if running under linux-rt; when I boot mainline for whatever reason, JACK gets xruns after a while, even with a 20ms buffer.
All the open video editors are terrible
It depends somewhat what you want to do; OBS studio is pretty nice for its use case, but I wouldn’t want to produce a film.
As for audio
The lack of reliable timeslices is pretty terrible on mainline Linux. It doesn’t affect me often, as I have 24 cores, but if I’m running something intensive I’ll sometimes get audio skipping in 2020 (which literally never happened back in 2005 on Windows).
if I’m running something intensive I’ll sometimes get audio skipping in 2020
The Linux kernel likes to tread into long, dark, narrow corridors, not yielding the CPU to SCHED_FIFO/RR tasks until much later than the time they become runnable.
I did boot into mainline recently and saw some of the usual pathological behaviour. Then I ran cyclictest -S -p99 and spotted a 10000µs peak within seconds. Appalling.
(which literally never happened back in 2005 on windows).
Or 1985 on AmigaOS.
The Linux kernel likes to tread into long, dark, narrow corridors, not yielding the CPU to SCHED_FIFO/RR tasks until much later than the time they become runnable.
Are there open-source kernels that don’t do this and support a variety of mainstream hardware? Genuinely curious.
Most RTOSs do try hard to handle this reasonably.
If lives depend on it, seL4 is the only protected-mode kernel (to my knowledge) with formal proofs of response time (WCET) and correctness.
But if your use case is audio, you’ll probably be fine by simply booting into linux-rt (Linux with the realtime patchset) and generally avoiding pathologically bad (pulseaudio) software in your audio chain, using straight alsa or a pro-audio capable audio server (jackd, hopefully also pipewire in the future).
You should also ensure the relevant software does not complain about permissions preventing execution as SCHED_FIFO or SCHED_RR. In my opinion, such software should outright refuse to run in this situation (except perhaps when forced to SCHED_OTHER with a parameter) rather than run in a degraded manner, but that is a separate issue, another of the many in the ecosystem.
I’m more impressed by the 2005 result because it had a task scheduler. Of course the Amiga didn’t leave jobs paused - it didn’t pause them!
What do you specifically mean by task scheduler?
AmigaOS’s “kernel” (exec.library) provides preemptive multitasking with priorities.
This is what a task looks like: http://amigadev.elowar.com/read/ADCD_2.1/Libraries_Manual_guide/node02BB.html
And this for reference on the bitmap flags: http://amigadev.elowar.com/read/ADCD_2.1/Includes_and_Autodocs_2._guide/node008E.html
Essentially the run/ready/wait we’re used to.
Where the Amiga is almost cheating is in using a 68k CPU. It has very fast interrupt response, further helped by being vectored. x86 is a sloth at this.
Wait, really? That’s earlier than I’d realized (the computer I was using in 1990 could hibernate a process, but switching back took a 2-3 second wait, and IIRC the process had to ask to be switched away from).
Yes, AmigaOS had preemptive multitasking with priorities from day one.
Furthermore, it also had a message-passing system called ports, which was used for IPC, including a good part of user tasks talking to the OS. It was a multi-server OS, internally split into services running as tasks, with exec.library being the closest thing to a kernel we had, as it handled task switching, IPC, memory management, and early hardware initialization.
AmigaOS supports shared libraries, and the OS itself looks like a bunch of shared libraries to application programmers.
Your Amiga curiosity can further be satisfied by the documents available at: http://amigadev.elowar.com/
Early AmigaOS (1985-1990) is <V37. Functions that claim to be V37+ are from the more modern AmigaOS days (AmigaOS 2.0+, 1991+), so you can easily see what was made available when.
The famous “Every OS Sucks” song special-cases AmigaOS as the exception… for good reason. Overall, I still consider its design ahead of UNIX in many ways. If you ever see me rant about UNIX-like systems, now you know where I am coming from.
I’m going to give a null option here. There just isn’t anything native that’s both good and portable.
I haven’t seen anything close to web dev tools for native GUI development. Xcode can inspect a GUI, but it’s a toy compared to the browser’s inspector. I can edit CSS live without reloading my UI, and control every pixel with relative ease. I can build a fancy animated web UI in less time than it takes me to get a makefile to work across systems.
You can use a Python binding with Qt. There’s PyQt5 and PySide2 (and Qt.py, which is a wrapper around both, depending on what’s installed).
Interesting! I can’t recall seeing such apps in the wild. Do you know any popular apps that are built using that combo?
Yeah, I’ve yet to find a cross-platform GUI toolkit that makes for nice Mac apps; Cocoa is really the only option if you want to make something that feels polished and high quality. But often the apps using these cross-platform toolkits are apps that wouldn’t get ported to the Mac otherwise, so I’m willing to accept the tradeoff of a slightly clunky GUI in exchange for having access to a useful piece of software.
Re the first point: why not
text-run = span text-run | span
span = strong | em | strong-em | normal-text
strong = "**" normal-chars "**"
em = "*" normal-chars "*"
strong-em = "***" normal-chars "***"
normal-text = [a-zA-Z0-9 ]
normal-chars = normal-text normal-chars | ""
I was only talking about the first point, which is about the ambiguity between double em and strong.
Oh, I see what you’re saying now. Mea culpa. I think there’s still some ambiguity though with how **** should be interpreted, as normal-chars can be empty.
Hmm, I see. It could be either (**)(**) or (****). However, there is no empty bold or italic block, so we could also require at least a single char here, and make the empty string a part of the text-run.
text-run = span text-run | ""
normal-chars = normal-text normal-chars | normal-text
I suddenly get why language designers love fuzzers so much.
(Edit: because finding ambiguous cases here is hard on my brain)
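To take some of that load off the brain, a brute-force enumerator can count the parses directly. Here’s a quick Python sketch of the grammar as originally posted (with the empty normal-chars production), confirming that **** really does admit two parses:

```python
# Enumerate all parses of a string under the posted grammar:
#   text-run     = span text-run | span
#   span         = strong | em | strong-em | normal-text
#   strong       = "**" normal-chars "**"    (em and strong-em similar)
#   normal-chars = normal-text normal-chars | ""
import re

NORMAL = re.compile(r"[a-zA-Z0-9 ]")

def normal_chars(s):
    yield "", s  # the empty production
    if s and NORMAL.match(s[0]):
        for chars, rest in normal_chars(s[1:]):
            yield s[0] + chars, rest

def span(s):
    # All alternatives are tried, so the order here doesn't hide parses.
    for marker, label in (("***", "strong-em"), ("**", "strong"), ("*", "em")):
        if s.startswith(marker):
            for chars, rest in normal_chars(s[len(marker):]):
                if rest.startswith(marker):
                    yield (label, chars), rest[len(marker):]
    if s and NORMAL.match(s[0]):
        yield ("normal-text", s[0]), s[1:]

def text_run(s):
    for sp, rest in span(s):
        if rest == "":
            yield [sp]
        for tail in text_run(rest):
            yield [sp] + tail

parses = list(text_run("****"))
print(parses)  # [[('strong', '')], [('em', ''), ('em', '')]]
```

Two parse trees for one input: empty-strong, or em followed by em, which is exactly the (****) vs (**)(**) ambiguity discussed above.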
And what is the proper way to match **this **text**? I entered it into Babelmark, and it seems like there are substantial differences even between the most well-known Markdown parsers.
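The disagreement Babelmark surfaces is, at its core, a greedy-vs-lazy matching choice. A rough Python illustration (regexes stand in for real Markdown parsers here; actual implementations use proper parsers with their own delimiter-matching rules):

```python
# Two naive "strong emphasis" matchers applied to the ambiguous input.
# The greedy matcher pairs the first ** with the last **; the lazy one
# pairs it with the nearest closer. Real parsers make different choices.
import re

s = "**this **text**"

greedy = re.findall(r"\*\*(.+)\*\*", s)
lazy = re.findall(r"\*\*(.+?)\*\*", s)

print(greedy)  # ['this **text']
print(lazy)    # ['this ']
```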
Here’s an edge case that breaks this grammar:
**Bold and *Italic***
which renders as Bold and Italic using Lobsters’ Markdown renderer
You are right, this does break my posted grammar. I think I can fix it with a leveled grammar though – i.e. strong2 only allows normal text or a single level of em, etc. Will post an update if I find it.
Lots of good things were originally unintended or semi-intended results of technical limitations. The /usr split is still a good idea today even if those technical limitations no longer exist. It’s not a matter of people not understanding history, or of people not realising the origins of things, but that things outgrow their history.
Rob’s email is, in my opinion, quite condescending. Everyone else is just ignorantly cargo-culting their filesystem hierarchy. Or perhaps not? Perhaps people kept the split because it was useful? That seems a bit more likely to me.
I’m not sure it is still useful.
In fact, some linux distributions have moved to a “unified usr/bin” structure, where /bin, /sbin, and /usr/sbin are all simply symlinks (for compatibility) to /usr/bin. Background on the archlinux change.
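To make the unified layout concrete, here’s a small sketch that recreates it in a scratch directory (Python standard library only; the paths and the awk example are illustrative):

```python
# Recreate the "usrmerge" layout: bin and sbin are symlinks into
# usr/bin, so a file installed at usr/bin/awk is also reachable at
# the legacy path bin/awk without any per-file symlinks.
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "usr", "bin"))
os.symlink(os.path.join("usr", "bin"), os.path.join(root, "bin"))
os.symlink(os.path.join("usr", "bin"), os.path.join(root, "sbin"))

with open(os.path.join(root, "usr", "bin", "awk"), "w") as f:
    f.write("#!/bin/sh\n")

# Both the legacy path and the canonical path resolve to the same file.
legacy = os.path.realpath(os.path.join(root, "bin", "awk"))
canonical = os.path.realpath(os.path.join(root, "usr", "bin", "awk"))
print(legacy == canonical)  # True
```

This is the compatibility property the symlinks buy: hardcoded paths like /bin/awk and /usr/bin/awk both keep working.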
I’m not sure it is still useful.
I think there’s a meaningful distinction there, but it’s a reasonable decision to say ‘there are tradeoffs for doing this but we’re happy with them’. What I’m not happy with is the condescending ‘there was never any good reason for doing this and anyone that supports it is just a cargo culting idiot’ which is the message I felt I was getting while reading that email.
In fact, some linux distributions have moved to a “unified usr/bin” structure, where /bin, /sbin/, and /usr/sbin all are simply symlinks (for compatibility) to /usr/bin. Background on the archlinux change.
I’m not quite sure why they chose to settle on /usr/bin as the one unified location instead of /bin.
That wasn’t the argument though. There was a good reason for the split (they filled up their hard drive). But that became a non-issue as hardware quickly advanced. Unless you were privy to these details in the development history of this OS, of course you would copy this filesystem hierarchy in your unix clone. Cargo culting doesn’t make you an idiot, especially when you lack design rationale documentation and source code.
… it’s a reasonable decision to say ‘there are tradeoffs for doing this but we’re happy with them’. What I’m not happy with is the condescending ‘there was never any good reason for doing this and anyone that supports it is just a cargo culting idiot’ which is the message I felt I was getting while reading that email.
Ah. Gotcha. That seems like a much more nuanced position, and I would tend to agree with that.
I’m not quite sure why they chose to settle on /usr/bin as the one unified location instead of /bin
I’m not sure either. My guess is that since “other stuff” was sticking around in /usr, they might as well put everything in there. /usr being able to be a single distinct mount point that could ostensibly be set as read-only may have had some bearing too, but I’m not sure.
Personally, I think I would have used it as an opportunity to redo hier entirely into something that makes more sense, but I assume that would have devolved into endless bikeshedding, so maybe that is why they chose a simpler path.
My guess is since “other stuff” was sticking around in /usr, might as well put everything in there. /usr being able to be a single distinct mount point that could ostensibly be set as read-only, may have had some bearing too, but I’m not sure.
That was a point further into the discussion. I can’t find the archived devwiki entry for usrmerge, but I pulled up the important parts from Allan.
Personally, I think I would have used it as an opportunity to redo hier entirely into something that makes more sense, but I assume that would have devolved into endless bikeshedding, so maybe that is why they chose a simpler path.
Seems like we did contemplate /kernel and /linker at one point in the discussion.
What convinced me of putting all this in /usr rather than on / is that I can have a separate /usr partition that is mounted read only (unless I want to do an update). If everything from /usr gets moved to the root (a.k.a hurd style) this would require many partitions. (There is apparently also benefits in allowing /usr to be shared across multiple systems, but I do not care about such a setup and I am really not sure this would work at all with Arch.)
https://lists.archlinux.org/pipermail/arch-dev-public/2012-March/022629.html
Evidently, we also had a request to symlink /bin/awk to /usr/bin/awk for distro compatibility.
This actually will result in more cross-distro compatibility as there will not longer be differences about where files are located. To pick an example, /bin/awk will exist and /usr/bin/awk will exist, so either hardcoded path will work. Note this currently happens for our gawk package with symlinks, but only after a bug report asking for us to put both paths sat in our bug tracker for years…
https://lists.archlinux.org/pipermail/arch-dev-public/2012-March/022632.html
Sorry, I can’t tell from your post - why is it still useful today? This is a serious question, I don’t recall it ever being useful to me, and I can’t think of a reason it’d be useful.
My understanding is that on macOS, an OS upgrade can result in the contents of /bin being overwritten, while the /usr/local directory is left untouched. For that reason, the most popular package manager for macOS (Homebrew) installs packages to /usr/local.
I think there are cases where people want / and /usr split, but I don’t know why. There are probably also arguments that the initramfs/initrd is enough of a separate system/layer for unusual setups. Don’t know.
It’s nice having /usr mounted nodev, whereas I can’t have / mounted nodev for obvious reasons. However, if an OS implements its /dev via something like devfs in FreeBSD, this becomes a non-issue.
It is on FreeBSD, which is why I mentioned devfs, but idk what the situation is on Linux, Solaris, and AIX these days off the top of my head. On OpenBSD it isn’t.
I dunno, hasn’t been useful to me in the last 20 years or so. Any problem that it solves has a better solution in 2020, and probably had a better solution in 1990.
Perhaps people kept the split because it was useful? That seems a bit more likely to me.
Do you have a counter-example where the split is still useful?
The BSDs do have the related /usr/local split, which allows you to distinguish between the base system and ports/packages. That is useful, since you may want to install different versions of things included in the base system (clang and OpenSSL, for example). This is not really applicable to Linux, of course, since there is no ‘base system’ to make distinct from installed software.
I tend to reach for /opt/my-own-prefix-here (or per-package), myself, mainly to make it clear what it is and to avoid the risk of clobbering anything else in /usr/local (like if it’s a BSD). It’s also in the FHS, so pedants can’t tell you you’re doing it wrong.
It does - this is generally used for installing software outside the remit of the package manager (global npm packages, for example), and it’s designated so by the FHS which most distributions follow (as other users have noted in this thread), but it’s less prominent since most users on Linux install very little software not managed by the package manager. It’s definitely a lot more integral in BSD-land.
[…] since most users on Linux install very little software not managed by the package manager
The Linux users around me still do heaps of ./configure && make install; but I see your point when contrasted against the rise of PPAs, Docker, and nodenv/rbenv/pyenv/…
Yeah, I do tons of configure make install stuff, sometimes of things that are also in the distro - and this split of /usr/local is sometimes useful because it means if I attempt a system update my custom stuff isn’t necessarily blasted.
But the split between /bin and /usr/bin is meh.
That sounds sensible. Seems like there could be a command that tells you the difference. Then, a versioning scheme that handles the rest. For example, OpenVMS had file versioning.
Catalina runs best on Macs with hardware specifications that Apple marketing isn’t yet prepared to make the baseline for models such as the iMac.
So macOS Catalina is like Windows Vista?
And then it will be cursed to become a user-hostile ad platform that you have to pay for, until the end of time.
Neat! There are several Scheme/LISP implementations that run on iOS already and are in the App Store: Pixie Scheme, Gambit Scheme, and one other Scheme/LISP editor/evaluation environment I can’t remember the name of.
Be great to see Racket added to the list!
Also, there’s another new one, but it’s just XLisp-based, called LispCube; and that editor/REPL I was thinking of is for ClojureScript and is called Replete.
Agreed, the idea of running an application written in Racket on an iOS device is appealing, but I’d really love to see a full fledged programming environment like DrRacket ported over.
To give a bit of context as to where VS Code fits into my development workflow: I use Xcode for iOS development as using anything else feels masochistic, when I need to make quick edits to config files or scripts I use vi, and for basically everything else I use VS Code.
For my main use case of JavaScript, Ruby, and occasional Rust development, I have the following setup:
I’ve also made a few settings tweaks to hide the useless “Open Editors” sidebar section.
I’m using fish as my daily driver as well, and switched to it more than 2 years ago. I like that everything is to my liking out of the box, without having to use any plugins for advanced features. I’m using its vim mode, and use starship as my prompt.
It doesn’t support some bash syntax such as && or !!, but is otherwise very compatible. It changed some things for the better. Problems with this when copy-pasting snippets from others are rare, but I sometimes open bash just to run such a snippet.
Same. I bounced off zsh a couple of times before this, and fish itself once due to some bugs which the maintainer was quite gracious about (one of my first open source interactions, iirc!), but after a long hiatus of putting up with bash’s nonsense I tried fish again and fell in love with it for just providing the Right Stuff for my use case.
Unfortunately, $WORK has quite a lot of important dev tooling written in bash, so I’m still using it by default there. Maybe someday I’ll have it all migrated to Python.
I use fish on macOS. There are occasional headaches due to its lack of bash compatibility, but I find the ergonomics of using it to be much nicer than bash, zsh, or any other bash-compatible shell I’ve tried.
Like you, I use fish for my shell (with oh-my-fish). For short/quick scripts I generally use zsh, with a nix-shell shebang pattern like this:
#!/usr/bin/env nix-shell
#!nix-shell -i zsh
#!nix-shell -I nixpkgs="https://github.com/NixOS/nixpkgs/archive/19.09.tar.gz"
#!nix-shell -p minify
minify $1 > $2
And when I’m serious about a script, I switch to turtle.
Unfortunately a lot of these CSS-based tricks are pretty terrible in terms of accessibility. Granted, a lot of JS-heavy websites tend to be pretty poor when it comes to accessibility, but that’s largely a result of it being a low priority, rather than any technical limitations of JS-based web apps.
The headline is a bit weird. The article is about “these apps are so popular, Apple went out of their way to ensure they still worked”. It’s not clear that the apps were buggy, except in the NSBundle unload case.
To me it’s more accurate to say that Apple frameworks are buggy and older versions of those apps had to use workarounds and depend on the buggy behaviour until Apple decides to fix those bugs.
I wouldn’t be surprised if Apple knew about some of the bugs from the developers of the apps during the beta period of a new OS. It doesn’t necessarily mean that Apple actively tests all of those apps as part of its OS QA routine.
Yeah, to me the article read like “Apple considers these apps important enough that they go to extra lengths to ensure OS updates don’t break them”. The title here seems weirdly judgey and negative.
Why oh why did they have to call it “bionic”…were they trying to increase general nomenclature confusion?
All the names that are actual words are already taken by someone somewhere. Naming something these days with a word is a guaranteed collision.
Increasing general nomenclature confusion is kind of all the Ubuntu release code-names are good for, yes.
This is the first April Fool’s thing I’ve seen that’s brought a sincere smile to my face, what a fun idea.
Great article; the over-dependence on lodash in many modern JS codebases reminds me of sites back in the day which would import jQuery just so they could run code on DOMContentLoaded. However, if you really do need lodash, and there are still plenty of good reasons to use it, I’d recommend using the lodash-es package, which breaks lodash down into ES modules. This means modern JS bundlers like webpack and Rollup, which are aware of ES modules, can strip the unused parts of lodash from your production bundle.
I’m continuing to work away at that Lobsters iOS / Android app, and it’s getting to the point where I’d probably feel okay throwing it up on the App Store / Google Play as a free download for others to use. It really has a pretty limited use case though, since as far as I know Lobsters doesn’t have any way for third-party applications to authenticate users, limiting my app to being read-only.
I’ve also been learning Rust with the fantastic official docs, and I’ve started porting a half-complete Swift Game Boy emulator I worked on years ago to Rust. It’s definitely taking me a while to wrap my head around lifetimes and borrow semantics, but I’m really enjoying the experience.
Re Lobsters login, can you not simulate the regular login process with a library? And then use that session for posting?
The repository pattern he describes here sounds very similar to how Elixir’s Ecto library works, which is one of the things I’ve historically really liked about Elixir / Phoenix over Ruby / Rails. It’s great to see those ideas gaining traction in the much larger Ruby community.
Safari and Chrome are both IE, it’s just that they’re at different stages of IE’s lifespan.
Chrome is like late 90s/early 2000s IE, dominating the market and full of support for new features that other browsers didn’t carry, thus resulting in ‘Best viewed in X’ type apps.
Safari is like late stage IE, lacking support of new features (at least in certain contexts), requiring special attention from developers in order to make certain things work that were taken for granted in other browsers. In the case of IE there were better alternatives, but they were unavailable due to corporate desktop lock down policies. The audience size affected by this issue was large enough that developers had to cater to them.
In the case of Safari, better alternatives exist… provided you run a new enough version of OS X. If you run an old enough version of OS X, the only browser available is Safari (due to FF and Chrome no longer supporting that OS version). I once worked for a SaaS company whose audience was artists; the majority were Mac users, and quite a few were stuck on old Safari on Snow Leopard (this was a number of years ago). They couldn’t upgrade their OS without replacing their perfectly functioning hardware. It was certainly a pain point for us.
It depends on what you consider “better” in a browser. Safari still tends to have a lower energy footprint, which is a big deal for users on portable machines. The battery life savings you can get by switching from Chrome to Safari as a daily driver are nontrivial.
Oh, for sure! In this context I was referring to ‘better’ as having better support for standards, specifically on older versions of OS X which are stuck on older versions of Safari but still have updated Chrome or Firefox browsers available.