1. 9

This is something I wrote the other day. It requires nothing other than POSIX sh, [, printf, dd and stty. Implementing this was rather interesting as POSIX shell is very limiting (this is what makes it fun).

I’ve also written a file manager in bash which you may have heard of. https://github.com/dylanaraps/fff

I hope you find this interesting/enjoy using it. Happy to answer any questions too. :)

1.

Interesting use of dd for input. It took me longer than I like to admit to figure out how/where it actually reads in the files, TIL:

for name [ in word … ] term do list done, where term is either a newline or a ;. For each word in the specified word list, the parameter name is set to the word and list is executed. If in is not used to specify a word list, the positional parameters (“$1”, “$2”, etc.) are used instead.

And:

set (…) [±option] [±name] [--] [arg …] The set command can be used to set (-) or clear (+) shell options, set the positional parameters, or set an array parameter. (…) Remaining arguments, if any, are positional parameters and are assigned, in order, to the positional parameters (i.e., 1, 2, etc.).

set -- *

arguments are set up so that they contain the results of glob expansion (no hidden files handling..) - then explicitly passed on: https://github.com/dylanaraps/shfm/blob/master/shfm#L267
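A minimal sketch of the idiom; the throwaway directory is only there to make the glob results predictable:

```shell
# Hypothetical sandbox standing in for the file manager's cwd.
dir=$(mktemp -d)
touch "$dir/a.txt" "$dir/b.txt"
cd "$dir" || exit 1

set -- *                # positional parameters now hold the glob results
echo "entries: $#"      # the closest thing to an array length

for entry do            # 'for x do' iterates over "$@" implicitly
    printf '%s\n' "$entry"
done
```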

redraw “$@” to be iterated over by: https://github.com/dylanaraps/shfm/blob/master/shfm#L147

for file do (…)

https://linux.die.net/man/1/ksh

1.

Yeah! Interesting use of dd for input.

This is one of the only(?) portable methods of reading input a single byte (char?) at a time. Bash’s read supports single-character input, which is really nice.

arguments are set up so that they contain the results of glob expansion (no hidden files handling..) then explicitly passed on

This is the closest thing to an array one has access to in POSIX shell (short of creating a string and using some delimiter). You are also limited to one “list” at a time! Scoping is also weird, as functions cannot modify the parent’s “list”. This is why input handling and anything needing to modify the list is scoped “globally” (in main()).

for file do (…)

This is really fun syntax. Looks especially cool when used with a case statement.

for file do
    case $file in
        name1) : do thing ;;
        name2) : do other thing ;;
    esac
done
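The scoping quirk (a function cannot modify its caller's positional parameters) is easy to demonstrate; the function name here is made up:

```shell
# 'set --' inside a function only replaces that function's own
# positional parameters; the caller's list is restored afterwards.
mutate() {
    set -- changed
    echo "inside: $1"
}

set -- original
mutate                 # prints "inside: changed"
echo "outside: $1"     # still prints "outside: original"
```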


Functions in POSIX shell can also be defined in the following ways:

func() echo hi
func() (echo hi) # code runs in subshell, variables all locally scoped
func() if ...
func() for ...
func() while ...
...
etc


This does /not/ work in bash funny enough (bash rejects the simple-command form func() echo hi, though it does accept compound-command bodies).
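A minimal sketch of the loop-body form, using a hypothetical greet function:

```shell
# The function body is a bare for-loop -- no { } needed.
greet() for name do
    printf 'hi %s\n' "$name"
done

greet world lobsters   # prints "hi world" then "hi lobsters"
```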

1.

This is the closest thing to an array one has access to in POSIX shell (short of creating a string and using some delimiter).

There’s always text “files” via fifos or pipes (or temp files) - which may or may not be better than (potentially very large) argument arrays (eg: navigating a maildir in this fashion).

Eg:

cd_list=$(mktemp)
for f in *
do
    echo "${f}" >> "${cd_list}"
done
redraw  # pass in $cd_list or use as global


But then it’d make more sense to use find, zero-terminated names (the above doesn’t handle filenames with newlines) - and at that point you can pipe through sort and do all kinds of things.
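A sketch of that direction; it assumes find -mindepth/-maxdepth/-print0, sort -z and xargs -0, which are common GNU/BSD extensions rather than strict POSIX:

```shell
# NUL-terminated names survive any byte a filename can contain,
# including newlines.
dir=$(mktemp -d)
touch "$dir/banana" "$dir/apple"

cd_list=$(mktemp)
find "$dir" -mindepth 1 -maxdepth 1 -print0 | sort -z > "$cd_list"

# Fan the stored list back out, one name at a time.
xargs -0 -n1 basename < "$cd_list"   # prints "apple" then "banana"

rm -rf "$dir" "$cd_list"
```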

I lean heavily towards using streams of text for shell scripts - because using text “files” as the “Turing tape” is generally preferable to the rather anemic data types shell provides.

But at any rate - now I know another shell idiom I’ll strive to avoid using ;)

Ed: oh, BTW - please don’t look too closely at the code snippet - IFS can play havoc with that loop, say with filenames containing tabs or spaces… The point was more about storing the list than generating it…

2.

I’m curious: the readme (and this comment) say it depends on [, but the readme also says implementation uses case everywhere to avoid a dependency on [

1.

Nice catch. I was avoiding [ at the beginning. Later I had to make use of it (to see if entry is a directory or not, etc). Will fix the README, thanks.

1. 12

I recommend Emacs instead: it is a full-blown text-oriented operating system built atop Lisp, with a ton of useful packages building on decades of work.

It even comes with vim compatibility, if you want it!

1.

“it is a full-blown text-oriented operating system built atop [x]” actually sounds like a scary antifeature?

1.

Yeah, it could as well have been. By a miracle stroke of luck, it actually works out really well for Emacs. Think less JavaScript, more JVM.

1.

I believe that’s a reference to the joke

Emacs is a great operating system missing only a decent text editor

2.

I’ve been using emacs (spacemacs) for a few months, and there are some things I like about it. However the “vim mode” is clearly not designed by vim users. By default it jumps all over your file highlighting things as you search! I had to write nontrivial lisp to fix that, there is no config option.

I also still haven’t found a way to make my tab key behave sensibly. I figured out how to neuter the awful indenting in prog mode, and set tabstop+shiftwidth and not insert spaces when I push tab… but every new mode I use may require redoing all this work. More than once I’ve opened a new type of file, made a trivial edit, sent it off only to find emacs had inserted whitespace errors facepalm

1.

I think before slamming Evil mode you should try it on its own, without all of the other Spacemacs stuff. Spacemacs notoriously adds a lot of things that they (and, I guess, many of their users) think are nice, but it’s far from plain Evil mode. Plain Evil mode is indeed designed by Vim users, and any incompatibility with Vim is a bug, not a feature.

1.

I will try it, but from my reading of the code, both of the problems I list are not spacemacs extras.

1.

Then I urge you to file a bug report. The Evil maintainers are serious about compatibility.

1.

I filed one of them https://github.com/emacs-evil/evil/issues/1304 and got the help needed for a workaround.

Everywhere I look for “how to make tab key behave” is just people asking why one would want that, but I guess since it’s a regression from vim I could try to file it for evil…

1. 2

Maybe purely functional languages don’t exist.

The reason why Haskell is pure, even though you can still do things with it, is well-documented. However you are very right that if thinking about that doesn’t help you, then don’t do it! Mental models cannot be prescribed.

1. 3

While most people complain about C for “unsafety” and annoying manual memory stuff, this right here is why you should pick another language. If you want the compiler’s help just pick a compiler that knows how to help.

1. 5

I would encourage you not to think of a ‘career’ as something you ‘grow’ but a job as something you have. If the question is “how do I get my job to pay me more?” the answer is highly dependent on where you work, and for some the only way to make more is to find a new job.

You’re right that life isn’t like school with a defined end goal: we work so that we may eat, and we hope to enjoy the job somewhat because most of us spend more than half our time at it.

As far as “personal growth” you say you want to write code that isn’t C#? What prevents you? Does your job exhaust you so much that there is no space for coding at home? If so, or if you want to code in not-C# all day, then a new job might be in order.

there, a ramble right back at’cha ;)

1. 2

I tried Ubuntu Touch when I got my PinePhone, but the strict AppArmor, read-only root file system, no Xorg, etc. was just too much. If I wanted the Android experience I know where to find it.

Very happy now with sxmo

1. 12

Built in OpenPGP!

How did it take this long?

But still, gratitude.

1. 6

It took this long because you were not submitting the required patches :-P

1. 4

“Only YOU can prevent broken software.”

1. -3

the biggest thing since bitcoin

Meaning, “fails at its primary (only) purpose and only useful for running Ponzi schemes, while accelerating climate change?”

1. 4

You might want to actually give it a shot. Take it from someone who hates these headlines too.

1. 0

I’ve read some (what I think will be) more nuanced posts on GPT-3. I guess it’s interesting, but I’m not really invested enough to have formed an opinion on this one (yet).

1. 8

1. 11

2. 3

How does Bitcoin fail at being a peer-to-peer electronic cash system?

1. 1

It’s too slow to replace cash. People do the bulk of the transactions off-chain, which kinda defeats the purpose.

1. 3

Your main criticism is “it’s too slow”? All digital money is slow. It only looks fast because banks take on the risk of digital money transfers and give you the benefit of the doubt. For “digital cash”, I’d say 10 minutes is pretty good.

1. 4

banks take on the risk of digital money transfers and give you the benefit of the doubt

That’s kind of a killer feature, though.

1. 3

If you desperately need that kind of thing, yes. Bitcoin provides benefits traditional money and banking doesn’t, hence its existence. There is nothing preventing banking solutions on top of Bitcoin.

1. 2

The primary benefits of Bitcoin are lack of regulation and high volatility due to same, and a secondary benefit of being distributed with no bias towards societal economic utility for the people getting lucky while mining.

2. 1

You just got done comparing it to cash, not debit or credit card transactions. Cash is instantaneous. Credit cards have fraud detection, which Bitcoin lacks.

1. 2

How is cash instantaneous across the ocean?

1. 1

I’m not sure why I need to say this but transporting money is not the same as exchanging it

2. 1

Most bank transfers take 2 days anyway

1. 19

Syntax coloring isn’t useless, it is childish, like training wheels or school paste. It is great for a while, and then you might grow up.

This seems… Overly dismissive? I’ve seen a few programmers talk about working without syntax highlighting, but I know for me, it’s a nice aid for navigating code. It gives my eyes and brain more structural hooks to hang things on, and makes skimming a lot easier, even when it’s all in gray-scale. (Skimming is one of the main ways I read code, looking for the relevant part of a file or function). I will admit that I’ve not been programming as long as Crockford, but I’ve also not been developing for an insignificant amount of time either.

Plus, pervasive Navigate To Definition for the languages I use, or Vim’s # and * commands help me track down what context a variable comes from pretty easily in most cases.

1. 5

Yeah, that part irked me as well.

I also see his point: common uses of syntax highlighting tend to go overboard. Each piece of syntax we style differently attracts attention. Thus, in a language that leans on parentheses, rainbow parentheses are remarkably sane. In another language, coloring every set of parentheses isn’t as useful.

I just checked CLion and found a few things that I appreciate that use syntax highlighting:

• strings as a different color, so I can tell when I miss one
• macros as a different color
• unused vars grayed out
• keywords as a different color

Not all of them are must-haves, but it’s a much shorter list than a lot of the popular syntax coloring schemes, which seem to revel in using a different color for each type of syntax they can find. Less really is more.

1. 2

Yes, there definitely is a balance in how many different things are syntax highlighted, and one can definitely go overboard. (Right now, I’m used to having types highlighted differently than variables, but it’s not a must).

2. 5

The main idea is not bad, but I agree it’s a weird and unnecessary flex. He sounds extremely out-of-touch.

1. 7

I thought for years that I needed syntax highlighting, until I gave going without a serious shot. I would never go back to angry fruit salad after a couple years working with it off. Colour can have some uses but syntax highlighting defaults are not the way for me.

I think a lot of people are where I was: convinced they need the highlighting because it is what they have known.

1. 6

I can definitely imagine how you can do without, or even thrive in the absence of, syntax highlighting, and I should experiment with that some time. I’m calling him out of touch for insinuating that something that’s the default for probably 99.99% of programmers is a conscious decision favored by the immature code-illiterate newbs. Gratuitous, because one can simply remove those statements to no detriment to the point.

1. 4

I think the best approach is to come up with your own theme. For example, I’ve been using my own theme (the screenshots are a bit dated) for the last decade or so. Over the years I’ve gradually reduced the amount of colour, leaving colour for elements that I want to stand out. I did try a few greyscale/low-colour themes, but I found it made my eyes more tired than usual.

1. 2

+1 for making your own theme.

It’s been a lot of work and even though I think mine is pretty rudimentary based on some others I’ve seen, it’s been really nice.

mavi

2. 4

It’s something Russ Cox(?) has mentioned too. Maybe it comes from programming before it was widely available. Personally I don’t think it helps you navigate code, because syntax highlighting doesn’t particularly highlight the logic or program flow, it instead helps one navigate text, to form a mental model of where one might find code. But it’s nice to have, at least to stop the screen blurring into a mess if you stare too long, at most to identify where things are happening at a glance as you said.

1. 1

I suppose I don’t see the difference between navigating code, and navigating text? I know that code isn’t just text, but text (or some representation), is the representation of code one usually interacts with. Or rather, the distinction seems small enough as not to matter? In order to navigate code, you must first be able to navigate the text it is comprised of, no?

1. 1

My beef is that it generally highlights the obvious (keywords) and not so much the less obvious (confusingly similar symbol names, brace matching, assignment over equality, etc.). I would at minimum want comments and strings highlighted though.

1. 1

I very much want an emacs color theme that does this, but haven’t managed to find one.

2. 3

The language being used here is certainly not how I would phrase it; it’s almost, ehm, childish :-) But ignoring the choice of language, I do think he has a decent point overall. The quote continues with “I no longer need help in separating operators from numbers. But assistance in finding the functions and their contexts and influences is valuable”

In other words: I don’t think Crockford is against this kind of usage of syntax highlighting, just “trivial” syntax highlighting which doesn’t give the advantages you describe (and even proposes a novel way to do highlighting to get similar advantages).

This matches my experience; some syntax highlighting I see wants to colour every little thing, which in my opinion is much less useful than colouring useful “anchor points” such as comment blocks, control statements (if, for, return, etc.), and function/class definitions. I also find it slightly useful to highlight strings as I often use that when scanning code as well.

Highlighting stuff beyond that like operators, numbers, function names, function calls, highlighting function calls different from method calls, and whatnot: I don’t really see how that’s helpful, and find it even detracts from the ability to scan. It’s some matter of personal taste of course, but syntax highlighting is UX design and like all UX design there’s good designs and bad ones. An interesting and perhaps more striking example of this is this page on colour design in flight control systems.

1. 1

How about syntax coloring the natural languages? So we give nouns, verbs, adjectives, and etc different colors. All the punctuation marks also get their distinctive color. I would call that childish.

Context coloring would be useful for natural languages. So we can immediately see which sentences expand the previous statements, and which sentences start a new argument. But has anybody ever done this? Some people do like to make a whole sentence bold. I guess that’s similar to the concept.

Both arguments seem to be stronger for natural languages than code. What about formal logic? Lambda calculus? Brainfuck?

1. 11

From the “truly modern” headline and the “not trying to be” start I thought this was going to be some bold new attempt, or something novel.

1. 5

Ah, “truly modern”, the “all natural” of technology.

If only this distro was both truly modern and ergonomic it would really have some legs.

1. 4

What would have satisfied you? I don’t think “another distro” is a bad thing

1. 5

Oh, I didn’t mean to imply that it’s bad… just… boring compared to the headline and tone of the article which seem to think the project is some kind of revolution.

You may doubt if Serpent OS will see the light of the day and if it would be able to keep all the promises it made.

etc

1. 2

I’ve been forced to use Synology stuff over the years, and it’s just a giant pain. Yes, there are community packages for lots of things, but most of them are poorly integrated and often the thing you actually want is still missing anyway. How they managed to ship so many units that have ssh server and rsync but not scp/sftp server for example is just beyond me.

1. 4

the Snap Store, the app store for Linux

oof

1. 1

Trying with WKD (https://keyoxide.org/singpolyma@singpolyma.net) I see:

TypeError: keyData.publicKey.users[i].userId is null

1. 2

Thanks for the report! I have just pushed a fix, it seems to be working for me now.

1. 1

Thanks. Looks pretty good! The “encrypt message” and “verify message” links have the last dot replaced with an underscore in them, which then causes the page that loads up when clicking them to not work right.

The fingerprint is also a link to the key on the default keyserver. This is totally fine, but maybe a link to the WKD would be better for a WKD profile?

1. 2

Fixed WKD links on WKD profiles.

1. 1

Thanks!

Ok, one more thing I noticed because of the hoop-jumping for the keyserver: the profile page happily shows revoked UIDs. For example, my gmail address is revoked in my key but is shown on the WKD profile page.

1. 1

That’s bad! Added to top of todo, thanks for letting me know!

2. 1

The dot/underscore thing is strange, I’ll look into it right away.

WKD links for a WKD profile make a whole lot of sense :) will be fixed today.

2. 1

When using the profile view, keys are only fetched from https://keys.openpgp.org . This is the default keyserver everywhere on the website, but you always get a form with the possibility to overrule which keyserver is used. Since this profile page doesn’t have this possibility, it just uses https://keys.openpgp.org .

1. 2

I’ve seen this before. Using https://dump.sequoia-pgp.org , I can see there are no identities inside the key. The keyserver does this to new keys when the uploader hasn’t confirmed the upload yet by clicking a link sent by email. Could this be happening?

1. 1

Oh, weird. I’d never seen this keyserver before, so didn’t know they wanted me to go in and provide “consent” to distribute my public information that they presumably got from another keyserver already distributing it :P

I’m jumping through this hoop now, so hopefully that fixes it.

2. 1

Indeed it is! Really strange, looking into this today

1. 12

I message my friends with Signal or Keybase

I mean, sure IM has its place but you can’t equate it with email. They exist mutually exclusive of each other. You recommend it as an “alternative”, however—what guarantee do you have that these services will exist in the future—say the next 10 years, or 5 even? I know that email will. It’s not centralized, and most definitely not run by a VC funded org.

1. 1

You could use standardized IM, but even so I agree that at least the apps that exist today cannot replace the email use case.

1. 2

what is this standardized IM that you speak of? sorry, but i haven’t come across any such beast.

1. 1

There are a few. My favourite is https://tools.ietf.org/html/rfc6121 but another obvious one would be https://tools.ietf.org/html/rfc3428

1. 23

It boggles my mind that there are more and more websites that just contain text and images, but are completely broken, blank or even outright block you if you disable JavaScript. There can be great value in interactive demos and things like MathJax, but there is no excuse to ever use JavaScript for buttons, menus, text and images which should be done in HTML/CSS as mentioned in the blog post. Additionally, the website should degrade gracefully if JavaScript is missing, e.g. interactive examples revert to images or stop rendering, but the text and images remain in place.

I wonder how we can combat this “JavaScript for everything” trend. Maybe there should be a website that names and shames offending frameworks and websites (like https://plaintextoffenders.com/ but for bloat), but by now there would probably be more websites that belong on this list than websites that don’t. The web has basically become unbrowsable without JavaScript. Google CAPTCHAs make things even worse. Frankly, I doubt that the situation is even salvageable at this point.

I feel like we’re witnessing the Adobe Flash story all over again, but this time with HTML5/JS/Browser bloat and with the blessing of the major players like Apple. It’ll be interesting to see how the web evolves in the coming decades.

1. 5

Rendering math on the server/static site build host with KaTeX is much easier than one might have thought: https://soap.coffee/~lthms/cleopatra/soupault.html#org97bbcd3

Of course this won’t work for interactive demos, but most pages aren’t interactive demos.

1. 9

If I am making a website, there is virtually no incentive to care about people not allowing javascript.

The fact is the web runs on javascript. The extra effort does not really give any tangible benefits.

1. 21

You just proved my point. That is precisely the mechanism by which bloat finds its way into every crevice of software. It’s all about incentives, and the incentives are often stacked against the user’s best interest, particularly if minorities are affected. It is easier to write popular software than it is to write good software.

1. 7

Every advance in computers and UI has been called bloat at one time or another.

The fact of the matter is that web browsers “ship” with javascript enabled. A very small minority actually disable it. It is not worth the effort in time or expense to cater to a group that disables stuff and expects everything to still work.

Am I using a framework?

Most of the time, yes I am. To deliver what I need to deliver it is the most economical method.

The only thing I am willing to spend extra time on is reasonable accommodation for disabilities. But most of the solutions for web accessibility (like screenreaders) have javascript enabled anyhow.

You might get some of what you want with server side rendering.

Good software is software that serves the end user’s needs. If there is interactivity, such as an app, obviously it is going to have javascript. Most things I tend to make these days are web apps. So no, Good Software doesn’t always require javascript.

1. 10

I actually block javascript to help me filter bad sites. If you are writing a blog and I land there, and it doesn’t work with noscript on, I will check what domains are being blocked. If it is just the one I am accessing I will temp unblock and read on. If it is more than a couple of domains, or if any of them are unclear as to why they need to be loaded, you just lost a reader. It is not about privacy so much as keeping things neat and tidy and simple.

People like me are probably a small enough subset that you don’t need our business.

1. 4

Ah, the No-Script Index!

How many times does one have to click “Set all this page to temporarily trusted” to get a working website? (i.e. you get the content you came for)

Anything above zero, but definitely everything above one is too much.

1. 3

The absolute worst offender is microsoft. Not only is their average No-Script index around 3, but you also get multiple cross site scripting attack warnings. Additionally when it fails to load a site because of js not working it quite often redirects you to another page, so set temp trusted doesn’t even catch the one that caused the failure. Often you have to disable no-script altogether before you can log in and then once you are logged in you can re-enable it and set the domains to trusted for next time.

That is about 3% of my total rant about why microsoft websites are the worst. I cbf typing up the rest.

2. 3

i do this too, and i have no regrets, only gratitude. i’ve saved myself countless hours once i realized js-only correlates heavily with low quality content.

i’ve also stopped using medium, twitter, instagram, reddit. youtube and gmaps, i still allow for now. facebook has spectacular accessibility, ages ahead of others, and i still use it, after years away.

1. 1

My guess is that a lot of people who use JS for everything, especially their personal blogs and other static projects, are either lazy or very new to web development and programming in general. You can expect such people to be less willing or less able to put the effort into making worthwhile content.

1. 2

that’s exactly how i think it work, and why i’m happy to skip the content on js-only sites.

3. 6

The only thing I am willing to spend extra time on is reasonable accommodation for disabilities.

Why do you care more about disabled people than the privacy conscious? What makes you willing to spend time for accommodations for one group, but not the other? What if privacy consciousness were a mental health issue, would you spend time on accommodations then?

1. 12

Being blind is not a choice: disabling JavaScript is. And using JavaScript doesn’t mean it’s not privacy-friendly.

1. 4

It might be a “choice” if your ability to have a normal life, avoid prison, or not be executed depends on less surveillance. Increasingly, that choice is made for them if they want to use any digital device. It also stands out in many places to not use a digital device.

1. 2

This bears no relation at all to anything that’s being discussed here. This moving of goalposts from “a bit of unnecessary JavaScript on websites” to “you will be executed by a dictatorship” is just weird.

1. 4

You framed privacy as an optional choice people might not need as compared to the need for eyesight. I’d say people need sight more than privacy in most situations. It’s more critical. However, for many people, privacy is also a need that supports them having a normal, comfortable life by avoiding others causing them harm. The harm ranges from social ostracism upon learning specific facts about them to government action against them.

So, I countered that privacy doesn’t seem like a meaningless choice for those people any more than wanting to see does. It is a necessity for their life not being miserable. In rarer cases, it’s necessary for them even be alive. Defaulting on privacy as a baseline increases the number of people that live with less suffering.

1. 2

You framed privacy as an optional choice

No, I didn’t. Not even close. Not even remotely close. I just said “using JavaScript doesn’t mean it’s not privacy-friendly”. I don’t know what kind of assumptions you’re making here, but they’re just plain wrong.

1. 3

You also said:

“Being blind is not a choice: disabling JavaScript is.”

My impression was that you thought disabling Javascript was a meaningless choice vs accessibility instead of another type of necessity for many folks. I apologize if I misunderstood what you meant by that statement.

My replies don’t apply to you then: just any other readers that believed no JS was a personal preference instead of a necessity for a lot of people.

2. 3

The question isn’t about whether it’s privacy-friendly, though. The question is about whether you can guarantee friendliness when visiting any arbitrary site.

If JS is enabled then you can’t. Even most sites with no intention of harming users are equipped to do exactly that.

1. -1

disabling js on a slow device is not a choice, but required for functioning. you are basically saying fuck you to all the disadvantaged.

and all because you are being lazy.

1. 4

When you can get a quad-core Raspberry Pi for $30 and similar hardware in a $50 phone, I really doubt that there are devices that can’t run most JS sites and that someone who has a device at all can’t afford.

What devices do you see people using which can’t run JS?

The bigger question in terms of people being disadvantaged is network speed, where some sites downloading 1MB of scripts makes them inaccessible - but that’s an entirely separate discussion.

1. 1

how is that a separate discussion? it’s just one more scenario when js reduces accessibility.

as for devices, try any device over 5 years old.

2. 2

I literally have the cheapest phone you can buy in Indonesia (~€60) and I have the almost-cheapest laptop you can buy in Indonesia (~€250). So yeah, I’d say I’m “disadvantaged”.

Turns out, many JavaScript sites work just fine. Yeah, Slack and Twitter don’t always – I don’t know how they even manage to give their inputs such input latency – but Lobsters works just fine (which uses JavaScript), my site works just fine as well (which uses JavaScript), and my product works great on low-end devices (which requires JavaScript), etc. etc. etc.

You know I actually tried very hard to make my product work 100% without JavaScript? It was a horrible experience for both JS and non-JS users and a lot more code. Guess I’m just too lazy to make it work correctly 🤷‍♂️

So yeah, please, knock it with this attitude. This isn’t bloody Reddit.

1. 6

“I literally have the cheapest phone you can buy in Indonesia (~€60) and I have the almost-cheapest laptop you can buy in Indonesia (~€250). So yeah, I’d say I’m “disadvantaged”. Turns out, that many JavaScript sites work just fine.”

I’ve met lots of people in America who live dollar to dollar having to keep slow devices for a long time until better hand-me-downs show up on Craigslist or just clearance sales. Many folks in the poor or lower classes do have capable devices because they would rather spend money on that than other things. Don’t let anyone fool you that being poor always equals bad devices.

That said, the ones taking care of their families, doing proper budgeting, not having a car for finding deals, living in rural areas, etc often get stuck with bad devices and/or connections. I don’t have survey data on how many are in the U.S.. I know poor and rural populations are huge, though. It makes sense that some people push for a baseline that includes them when the non-inclusive alternative isn’t actually even necessary in many cases. When it is, there were lighter alternatives not used because of apathy. I’ve rarely seen situations where what they couldn’t easily use was actually necessary.

The real argument behind most of the sites is that they didn’t care. The ones that didn’t know often also didn’t care because they didn’t pay enough attention to people, esp low-income, to find out. If they say that, the conversations get more productive because we start out with their actual position. Then, strategies can be formed to address the issue in an environment where most suppliers don’t care. Much like we had to do in a lot of other sectors and situations where suppliers didn’t care about human cost of their actions. We got a lot of progress by starting with the truth. The web has many, dark truths to expose and address.

1. 3

thank you for writing this out. the cheapest new phone in indonesia is probably much faster than your typical “obamaphone” or 3-year-old average device.

1. 1

The Obama phones are actually Android devices that also have pre-installed government malware that can’t be removed. They have Chrome and run JS fine.

1. 2

They have Chrome, and they run JS very slowly.

1. 1

Are you going to cite any devices here? Which JS do they run slowly?

My guess is that the issue is with specific documents. I’d think that the fact that JS is so often used in ways that don’t perform well is a much larger issue than this one. Sites using JS in ways that are slow is a completely different debate to be had, in my opinion. Although giving someone a version of the page without JS seems like a solution, it ignores the entire concept of progressive web apps and the history of the web that got us to them.

E.g., would you prefer the 2008 style of having a separate m.somesite.com that works without JS but tends to be made for small devices, which tends to let corporations be okay with removing necessary functionality to simplify the “mobile experience”? Generally, that’s how we got that solution.

The fact that even JS-enabled documents like https://m.uber.com allow you to view a JS map and get a car to come pick you up with reasonable performance on even the cheapest burner phones shows just how much bad programming plays into your opinion here instead of simply whether or not JS is the problem itself.

It’s also worth noting that I am strongly interested in people doing less JS and the web being JS-less, but this isn’t the hill to die on in that battle if you ask me. Not only are you going to generally find people that aren’t sympathetic to disadvantaged people (because most programmers tend to not give any fucks unfortunately) but also because the devices that run JS are generally not going to be slow enough that decent JS isn’t going to run. If we introduce some new standard that replaces HTML, it’ll likely still be read by browsers that still support HTML / JS - which means the issue still remains because people aren’t going to prioritize a separate markup for their entire site depending on devices which is the exact reason that most companies stopped doing m.example.com. The exception to this rule seems to be bank & travel companies in my experience.

1. 2

Here is an example device I test with regularly:

This iPad is less than 10 years old, and still works well on most sites with JS disabled. With JS enabled, even many text-based sites slow it down to the point of being unresponsive.

This version of iOS and Safari are gracious enough to include a JavaScript on/off toggle under Advanced, but no fine-grained control. This means that every time I want to toggle JS, I have to exit Safari, open Settings, scroll down to Safari, scroll down to Advanced, toggle JS, and then return to Safari.

Or are you going to tell me that my device is too old to visit your website? I’ll be on my way, then.

1. 2

It’s also worth noting that I am strongly interested in people doing less JS and the web being JS-less, but this isn’t the hill to die on in that battle if you ask me. Not only are you going to generally find people that aren’t sympathetic to disadvantaged people (because most programmers tend to not give any fucks unfortunately)

I think this is changing for the better: slowly, but faster more recently.

but also because the devices that run JS are generally not going to be slow enough that decent JS isn’t going to run. If we introduce some new standard that replaces HTML, it’ll likely still be read by browsers that still support HTML / JS - which means the issue still remains because people aren’t going to prioritize a separate markup for their entire site depending on devices which is the exact reason that most companies stopped doing m.example.com.

I think with some feature checking and progressive enhancement, you can do a lot. For example, my demo offers basic forum functionality in Mosaic, Netscape, Opera 3.x, IE 3.x, and modern browsers with and without JS. If you have JS, you get some extra features like client-side encryption and voting buttons which update in-place instead of loading a new page.

I think it’s totally doable, with a little bit of effort, to live up to the original dream of HTML which works in any browser.

The exception to this rule seems to be bank & travel companies in my experience.

2. 3

Aside from devices without a real browser, JavaScript should run fine on any device people are going to get in 2020 - even through hand-me-downs.

1. 3

I’m going to try to replace my grandmother’s laptop soon. I’ve verified it runs unbearably slowly in general, but especially on the JS-heavy sites she uses. It’s a Toshiba Satellite with a Sempron SI-42, 2GB of RAM, and Windows 7. She got it from a friend as a gift, presumably replacing her previous setup. Eventually, despite many reinstalls to clear malware, the web sites she uses were unbearably slow.

“When you can get a quad core raspberry pi for $30 and similar hardware in a $50 phone,”

She won’t use a geeky setup. She has a usable Android phone. She leaves it in her office, occasionally checking the messages. In her case, she wants a nice-looking laptop she can set on her antique-looking desk. Big on appearances.

An inexpensive, decent-looking Windows laptop seems like the best idea if I can’t get her on a Chromebook or something. I’ll probably scour eBay eventually like I did for my current one ($240 ThinkPad T420). If that’s $240, there’s gotta be some deals out there in the sub-Core i7 range. :)

1. 3

Sure, but just to clarify - we are talking about people who may need to save money to get the $30 for something like a Raspberry Pi. Not someone who can drop $240 on a new laptop.

1. 3

Oh yeah. I was just giving you the device example you asked for. She’s in the category of people who would need to save money: she’s on Social Security. These people still usually won’t go with a geeky rig even if their finances justify it. Psychology in action.

I do actually have a Pi 3 I could try to give her. I’d have to get her some kind of nice monitor, keyboard, and mouse for it. I’m predicting, esp. with the monitor, the sum of the components might cost the same as or more than a refurbished laptop for web browsing. I mentioned my refurbished Core i7 for $240 on eBay as an example that might imply lower-end laptops with good performance might be much cheaper. I’ll find out soon.

2. 1

But what about a device people got in 2015 or 2010? Or, dare I say, older devices, which still work fine, and may be kept around for any number of reasons like nostalgia or sentimental attachment? Sure, you can tell all these people to also stuff it, but don’t pretend they don’t exist.

2. 12

Why do you care more about disabled people than the privacy conscious?

Oh, that one is easy: it’s the law. Being paranoid isn’t a protected class; it might be a mental health issue, but my website has nothing to do with its treatment. For regular privacy, you have other extensions and cookie management you can do.

3. 3

You have some good points. One thing I didn’t see addressed is the number of people on dial-up, DSL, satellite, cheap mobile, or other bad connections. The HTML/CSS-type web pages usually load really fast on them. The JavaScript-type sites often don’t. They can act pretty broken, too. Here are some examples someone posted to HN showing the impact of JavaScript loads.

“If there is interactivity, such as an app, obviously it is going to have javascript.”

I’ll add that this isn’t obvious. One of the old models was the client sending something, server-side processing, and the server returning modified HTML. With HTML/CSS and a fast language on the server, the loop can happen so fast that the user can barely perceive a difference vs a slow, bloated JS setup. It would also work for the vast majority of websites I use and see.
The JS becomes necessary as the UI complexity, interactivity (esp. latency requirements), and/or local computations increase past a certain point. Google Maps is an obvious example.

1. 3

It is interesting to see people still using dialup.

Professionally, I use TypeScript and Angular. The bundle sizes on that are rather insane without much code. Probably unusable on dialup. However, for my personal sites I am interested in looking at things like Svelte mixed with dynamic loading. It might help to mitigate some of the issues that Angular itself has.

But fundamentally, it is certainly hard to serve clients when you have apps like you mention - Google Maps. Perhaps a compromise is to try to be as thrifty as can be justified by the effort, load most of the stuff up front, cache it as much as possible, and use smaller API requests so most of the usage of the app stays within fast local interaction.

1. 2

<rant> Google Maps used to have an accessibility mode which was just static pages with arrow buttons – the way most sites like MapQuest worked 15 years ago. I can only guess why they took it away, but now you just get a rather snarky message. Not only that, but to add insult to injury, the message is cached, and doesn’t go away even when you reload with JS enabled again. Only when you Shift+reload do you get the actual maps page.

This kind of experience is what no-JS browsers have to put up with every fucking day, and it’s rather frustrating and demoralizing. Not only am I blocked from accessing the service, but I’m told that my way of accessing it is itself invalid. Sometimes I’m redirected to rather condescending “community” sites that tell me step by step how to re-enable JavaScript in my browser, which by some random, unfortunate circumstance beyond my control must have become disabled.
All I want to say to those web devs at times like that is: Go fuck yourself, you are all lazy fucking hacks, and you should be ashamed that you participated in allowing, through action or inaction, this kind of half-baked tripe to see the light of day. My way of accessing the Web is just as valid as someone’s with JS enabled, and if you disagree, then I’m going to do everything in my power to never visit your shoddy establishment again. </rant>

Edit: I just want to clarify that this rant was precipitated by other discussions I’ve been involved in, my overall Web experience, and finally, the parent comment’s mention of Google Maps. This is not aimed specifically at you, @zzing.

2. 9

It shouldn’t be extra effort, is the point. If you’re just writing some paragraphs of text, or maybe a contact form, or some page navigation, etc., you should just create those directly instead of going through all the extra effort of reinventing your own broken versions.

1. -2

Often the stuff I am making has a lot more than that. I use front-end web frameworks to help with it. Very few websites today have just text or a basic form.

1. 10

Ok, well, that wasn’t at all clear, since you were replying to this:

It boggles my mind that there are more and more websites that just contain text and images, but are completely broken, blank or even outright block you if you disable JavaScript.

Many websites I see fit this description. They’re not apps, they don’t have any “behaviour” (at least none that a user can notice), but they still have so much JS that it takes over 256MB of RAM to load them up, and with JS turned off they show a blank white page. That’s the topic of this thread, at least by the OP.

1. 0

Very few websites today have just text or a basic form.

Uhh… Personal websites? Blogs? Many of the users here on Lobsters maintain sites like these. No need to state falsehoods to try and prove your point; there are plenty of better arguments you could be making.
As an aside, have you seen Sourcehut? That’s an entire freakin’ suite of web apps which don’t just function without JavaScript but work beautifully. Hell, Lobsters almost makes it into this category as well.

2. 1

Some types of buttons, menus, text and images aren’t implemented in plain HTML. These kinds should still be built in JS. For instance, 3-state buttons. There are CSS hacks to make a button appear 3-state, but no way to define behavior for them without JS. People can hack together radio inputs to look like a single multi-state button, but that’s a wild hack that most developers aren’t going to want to tackle.

1. 1

I’m trying to learn more about accessibility, and recently came across a Twitter thread with this to say: “Until the platform improves, you need JS to properly implement keyboard navigation”, with a couple of video examples.

1. 2

I think that people who want keyboard navigation will use a browser that supports it out of the box; they won’t rely on each site to implement it.

1. 2

The world needs more browsers like Qutebrowser.

1. 9

Great news! I am eager to try this!

Turn on -XLinearTypes, and the first thing you will notice, probably, is that the error messages are typically unhelpful: you will get typing errors saying that you promised to use a variable linearly, but didn’t. How hasn’t it been used linearly? Well, it’s for you to puzzle out. And while what went wrong is sometimes egregiously obvious, it can often be tricky to figure the mistake out.

So, basically, GHC just got its own “Syntax error” à la OCaml… just a bit more specialized :p.

1. 11

Maybe it’s just me, but to me OCaml’s errors are terse and unhelpful, and GHC’s errors are… verbose and unhelpful. ;) There are interesting papers that show working ways to improve both, but I wonder why none of those improvements are in the mainline compilers.

1. 2

Good error reporting is easiest if it’s built into the compiler front end from the start.
If a new algorithm comes along to improve the error information, it’s almost never going to be a simple job to drop it into an existing compiler. You need type information and parse information from code that’s potentially incorrect in both spaces, so any error algorithm usually has to be tightly integrated into both parts of the compiler front end. That tight integration usually means that improving compiler errors is a significant amount of work.

1. 3

It varies. What puzzles me is that a lot of the time, ready-to-use, mergeable patches take much longer to merge than they should. Like this talk: https://ocaml.org/meetings/ocaml/2014/chargueraud-slides.pdf

1. 1

Do you also have a link to a patch for the improved error messages? A lot of work has been going on to move OCaml to a new parser and improve error messages. Even though there is a lot still to be done, the latest releases have started improving a lot. Maybe we can still extract some useful bits from that effort and try again.

1. 2

Turns out it was even made into a pull request that isn’t merged yet: https://github.com/ocaml/ocaml/pull/102

1. 1

Thanks. It is quite an informative PR, actually: it explains why the change is not there yet, and one can infer why it is easier to add informative messages in new languages and compilers, but quite hard to retrofit them into seasoned ones.

2. 7

Would you be kind enough to give me an ELI5 about what linear types are and what you can do with them?

1. 29

In logic, normal implication like “A implies B” means whenever you have A, you can derive B. You have tautologies like “A implies (A and A)”, meaning you can always infinitely duplicate your premises. Linear implication is a different notion, where deriving B from A “consumes” A. So “A linearly implies B” is a rule that exchanges A for B. It’s not a tautology that “A linearly implies (A and A)”. The classic example is that you can’t say “$3 implies a cup of coffee”, but “$3 linearly implies a cup of coffee” makes sense.
So it’s a logical form that reasons about resources that can be consumed and exchanged.

Same in functional programming. A linear function from type A to type B is one that consumes an A value and produces a B value. If you use it once with an A value then you can’t use it again with the same A value.

This is nice for some performance guarantees, but also for encoding safety properties like “a stream can only be read once” etc.
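In GHC’s implementation, this is written with a multiplicity annotation on the function arrow. A minimal sketch, assuming a GHC with the experimental -XLinearTypes extension enabled (the `%1 ->` arrow means “uses its argument exactly once”; earlier drafts of the proposal wrote it `#->`):

```haskell
{-# LANGUAGE LinearTypes #-}

-- Accepted: the pair is taken apart and each component used once.
swap :: (a, b) %1 -> (b, a)
swap (x, y) = (y, x)

-- Rejected by the typechecker: x would be used twice.
-- dup :: a %1 -> (a, a)
-- dup x = (x, x)

-- Also rejected: x would never be consumed.
-- discard :: a %1 -> ()
-- discard _ = ()
```

The commented-out definitions are exactly the kind of program that produces the “you promised to use a variable linearly, but didn’t” errors discussed above.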

1. 6

1. 5

It can be used to model protocols with type signatures. The following is in theory what you should be able to do.

data ConsoleInput
  = Input String ConsoleOutput
  | ExitInput

data ConsoleOutput
  = PrintLines ([String] ⊸ ConsoleInput)
  & PrintLastLines ([String] ⊸ ())

greet :: ConsoleOutput ⊸ ()
greet console
  = let PrintLines f = console
    in step2 (f ["name?"])

step2 :: ConsoleInput ⊸ ()
step2 ExitInput = ()
step2 (Input input console)
  = let PrintLastLines f = console
    in f ["hello " ++ input]


If you combine it with continuation passing style, you get classical linear logic and it’s a bit more convenient to use.

If you model user interfaces with types, they should be quite useful.

I’m also examining and studying them: http://boxbase.org/entries/2020/jun/15/linear-continuations/

1. 1

Wikipedia gives a reasonable overview. The closest analogy would be something like move semantics – for example, ownership in Rust can be considered a manifestation of linear types.

1. 6

Rust ownership is affine types, not linear types. They are similar but differ in the details. A shitty way of understanding it: affine types mimic ref counting and prevent you from having a ref count < 0. Linear types are more a way of acting like RAII, in that you might create a resource but just “know” that someone later on in the chain does the cleanup.

Which I’m sure sounds similar, but affine types allow for things like resource leaks, while linear types should guarantee overall behavior to prevent them.

This all assumes my understanding and explanation is apt. I’m avoiding a ton of math and i’m sure the shitty analogy doesn’t hold up but behaviorally this is how I have it in my brain.
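A rough way to see that difference in code, as a hypothetical sketch (`File` and `close` are made-up names standing in for some resource API; `%1 ->` is GHC’s linear arrow):

```haskell
{-# LANGUAGE LinearTypes #-}

-- A stand-in resource type; imagine close is the only way to
-- dispose of it. (Both are hypothetical, for illustration only.)
data File

close :: File %1 -> ()
close f = consumed f
  where consumed :: File %1 -> ()
        consumed = undefined  -- stands in for a primitive destructor

-- Linear: exactly once. The typechecker refuses programs that
-- forget the cleanup step.
useFile :: File %1 -> ()
useFile f = close f        -- OK: f is consumed exactly once

-- leak :: File %1 -> ()
-- leak f = ()             -- rejected: f is never consumed

-- Affine (roughly Rust's model): at most once. Dropping early is
-- allowed, so "leak" would typecheck; Rust instead runs Drop to
-- perform cleanup when a value goes out of scope.
```

So under linear typing the leak is a compile-time error, while under affine typing it is legal and cleanup has to happen by some other mechanism.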

1. 2

Linearity Design Space: https://i.imgur.com/s0Mxhcr.png

1. 2

I’m personally of the stance that the 2020 linear GHC stuff is more “<= 1 usage”, and kinda misses out on a lot of really fun expressivity that can fall out of making full classical linear logic first class. But that’s a long discussion in its own right, and I’ve yet to make the time to figure out the right educational exposition on that front.

1. 1

It definitely seems more limited in scope/ambition compared to the ongoing effort for dependent types, for better or worse. Can’t say I know much about what first-class linear logic would look like, but perhaps now there will be more discussion about such things.

1. 2

The really amazing thing about full linear logic is that it’s really sorta a rich way to just do mathematical modelling where everything has a really nice duality. The whole thing about linearity isn’t the crown jewel (though it’s wonderfully useful for many applications); it’s that you get a fully symmetric bag of dualities for every type / thing you can model.

The paper that really made it click for me was Mike Shulman’s “Linear Logic for Constructive Mathematics” paper. It’s just a fun, meaty read even at a conceptual level. There’s a lot of other work by him and other folks that, taken together, points to it being a nice setting for formal modelling and perhaps foundations of category-theory-style tools too!

2. 1

Not sure I can agree that uniqueness types are the same as linear types. Care to explain? They’re similar, sure, but not the same thing, and your… screenshot of a PowerPoint? isn’t very illustrative of whatever point you’re trying to make here.

And from my experience with Idris, I’m not sure I’d call what Rust has Uniqueness types.

1. 1

They are different rows in the matrix because they are different, of course.

It’s from this presentation about progress on linear GHC a little over a year ago: https://lobste.rs/s/lc20e3/linear_types_are_merged_ghc#c_2xp2dx (skip to 56:00).

What is meant by Uniqueness types here is “i can guarantee that this function gets the unique ptr to a piece of memory” https://i.imgur.com/oJpN4eN.png

2. 2

Am I the only one thinking this is not how you ship language features?

If the compiler can’t even report errors correctly, the feature shouldn’t ship.

1. 15

If the compiler can’t even report errors correctly, the feature shouldn’t ship.

It’s more that this is an opt-in feature with crappy error reporting for now, using language design features not present in most programming languages. It’s going to have rough edges. If we required everything to be perfect, we’d never have anything improved. Linear types like this also might not have a great way to demonstrate errors, or the domain is new, so why not ship the feature for use and figure out what kind of error reporting you want based on feedback.

1. 13

Many people do not realize that haskell is a research language and GHC is one of the main compilers for it. This is an experimental feature in a research language. If it works out well, then it will be standardized.

2. 5

Other people have sort-of said it, but not clearly enough I think. This is not a language feature being added. It is a feature-flagged experimental feature of a particular compiler. Most such compiler extensions never make it into real Haskell, and the ones that do take years after they are added to a compiler to make it to a language spec.

1. 4

For all practical purposes, isn’t “real Haskell” defined by what GHC implements these days?

1. 2

Yes, all the other implementations are dead. They still work, but they won’t run most modern Haskell code, which usually uses a bunch of GHC extensions.

1. 1

You might say “isn’t it not popular to write standards-compliant Haskell these days?” and you’d be right. Of course it’s often trendy to write nonstandard C (using, say, GNU extensions) or nonstandard HTML/JavaScript. However, ignoring the standard being trendy doesn’t mean the standard doesn’t exist, or even that it isn’t useful. I always make sure my Haskell is Haskell2010, and I try to avoid dependencies that use egregious extensions.

2. 2

Honestly curious: are there any other Haskell compilers out there? Are they used in production?

Also, what is a definition of a true Haskell? I always thought it’s what’s in GHC.

1. 5

There’s a Haskell which runs on the JVM - Frege. But it makes no attempt to be compatible with the version of Haskell that GHC implements, for good reasons. Hugs is a Haskell interpreter (very out of date now, but still works fine for learning about Haskell). There are a bunch of other Haskell compilers, mostly research works that are now no longer in development - jhc, nhc98, etc.

But GHC is the dominant Haskell compiler by far. I don’t think there are any others in active development, apart from Frege, which isn’t interested in being compatible with GHC.

1. 2

There are other compilers and interpreters. None of them is anywhere near as popular as GHC, and usually when one does something interesting GHC consumes the interesting parts.

The whole reason language extensions are called “extensions” and require a magic pragma to turn on is that they are not features of the core language (Haskell) but experimental features of the compiler in question.
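Concretely, opting in looks like this at the top of a source file (the same extensions can also be passed as -X flags on the command line); a small sketch:

```haskell
{-# LANGUAGE OverloadedStrings #-}  -- opt-in, per file
{-# LANGUAGE LinearTypes #-}        -- experimental features work the same way
module Demo where
```

Without the pragma (or the corresponding -X flag), code using the extension simply fails to compile, which is what keeps experimental features like LinearTypes out of the way of everyone who hasn’t asked for them.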

2. 1

In short, GHC Haskell is a language designed by survival-of-the-fittest.

1. -2

If you haven’t noticed, the language spec is dead.

2. 3

Overly terse error messages are bad, but they are better than wrong error messages. Some things are much harder to give helpful error messages for than others.

I wish people spent more time improving error reporting, at least in cases where the way to do it is well understood. There is no reason for, say, TOML or JSON parsers to just say “Syntax error”. But YAML parsers are pretty much doomed to give unhelpful errors, just because the language syntax is ambiguous by design.

And then some errors are only helpful because we already know what they mean. Consider a simple example:

Prelude> 42 + "hello world"

<interactive>:1:1: error:
• No instance for (Num [Char]) arising from a use of ‘+’
• In the expression: 42 + "hello world"
In an equation for ‘it’: it = 42 + "hello world"


How helpful is it to a person not yet familiar with type classes? Well, it just isn’t. It’s not helping the reader to learn anything about type classes either.
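For what it’s worth, the error is decodable once you know the rule: (+) forces both operands to share one type, the string literal pins that type to [Char], and the numeric literal then needs a Num [Char] instance that doesn’t exist. A sketch of what a beginner probably meant instead:

```haskell
-- (+) :: Num a => a -> a -> a unifies both arguments at one type a.
-- "hello world" :: [Char] fixes a = [Char], so the literal 42 would
-- need fromInteger at [Char] -- hence "No instance for (Num [Char])".

-- What was probably intended: build a string, not a sum.
greeting :: String
greeting = show (42 :: Int) ++ " hello world"  -- "42 hello world"
```

None of that reasoning is visible in the message itself, which is the point being made here.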

1. 1

I’ve seen some good suggestions on r/haskell for improving the wording of these errors.

2. 2

The error they’re talking about is a kind of type error they’ve not worked with. It’s produced if you forget to construct or use a structure. I’m guessing it’s technically “proper”, but the produced error message may be difficult to interpret.

They’ve ensured it’s a feature you can entirely ignore if you want to. Not everybody’s convinced they need this.

I otherwise dunno what they’re doing and I’m scratching my head at the message. Something like “Oh cool you’ve done this… … … So where are the types?”

1. 2

So you never got a C++ template error in the good olden days? Seriously though, it just got merged. It’s not released or “shipped” in any means.

1. 0

So you never got a C++ template error in the good olden days?

No, because I looked at the language, figured out that the people involved completely lost their fucking mind, and moved on.

Seriously though, it just got merged. It’s not released or “shipped” in any means.

They took 4 years to arrive at the current state, which I’ll approximate at roughly 10% done (impl unfinished, spec has unresolved questions, documentation doesn’t really seem to exist, IDE support not even on the radar).

So if you assume that there will be a Haskell version in the next 36 years, then this thing is going to end up in some Haskell release sooner or later.

1. 2

So if you assume that there will be a Haskell version in the next 36 years, then this thing is going to end up in some Haskell release sooner or later.

Could you elaborate on this? If practical users of linear types will only use them if they have good error messages, and early testers want to work out the kinks now, what’s wrong with having a half-baked linear types feature with no error messages permanently enshrined in GHC 8.12?

1. 1

Is there a comparison somewhere of Oil vs Rc?

1. 2

Well, rc is not POSIX- or bash-compatible, and Oil is, so that’s a huge difference. If there’s something in particular you’re doing with rc that you’d like to do with Oil, I’m interested.

1. 1

As I understand it, Oil as a program implements POSIX and bash languages, but also implements a new language that is meant to be better (which is basically the point of the project).

Rc is currently the main thing I see pointed to if you want to write shell scripts but have the language be more sane. So I guess I’m wondering about comparison of Rc the language to the “new” Oil language.

1. 4

Linux distros have no concept of sandboxing, or any meaningful application security model. Any app running under Xorg can see the contents of any other app running under Xorg.

In practice, I find this isn’t really that much of a problem. How many people have fallen victim to a malicious app snooping on their screen or clipboard? Part of the reason for this is that 1) most Linux users tend to be “power users”, and more or less know what they’re doing, and 2) they tend to install software from package repositories instead of random adverts, Google results, or whatnot.

Not that sandboxing applications is a bad idea as such, but for user-facing software (i.e. stuff that isn’t a daemon) the UX problems are real and non-trivial to solve. The classic example of this is the “screenshot problem” under Wayland, which seems to be partly solved, but with a rather complex mechanism which doesn’t work everywhere.

1. 2

I think Android has actually shown us that sandboxing applications is a bad idea, at least if taken too far. Android’s over-zealous sandboxing is one of the main reasons people have to resort to exploiting security holes, gaining root on their own device, just so Syncthing can back up a folder from the SD card. And similar terribleness. The fact that every app by default has a tiny folder it puts its data in, and nothing else can see that folder, is just a nightmare. This is one of the reasons I have avoided Android like the plague, and why people are flocking to things like the PinePhone.

1. 5

Is it a good idea? I think the question is misplaced. It’s a preference. This is like asking “Is the colour red a good idea?”

I love dark view. I use it everywhere. Does that mean you have to? No. Do I think I’m improving my life because of it? Yes, but only because it makes me happy.

1. 5

You are right here: it is a preference.

I prefer light over dark, but at night, before going to sleep, dark mode is easier on the eyes.