Threads for jmtd

  1. 4

    I was on board until I got to the JavaScript bit. It’s a shame they don’t regenerate the served-page as part of the NOTIFY step, then you wouldn’t have a JavaScript requirement to view comments.

    1. 5

      There’s an alternative approach further down that eliminates the need for JavaScript: writing the comments directly into the HTML file.

      1. 4

        This is really what I was expecting when I saw “static comments”. In theory, you could POST data to an endpoint, and code a server to write that data to HTML. But this is just an implementation detail, the fact remains that most of the content is HTML.

        It’s a shame they don’t regenerate the served-page as part of the NOTIFY step

        Without something to protect you from serving a 404 in the moment of time that the file is being rewritten, it’s definitely possible to encounter weird temporary errors by doing things this way. It happens to me a lot when I’m writing documentation and watching files in my app directory for changes…you’re pretty much guaranteed to hit an error every once in a while because it just hasn’t gotten around to generating that file yet. I think solutions like Next.js may handle this at the app server level, because they can block the request until the new file is done generating, but that of course requires a server and edges on the JAMstack side of things rather than fully static “web 1.0” :D

        1. 5

          Without something to protect you from serving a 404 in the moment of time that the file is being rewritten

          Sounds like an issue with the site-generator tool. Ideally it would build the site into a new directory and then atomically replace the old directory with the new one, instead of deleting the directory first and then rebuilding it.
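The atomic-replace idea can be sketched in Python (the paths here are invented for illustration): build into a fresh directory, then repoint a symlink with an atomic rename, so a reader always sees either the complete old tree or the complete new one, never a half-written file.

```python
import os
import tempfile

def publish(build_dir: str, live_link: str) -> None:
    """Atomically point `live_link` (a symlink the web server serves
    from) at `build_dir`. os.replace() is an atomic rename on POSIX,
    so there is no moment where the link is missing or dangling."""
    parent = os.path.dirname(os.path.abspath(live_link))
    tmp = tempfile.mktemp(dir=parent)            # staging name beside the link
    os.symlink(os.path.abspath(build_dir), tmp)  # create the new symlink
    os.replace(tmp, live_link)                   # atomic swap

# usage sketch: the generator writes into site-new/, then:
# publish("site-new", "current")
```

Serving through a symlink rather than renaming directories sidesteps the fact that `rename(2)` can’t replace a non-empty directory.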

          1. 2

            This was a documentation generator (typedoc --watch), so technically not designed for that purpose…but the fact that I was experiencing this made me realize that it’s not quite as easy as just rewriting an existing file in place. What you describe definitely sounds like it will work better, especially for most static generation use cases.

          2. 3

            In theory, you could

            so you can in practice: https://codeberg.org/mro/form2xml

          3. 2

            Yes, it’s not a fully static site with this little bit. But in this case I totally don’t mind. Derek’s site is really, really minimal, and even the use of JavaScript is minimal. So how his comments work is that they’re NOT loaded initially. Only if you scroll down to the bottom does it trigger a load - both the comments and the new comment form. It’s quite a neat, old-school use of JavaScript.

            1. 2

              Thanks. My other comment here - https://lobste.rs/s/byail8/static_html_comments#c_fongzy - explains why I did that, even though I’m generally averse to JavaScript: it stopped spam-bots.

          4. 2

            I agree. (I’m the author.) I posted this as a response to someone asking by email how I do comments.

            As I was writing it up, though, I thought, “Huh. There’s actually a much simpler way.” — the way you describe.

            But for now I just posted how I do it now, and in the future if I change it I’ll write it up again.

          1. 1

            It seems like such a good idea, but there needs to be some workflow adjustment. Installing dependencies in each copy separately is wasteful. Copying the configuration each time is slightly annoying. It’s a bit easier for Ruby, where I can point bundler at a shared space, but Node, for example, has no such configuration and an allergy to symbolic links - hope your file system can deduplicate, I guess?

            1. 2

              This very much sounds like a Node problem. I don’t do Node, but for things I do do where I might need per-project dependencies installed somewhere, I guess Python, I would use virtualenvs outside of the git repositories. (I’m not yet using multiple git worktrees with any of my Python projects.)

            1. 7

              We should distinguish between websites and web apps. Does a web app have to be under 14kb in size? I don’t think so.

              1. 0

                We should distinguish between websites and web apps.

                No such thing as “web apps” exists. There are still HTML documents; just because you rearrange them faster than the display pipeline can show them doesn’t mean they aren’t regular web pages still.

                After all, the whole movement to make the HTML document browser into some sort of application runtime is just silly, if not stupid.

                1. 9

                  There are still HTML documents; just because you rearrange them faster than the display pipeline can show them doesn’t mean they aren’t regular web pages still.

                  I don’t think anybody is arguing this. But it’s important to acknowledge that different “HTML documents” on the web serve different purposes. “Web apps” do serve a different purpose from regular HTML “documents”.

                  the whole movement to make the HTML document browser into some sort of application runtime is just silly, if not stupid

                  Well, it’s too late for that, unfortunately. The fact is that for most users of the WWW, the browser is an application runtime more so than a document browser.

                  1. 7

                    Yet it exists as a distinct term because enough people understand the phrase “web app” as a thing separate from “web site”, especially when used in context or in comparison to each other. I can use the term “web app” and expect a large majority of people to understand me. Refining it into a technically-precise definition may be more fraught, but that does not mean there is “no such thing”. It is just as useful to use as other generally-acknowledged categories with fuzzy boundaries: fruit, vehicle, city, democracy.

                    As the kids might say, “Language be like that sometimes.”

                    1. 4

                      It would seem most disagree with you. As a developer of a webapp myself, there are stark differences between that which is meant as a document and that which is meant to be worked with.

                      1. 2

                        If you think there’s no meaningful distinction between a web app and a web site, I disagree with you.

                        1. 1

                          The web browser was initially created to have links between text documents. That’s it. That proved to be not useful enough in many cases, because people want more interactivity. Hence almost every big feature in the web standards in the last 25 years has been targeted at web apps, not HTML documents.

                          Calling these apps just “HTML documents” is like saying that modern applications are just “assembly programs” because that’s all they are under the hood.

                      1. 1

                        I find it fairly concerning that there’s a general pivot from “we care about binary compatibility” until there’s a problem, and then it’s niche, oh it’s doing something it shouldn’t, it’s silly to rely on that, etc.

                        Take away that specific case, but stay in the context of gaming on Linux: how much work is needed to get the Loki games binaries (circa 2000) to run on a modern system? For Alpha Centauri it’s certainly easier and more robust to use Wine on the similar-age Windows binaries (sadly).

                        1. 2

                          In this specific case we’re talking about Epic’s Easy Anti-Cheat:

                          1. it’s closed source (for reasons, see below)
                          2. its USP is that it’s able to detect if the computer it’s running on is running “cheat” software - software that makes it easier to defeat enemies in online gameplay
                          3. to do this, it requires low-level access to every binary on the system
                          4. to defeat countermeasures, it’s proprietary closed source

                          I can’t think of anything less in line with FLOSS software. I do realise that gamers are incredibly averse to cheating and are in favor of technical countermeasures, but it’s also important to realize that anti-cheat software is a restriction in software freedom.

                          I don’t see why a closed-source proprietary spyware solution should hold back innovation in software size and performance by demanding software be compiled with an older symbol-hash scheme.

                        1. 1

                          With regard to Debian, nobody has bothered to file a bug to report the copyright problem.

                          1. 13

                            Every programmer who is not a games dev is a failed games dev. :-)

                            1. 13

                              I’m not sure that ‘failed’ is the right framing. I found game development really fun when I was a child and it was a great way of learning to program. The best way of learning to program is to write the kind of software that you care about. As a child, games counted for close to 100% of what I wanted to do with a computer, and easily 100% of what I wanted to do with a computer and couldn’t with existing off-the-shelf software, and so were the main thing to drive my interest in programming. As I grew older, I didn’t stop enjoying games, but I started doing a lot more things with computers. The set of things that I wanted to do with a computer but can’t do with existing software still has games in there somewhere, but as a tiny percentage and so I’m much happier playing off-the-shelf games and writing bespoke software in other domains.

                              1. 7

                                I have never been interested in game dev, even as a kid, despite playing tons of video games. In elementary school a bunch of my friends were tinkering around with GameMaker, but I never was interested. I toyed around with it for less than an hour before getting bored.

                                1. 3

                                  I don’t know what this means. I’m neither a games dev nor a failed games dev. So… am I not a programmer?

                                  1. 1

                                    Don’t take it personally; I think the OP just tried to make a joke based on his own experience that fell flat.

                                  2. 2

                                    That’s just nonsense.

                                  1. 2

                                    Vimwiki supports a similar, though less rich, syntax. That’s the only TODO list I’ve ever found to work for me (I only wish it worked on my phone, and no, I’m not going to try to use Vim on my phone). The part that makes it “work”, I think, is that my notes are right inline with my TODO items.

                                    1. 1

                                      Is that the task wiki add on for vimwiki or something native to it?

                                      1. 1

                                        It’s built-in. You just type `* [ ] Do some stuff` to create a TODO item, and then you can toggle it with ctrl-space.

                                        1. 1

                                          Ah, thanks! Taskwiki (if you don’t know) extends vimwiki to store the todo items in “task warrior” aka /usr/bin/task. It uses the same shorthand and I wasn’t really sure where vimwiki stopped and taskwiki started.

                                    1. 7

                                      As I recall, CGI was present very early on, definitely by 1995, and early websites definitely made use of it — obviously for form submission, but it was also sometimes used for serving pages.

                                      There were also early servers, like Netscape’s, that ran their own custom server-side app code — I don’t know for sure but I suspect they had their own C-level plugin system for running handlers in-process to avoid the high overhead of CGI.

                                      I’m still wondering why only PHP became available as an easy in-process scripting language. It’s not like you couldn’t build a similar system based on Python or Ruby or JS. Maybe it was the ubiquity of Apache, and the Apache developers not wanting to add another interpreter when “we already have PHP?”

                                      1. 14

                                        As mentioned in the article, there were other Apache modules providing similar functionality, such as mod_python. There were also CGI approaches to the same template-forward bent, such as Mason (which was Perl). If anyone was saying “why support another since we already have PHP?” it was admins on shared hosting services. Each additional module was yet another security threat vector and another customer-service training burden.

                                        1. 6

                                          I was at a talk given by Rasmus Lerdorf (creator of PHP) once, and he claimed it was because the PHP implementation was the most basic, limited version possible, and therefore it was very simple to isolate different users from each other. This made PHP very popular with cheap shared hosters. Whereas the Perl implementation was much more thorough and hooked (not sure what the correct terms are) into the whole of Apache, and therefore it needed a dedicated server. Much more expensive.

                                          1. 2

                                            Yeah. Even though mod_php is a single module loaded into a single Apache instance, it was designed with some sandboxing options like safe_mode. Or you could use PHP CGI and isolate things even better (running as the user’s UID).

                                            Other language hosting modules for Apache like mod_perl didn’t offer the same semantics. I also recall mod_perl being pretty oriented towards having access to the web server’s configuration file to set it up. People did use Perl before the rise of PHP, but most often via CGI (remember iKonboard?)

                                            1. 3

                                              mod_perl was more oriented toward exposing the apache extension API so that you could build apache modules in perl, as I remember it. It got used to write some cool web applications (slashcode springs to mind) that’d have been hard to write (at that scale) any other way at the time. But mod_php was a very different beast, just aiming to be a quick way to get PHP to work well without the overhead of CGI.

                                              I agree with the article… there’s nothing now (other than PHP, which I still use now for the kind of pages you mention, the same way I did in the early ‘00s) that’s nearly as low-friction as PHP was back then to just add a couple of dynamic elements to your static pages.

                                              1. 2

                                                Yeah, I was at a small web hosting company in the late ’90s, early 2000s, and we used PHP CGI with our shared hosting.

                                          2. 10

                                            It’s not like you couldn’t build a similar system based on Python or Ruby or JS.

                                            Not quite. The article touches on this, although not explicitly; you have to read a bit between the lines.

                                            PHP allowed you to jump easily in and out of static and dynamic context like no other alternative. It still does this better than anything else. This was in the core of the language; no need to install third-party libraries. It also included a MySQL client library in its core that worked out of the box. Essentially, it shipped with everything necessary in the typical setup. No need to fiddle with server setup.

                                            The language was also arguably more approachable for beginners than Perl, with a multitude of simple data structures easily accessible through the infamous array() constructor. It also retained familiarity for C programmers, who were a big audience back then, while Python, for example, didn’t.

                                            One thing I don’t agree with is the simplicity, nor the deployment model. It’s only simple in the context of the old shared-hosting reality. If you include setting up the server yourself, like we do nowadays, it is actually more cumbersome than a language that just lets you fire up a socket listening on port 80 and serve text responses.

                                            It’s how it was marketed and packaged that made all the difference.

                                            1. 9

                                              Yes, but it was “better” in the sense of “making it easy to do things that are ultimately a lousy idea”. It’s a bit better now, but I used it back then and I remember what it was like.

                                              Convenience feature: register_globals was on by default. No thinking about nasty arrays, your query params are just variables. Too bad it let anyone destroy the security of all but the most defensively coded apps using nothing more than the address bar.

                                              Convenience feature: MySQL client out of the box. Arguably the biggest contributor to MySQL’s success. Too bad it was a clumsy direct port of the C API that made it far easier to write insecure code than secure. A halfway decent DB layer came much, much later.

                                              Convenience feature: fopen makes URLs look just like files. Free DoS amplification!

                                              Convenience feature: “template-forward”, aka “my pages are full of business logic, my functions are full of echo, and if I move anything around all the HTML breaks”. Well, I guess you weren’t going to be doing much refactoring in the first place but now you’ve got another reason not to.

                                              The deployment story was the thing back then. The idea that you signed up with your provider, you FTP’d a couple files to the server, and… look ma, I’m on the internet! No configuration, no restarting, no addr.sin_port = htons(80). It was the “serverless” of its day.

                                              1. 21

                                                Yes, but it was “better” in the sense of “making it easy to do things that are ultimately a lousy idea”. It’s a bit better now, but I used it back then and I remember what it was like.

                                                It was better, in the sense of democratizing web development. I wouldn’t be here, a couple decades later, if not for PHP making it easy when I was starting out. The fact that we can critique what beginners produced with it, or the lack of grand unified design behind it, does not diminish that fact. PHP was the Geocities of dynamic web apps, and the fact that people now recognize how important and influential Geocities was in making “play around with building a web site” easy should naturally lead into recognizing how important and influential PHP was in making “play around with building a dynamic web app” easy.

                                                1. 3

                                                  Author here, I couldn’t have put it better. “PHP was the Geocities of dynamic web apps” — this is a brilliant way to put it. In fact I’m now peeved I didn’t think of putting it like this in the article. I’m stealing this phrase for future use. :)

                                                2. 2

                                                  Absolutely. And indeed, I saw those things totally widespread, to their full extent, in plenty of code bases. To add a bit of [dark] humor to the conversation, I even witnessed code that would use PHP’s templating capabilities to assemble PHP code that was fed to eval() on demand.

                                                  But I am really not sure you can do anything about bad programmers. No matter how much safety you put in place. It’s a similar situation with C: people complaining about all the footguns.

                                                  Can you really blame a language for people doing things like throwing a string in an SQL query without escaping it? Or a number without asserting its type? I really don’t have a clear opinion here. Such things are really stupid. I’m not sure it is very productive to design technology driven by a constant mitigation of such things.

                                                  EDIT: re-reading your post. So much nostalgia. The crazy things that we had. Makes me giggle. Register globals or magic quotes were indeed… punk, for lack of a better word. Ubernostrum put it really well in a sister comment.

                                                  1. 4

                                                    But I am really not sure you can do anything about bad programmers. No matter how much safety you put in place. […] Can you really blame a language for people doing things like throwing a string in an SQL query without escaping it?

                                                    Since you mention magic quotes … there’s a terrible feature that could have been a good feature! There are systems that make good use of types and knowledge of the target language to do auto-escaping with reasonable usability and static guarantees, where just dropping the thing into the query does the secure thing 98% of the time and throws an “I couldn’t figure this out, please hint me or use a lower-level function” compile error the other 2%. PHP could have given developers that. Instead it gave developers an automatic data destroyer masquerading as a security feature, again, enabled by default. That’s the kind of thing that pisses me off.
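That “just drop the thing into the query and it does the secure thing” model can be sketched with Python’s sqlite3 placeholders (the table and values here are invented for the example): the value travels to the driver separately from the SQL text, so there is no escaping step to forget.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")

# Attacker-controlled input, including a quote and SQL that would
# wreck a query built by naive string concatenation:
author = "mallory'); DROP TABLE comments; --"

# The ? placeholders keep the value out of the SQL text entirely,
# so it is stored as plain data, never parsed as SQL.
conn.execute("INSERT INTO comments VALUES (?, ?)", (author, "hi"))

row = conn.execute("SELECT author FROM comments").fetchone()
print(row[0])  # prints the hostile string, stored harmlessly as data
```

This is roughly the shape of the auto-escaping systems described above, minus the static “I couldn’t figure this out” check.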

                                                3. 3

                                                  I definitely had a lot of fun making mildly dynamic websites in PHP as a teen, but I wouldn’t want to get back to that model.

                                                  They might have a style selector at the top of each page, causing a cookie to be set and the server to serve a different stylesheet on every subsequent page load. Perhaps there is a random quote of the day at the bottom of each page.

                                                  JS in modern browsers allows that kind of dynamicity very nicely, and it’s easy to make it degrade gracefully to just a static page. It will even continue to work if you save the page to your own computer. :)

                                                4. 6

                                                  I’m still wondering why only PHP became available as an easy in-process scripting language. It’s not like you couldn’t build a similar system based on Python or Ruby or JS. Maybe it was the ubiquity of Apache, and the Apache developers not wanting to add another interpreter when “we already have PHP?”

                                                  I am someone who is, these days, primarily known for doing Python stuff. But back in the early 2000s I did everything I could in PHP and only dabbled in Perl a bit because I had some regular business from clients who were using it.

                                                  And I can say beyond doubt that PHP won, in that era, because of the ease it offered. Ease of writing — just mix little bits of logic in your HTML! — and ease of deployment via mod_php, which for the developer was far easier than messing around with CGI or CGI-ish-but-resident things people were messing with back then. There are other commenters in this thread who disagree because they don’t like the results that came of making things so easy (especially for beginning programmers who didn’t yet know “the right way” to organize code, etc.) or don’t like the way PHP sort of organically grew from its roots as one guy’s pile of helper scripts, but none of that invalidates the ease PHP offered back then or the eagerness of many people, myself included, to enjoy that easiness.

                                                  1. 4

                                                    mod_php was always externally developed from Apache and lived in PHP’s source tree.

                                                    1. 3

                                                      The other options did exist. There were mod_perl and mod_python for in-process (JS wasn’t really a sensible server-side option at the time we’re talking about), mod_fastcgi and mod_lisp for better-than-CGI out-of-process (akin to uwsgi today), and various specialized mod_whatevers (like virgule) used by individual projects or companies. mod_perl probably ran a sizeable fraction of the commercial web at one point. They didn’t take PHP’s niche for various reasons, but largely because they weren’t trying to.

                                                      1. 2

                                                        There was also AOLserver, which was scriptable with Tcl. It looks like this was around in the early nineties, but perhaps it wasn’t open sourced yet at that point? That would definitely make it harder to gain momentum. Of course Tcl was also a bit of an odd language. PHP still had the benefit of being a seamless “upgrade” from HTML - just add some logic here and there to your existing HTML files. That’s such a nice transition for people who never programmed before (and hell, even for people who had programmed before!).

                                                        Later on, when Ruby on Rails became prominent (ca 2006), it was still not “easy” to run it. It could run with CGI, but that was way too slow. So you basically had to use FastCGI, but that was a bit of a pain to set up. Then, a company named Phusion released mod_passenger, which supposedly made running Ruby (and later, other languages like Python) as easy as mod_php. The company I worked for never ran it because we were already using fastcgi with lighttpd and didn’t want to go back to Apache with its baroque XML-like config syntax.

                                                        1. 2

                                                          I worked at a shared hosting company at the time of the PHP boom. It all boiled down to safe mode. No other popular competitor (Perl / Python) had it.

                                                          Looking back, it would have been fairly cheap to create a decent language for the back-end development that would have worked way better. PHP language developers were notoriously inept at the time. Everyone competent was busy using C, Java, Python and/or sneering at the PHP crowd, though.

                                                          1. 1

                                                            It’s not like you couldn’t build a similar system based on Python or Ruby or JS.

                                                            There’s ERuby which was exactly this. But by then PHP was entrenched.

                                                            I did a side project recently in ERuby and it was a pleasure to return to it after >10 years away.

                                                          1. 12

                                                            I think the Unicode consortium made a huge mistake giving in to adding emojis to Unicode. It’s a bottomless pit, very politically charged and definitely ambiguous (compare for example the different emoji-styles across operating systems/fonts).

                                                            It severely complicates most of the Unicode algorithms (grapheme cluster detection, word/sentence/line-segmentation, etc.) and, compared to dead and alive languages, feels very short-lived, like a fashion.

                                                             How will emojis be seen in 50 years? I can already feel the second-hand embarrassment.
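The algorithmic cost is easy to demonstrate in Python (this particular emoji is just one example of a ZWJ sequence): a single on-screen glyph can be several code points, so naive character counting, and by extension grapheme, word, and line segmentation, has to know about these sequences.

```python
# "woman technologist" is one glyph on screen but three code points:
# WOMAN (U+1F469) + ZERO WIDTH JOINER (U+200D) + LAPTOP (U+1F4BB)
emoji = "\U0001F469\u200D\U0001F4BB"

print(len(emoji))                    # 3 - what a naive "character count" sees
print([hex(ord(c)) for c in emoji])  # the individual code points
```

Correctly counting this as one user-perceived character requires the full grapheme-cluster rules, which is exactly the complication the parent comment is pointing at.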

                                                            1. 15

                                                              It looks like people were already using emoji, and Unicode had to add them for compatibility. https://unicode.org/emoji/principles.html

                                                              1. 13

                                                                 Every thread about emoji has a “Unicode shouldn’t have added them” comment (or several), and I feel like I then always step in to remind those commenters that basically every single chat/message system humans have built in the internet era has reinvented emoticons in some form or another, whether purely textual (“:-)” and “:/” and friends) or custom graphics, or a mix of text abbreviations that get replaced by graphics.

                                                                This suggests that they are a non-negotiable part of how humans conduct written communications in this era. Which means Unicode must find a way to capture them, by the nature of Unicode itself.

                                                                1. 4

                                                                  This suggests that they are a non-negotiable part of how humans conduct written communications in this era. Which means Unicode must find a way to capture them, by the nature of Unicode itself.

                                                                  You might as well use the same argument to claim that Unicode should capture all words, too.

                                                                  1. 3

                                                                    Doesn’t it try? Morally, is there any difference between a code sequence of letters representing a word, and a code sequence of letters and combining characters that come together to create a single glyph?

                                                                  2. 1

                                                                    This is solved well with ligatures at the font level.

                                                                    Solving it at the font level has the additional benefit of not blocking the addition of new emoji on a standards body, as well as allowing graceful degradation to character sequences that anyone, including those on older software, can view.

                                                                    1. 8

                                                                      Ligatures can’t and don’t solve all the traditional emoticons, let alone emoji.

                                                                      Emoji are a part of written communication, no matter how much someone might personally dislike them, and as such belong in Unicode.

                                                                      1. 1

                                                                        Ligatures can’t and don’t solve all the traditional emoticons, let alone emoji.

                                                                         Why not? This approach is more or less used for flags, where flag emoji are – for political reasons, like ‘TW’ – ligatures of country codes in a special Unicode range. If you happen to put ‘Flag{T}’ beside ‘Flag{W}’, you may get the letters ‘TW’, or you may get a flag that enrages China, depending on your font.

                                                                         If you want to prevent ASCII ‘bar:(foo)’ from being interpreted as a smiley emoji, maybe Unicode could standardize non-rendering ‘emoji brackets’ as a way of hinting to a font system that it could render a sequence of characters as an emoji ligature.

                                                                        There’s no need to restrict emoji to the slow pace of the unicode consortium, when dropping in a new font will get you the new hotness, especially since using text sequences will render legibly for everyone not using that font.

                                                                        This is win/win. It makes things more usable for those that dislike emoji, and it makes more emoji available to those that like emoji.

                                                                        1. 5

                                                                          Because fonts cannot change emoticons into images? They have different meaning, so font ligature processing, which is essentially replaceAll(characters/glyphs/whatever, graphic), does not work.

                                                                          No one can adopt a system font that magically turns one set of characters into another. Because it can’t be adopted as the system font, no apps get emoji. A person can’t simply change the default, for the same reason the system couldn’t: you made ligatures that potentially change the meaning of bytes.

                                                                          As far as a font is concerned, there is no difference between the :) in “see you :)” and the one in “(: I’ve seen this comment format somewhere :)”, but your ligature “solution” makes the latter nonsense.

                                                                          Emoji also include characters that have no equivalent emoticon, whether because of the number of characters required or the lack of color.

                                                                          Now, you may not like emoji, but arguing “we didn’t need it before” is pretty weak sauce: we didn’t have it. The goal of text is to communicate, and it is clear that a vast proportion of all people alive use emojis in their communication. So computers should be able to facilitate that communication rather than requiring workarounds.

                                                                          The use of semagrams in alphabetic languages is nothing new - even hieroglyphics used semagrams.

                                                                          1. 2

                                                                            Because fonts cannot change emoticons into images?

                                                                            That’s… just untrue.

                                                                            You ignored the entire paragraph where I pointed out that flags ALREADY work this way. Then, you ignored the second paragraph which addresses the problem you mentioned in the third paragraph, where something like an RTL marker could mark emoji. Then you invented me saying “we didn’t need it before”.

                                                                            In fact, you seem to have ignored everything I wrote.

                                                                            It would be nice if you responded to what I said, rather than what you imagined I said.

                                                                          2. 3

                                                                            The simple counterpoint to this is to imagine the Unicode Consortium declaring that all the writing systems and characters which ever will be needed have been invented already — anything new will just be a variant or a ligature of something existing!

                                                                            That would be dangerously incorrect, and would not work at all.

                                                                            So, look. I get that some people really really really really don’t like emoji and wish they didn’t exist. But they do exist and they are a perfectly valid form of written communication and they are not sufficiently captured by ligatures or other attempts to layer on top of ASCII emoticons, any more than an early-2000s forum would have been happy with just the ASCII forms. For decades we’ve been used to a richer set of these, and it is right and proper for Unicode to include them. Complaints about them, to me, feel like ranting that kids these days say “lol” instead of typing out the fully-punctuated-and-capitalized sentence “That is funny!”

                                                                            1. 1

                                                                              and they are not sufficiently captured by ligatures or other attempts to layer on top of ASCII emoticons, any more than an early-2000s forum would have been happy with just the ASCII forms.

                                                                              So far in this thread, I’ve seen this asserted – but I don’t see why flags are appropriately captured by ligatures, while emoticons are not. What is the technical difference that allows one to work while the other does not?

                                                                              Again, I’m arguing that for emoji lovers, ligatures are BETTER and MORE FUNCTIONAL than encoding emoji individually into unicode. That this would be an improvement in availability and usability, not a regression.

                                                                              We already have messaging programs ignoring the emoji range and adding their own custom :emoji: sequences because Unicode moves too slowly for them. We can wait years for Unicode to standardize animated party parrots, or we can add :party_parrot: as text that gets interpreted by our application. Slack, and most other programs, chose the latter. Not to mention adding stickers – which arguably need the same position in Unicode as emoji.

                                                                              Unicode’s charter is to standardize existing practice. Why not let Unicode standardize the way that emoji ranges are worked around in practice, today, with standardized “emoji brackets” that allow clients to mark any text sequence as an emoji ligature? This matches the way things actually work, and fills the need for custom emoji (and stickers) that the Unicode consortium is not serving.

                                                                              1. 1

                                                                                I offer the following counter proposal: since you seem to think it’s at least possible and perhaps even easy, I challenge you to pick, say, 20 code points at random from among the emoji and come up with distinct, memorable ASCII sequences you think would suffice to be ligature’d into those emoji. I think that this will help you to understand why I don’t think “just ligature them” is going to work.

                                                                        2. 2

                                                                          This is solved well with ligatures at the font level.

                                                                          Demonstrably false by the number of systems that screw up trying to auto detect smileys from colons and parentheses. 🙂 is unambiguous semantically; “:)” is not.

                                                                          1. 4

                                                                            I actually feel the opposite. “:)” is unambiguously a smiling face, and is mostly uniform in appearance across system UI fonts. The icon “🙂” is rendered differently depending on not only the operating system but also the specific app being used. The recipient of my message may see a completely different image than I intend for them to see. Even worse, the meaning and tone of my past emoji messages can completely change whenever Apple or Google or Telegram decides to redesign their emoji.

                                                                            Too many apps have no way to disable auto-replacement of ascii faces.

                                                                      2. 5

                                                                        I was going to mock your post by pointing out all of the other stuff in Unicode which is “politically charged”, from Tibetan to Han unification to the Hangul do-over to that time that a single character was added just for Japan’s government. But this is a grand understatement of exactly how political and pervasive the Consortium’s work is. Peruse the list of versions of Unicode and you’ll see that we already have a “bottomless pit” of natural writing systems to catalogue.

                                                                        I think that the most inaccurate part of your claim is that emoji are “like a fashion”. Ideograms are millennia old and have been continuously used for communicating mathematics.

                                                                        1. 4

                                                                          I think the Unicode consortium made a huge mistake giving in to adding emojis to Unicode. It’s a bottomless pit, very politically charged and definitely ambiguous (compare for example the different emoji-styles across operating systems/fonts).

                                                                          This applies to other planes in Unicode, due to https://en.wikipedia.org/wiki/Han_unification

                                                                          Also, any kind of character system is politically charged; an interesting read here is: https://www.hastingsresearch.com/net/04-unicode-limitations.shtml (I do not agree with the points there, and history has proven the author wrong, but it’s a good specimen of pre-emoji political Unicode arguments)

                                                                          1. 2

                                                                            It severely complicates most of the Unicode algorithms (grapheme cluster detection, word/sentence/line-segmentation, etc.)

                                                                            If there were no emoji in Unicode, but everything else remained, would any of these things really be simpler? The impression I get is that there are corner cases for all of this complexity across the languages Unicode covers, independent of emoji; emoji just expose them to Westerners more.

                                                                          1. 5

                                                                            I always recommend that people give taskwarrior a try. (There is a good vim integration, too.) It is an excellent tool that deserves to be better known: https://taskwarrior.org/

                                                                            1. 1

                                                                              I currently use this (with the vimwiki integration) and it’s good, but when you hit bugs it’s damn hard to debug them!

                                                                            1. 3

                                                                              I use the go-jira cli client.

                                                                              I just type jira browse PROJECT-1234 at a terminal and my browser opens up, happy as can be.

                                                                              1. 1

                                                                                I use this too. I patched mine to add support for managing external links. The project seems to be dead or nearly so, though, and I’ve run into enough problems that I’m close to giving up on it.

                                                                                1. 1

                                                                                  Any concrete problems? It makes jira tolerable to me, and I only just found it, so I want my eyes to be wide open.

                                                                                  1. 1

                                                                                    The biggest issue is auth periodically fails for no obvious reason and when it does it retries too fast and gets blocked by the server for a while.

                                                                              1. 9

                                                                                Sometimes I think people designing things need to take a step back. One can see the chain of reasoning that results in --directory=/ --volatile=yes actually meaning only /usr being mounted, but on the face of it that’s pretty confusing.

                                                                                1. 3

                                                                                  Totally agree. I much prefer just running a rootless container that I build semi-regularly from Puppet.

                                                                                  It ends up being the same as my host, but I can actually change what’s installed without polluting my host distro. I can also choose different distros.

                                                                                  I can see this getting better over time, but right now it feels like the uncanny valley.

                                                                                  1. 1

                                                                                    Agreed. I also think they need a higher level command or option which encapsulates all of these options. Having an explosion of CLI options like that means it’s too low level and confusing for the user IMO.

                                                                                  1. 31

                                                                                    I think both OCaml and Haskell are here to stay and are ‘safe’ bets. Erlang will probably still be kicking as well.

                                                                                    1. 8

                                                                                      I’d like to think Haskell is a contender, but it’s mutating quite fast, and I’m not entirely convinced we can build all the older Haskell programs today, which doesn’t bode well for a timescale an order of magnitude longer.

                                                                                      1. 11

                                                                                        Some of the programs are intentionally not buildable because older versions of the compiler used to accept code that wasn’t meant to be accepted.

                                                                                    1. 3

                                                                                      I’m always concerned about the potential foot guns of enabling features and losing my ability to boot.

                                                                                      There is a cool project (that admittedly I haven’t tried yet), ZFS Boot Menu[1], that solves a lot of these problems for Linux users. Roughly how it works: put a Linux kernel and an initramfs with a ZFS kernel module on a small non-ZFS filesystem (e.g. the EFI system partition), eliminating the need for a traditional bootloader to support ZFS.

                                                                                      [1] https://zfsbootmenu.org/

                                                                                      1. 5

                                                                                        Eliminating the need for a traditional bootloader to support zfs.

                                                                                        The easiest is to have a separate /boot partition in a universal-ish filesystem like ext2 or vfat.

                                                                                        1. 4

                                                                                          One of the most compelling features of ZFS on root, in my opinion, is boot environments. Any time something is about to change on my system, a snapshot can be taken and a bootable clone created, so in the event something breaks I can select the pre-broken environment at the bootloader.

                                                                                          By putting /boot on a separate partition, you create the possibility of a split-brain system: for example, I’ve gotten a new kernel, and some other important package has broken, so I’ve booted into my old environment. In this scenario I’m booting (assuming the break wasn’t the kernel upgrade itself), but I’m on the newer kernel while my package manager believes I’m still on an older version.

                                                                                          For a consistent environment, having /boot included on ZFS is vital.

                                                                                          1. 2

                                                                                            When I upgrade my kernel I keep the old one around, sometimes more than one. So in that situation I’ll have it available if I booted an older root filesystem.

                                                                                        2. 1

                                                                                          I have tried this. I love it. The install process even caught my custom-added kernel cmdline from grub and brought it along.

                                                                                          Disclaimer: I know the maintainers (on IRC).

                                                                                          1. 1

                                                                                            I like the ZFS Boot Menu project.

                                                                                            Do you know of any Linux distribution that uses it? I mean, you just install a Linux such as Ubuntu or Debian, and after a reboot you end up with a Linux system with ZFS Boot Environments and ZFS Boot Menu set up?

                                                                                            Regards.

                                                                                            1. 2

                                                                                              I don’t know, but I think it sounds like a decent idea.

                                                                                          1. 5

                                                                                            I wonder if they were aware of systemd’s existing dash encoding scheme.

                                                                                              1. 1

                                                                                                That’s it! Or at least that’s a tool to work with them.

                                                                                                It’s domain specific, but for paths, having to escape dashes is a real pain in the bum :-)

                                                                                            1. 2

                                                                                              the kernel has moved its minimum GCC requirement to version 5.1

                                                                                              Yeah, thanks for moving to a 5.x gcc and failing compilation on CentOS 7’s default gcc, which is 8 years old, while they still

                                                                                              look to moving to the C99 standard — it is still over 20 years old

                                                                                              I am a little bit confused by a world where you can always have the latest and greatest Linux kernel from upstream (less than a week old) and be confident about it, but be hesitant to trust a compiler or C standard that has been there for 20 years, while asking developers to trust a compiler that’s 7 years old (gcc 5.1).

                                                                                              1. 1

                                                                                                Is anyone still updating CentOS 7? It’s legacy now, surely.

                                                                                                1. 2

                                                                                                  Centos7 still gets package updates via upstream, unlike 8.

                                                                                                  1. 1

                                                                                                    But there are precious few of those. Only CVEs of severity important or higher are applied by default, with other updates at Red Hat’s discretion. Disclaimer: I work for Red Hat, but may still be misrepresenting their policy.

                                                                                              1. 3

                                                                                                I didn’t really understand this at first, but this technique is based on the fact that a struct inside a struct is essentially flattened out in terms of memory representation. The technique is called an intrusive data structure and it lets you do things generically (with macros) with only one linked list struct. I am used to making a new linked list struct for every data type in C - so it’s a pretty clever hack!

                                                                                                1. 2

                                                                                                  These macros originated with 4BSD, as far as I can tell. Modified versions are present in the *BSDs, Linux, and the NT kernel. They’re slightly terrifying because they require pointer arithmetic that removes type information, including using offsetof to find the location of the field and then subtracting it.

                                                                                                  1. 1

                                                                                                    They’re quite useful! Like you said, they let you write generic functions easily (you could write generic versions of a non-intrusive list, but with indirection and allocation overhead). The other thing that’s very nice is that they allow for local manipulation: if you have a reference to the struct, you can e.g. remove it in O(1) from the containing list without traversing the list or even needing a reference to the list head. This can make cleanup code much simpler (a C++ destructor could remove the object from the lists it’s present in), and it also makes cases where a single item needs to be present in multiple lists much easier to manage. They tend to show up a decent amount in systems and games programming.

                                                                                                    1. 1

                                                                                                      When I first saw this I thought it was genius. The macro to convert a list pointer to its parent is a fun piece of pointer arithmetic to unpick. It involves casting the literal 0 to a pointer.

                                                                                                      Another significant advantage is normal use (outside of that controlled and tested macro) doesn’t need casts: with the traditional list-struct-on-the-outside, you end up having to use void pointers for the payload, and there are many more opportunities to make a type error. Also the list structure and parent structure can be easily contiguous in memory.

                                                                                                    1. 21

                                                                                                      Ref the harmful consequences of the robustness principle, I think the user agent string and its role in compatibility is the pinnacle of failure on this metric for web standards: It enables exactly the wrong kind of compatibility and becomes a compatibility item in itself that shouldn’t be. Thus why browser sniffing is evil. As such, this kind of breakage is pure karma. May the UA string fade into unimportance.

                                                                                                      To be ridiculous, let’s continue the tradition of telling them what they want first and instead append the truth:

                                                                                                      Mozilla/5.0 (X11; Windows x86_64; KHTML) Chrome/99.0 (Wayland; Linux Aarch64; Gecko) Firefox/100.0

                                                                                                      1. 4

                                                                                                        Given Chrome’s dominance, they should flex their muscles and remove UA strings entirely. Let the world catch up to them.

                                                                                                        1. 8

                                                                                                          It’s not removal, but they are definitely intending to reduce User Agent data, replacing a bunch of platform data with hard-coded strings. For example, Chrome on Android will (eventually) always report Linux; Android 10; K regardless of kernel, Android version, or phone model.

                                                                                                          On the other hand, a bunch of that information is going to be exposed as Client Hints, so sites can still ask for information even if it’s not collected passively.

                                                                                                          1. 3

                                                                                                            Safari has frozen their User-Agent string: https://twitter.com/rmondello/status/943545865204989953

                                                                                                            My Safari reports “Intel Mac OS X 10_15_7”, and I’m on ARM macOS 12.2.

                                                                                                            1. 3

                                                                                                              My hope is that more chaos → less value → disuse → easier to remove.

                                                                                                              But the normative fix would be a new HTTP version.

                                                                                                              1. 2

                                                                                                                When making statements like this that support a Chrome monopoly and give Google total control of the internet, I can’t help but wonder if you would have sided with MS and IE in the late 90s and early 2000s, when they had the better browser (speed, features, site compatibility, etc.) and a substantially weaker monopoly? (Remember, CSS was an IE-driven feature, and XHR was an MS-created de facto standard.)

                                                                                                                1. 4

                                                                                                                  I don’t see anything about OP’s post that suggests they approve of chrome’s dominance, they’re just acknowledging it, and suggesting something they could do with that dominance that might improve things for the open web.

                                                                                                                  1. 4

                                                                                                                    I had intended my comment to be a little tongue in cheek, but on reread it doesn’t have any hint of “this is kind of a joke” so that’s on me.

                                                                                                                    I am strongly in favor of diversity of browsers, but like CSS coming from Microsoft’s dominance, it’s possible for the de facto leader to use their position to improve the situation for everyone. UA strings are bad and little-to-nothing is gained by continuing to use them (except for interoperability); either we should freeze them and make it a specified string, or they should be removed entirely.

                                                                                                                    I don’t like code churn or breaking changes but I also think infinite compatibility leads to bad code and the inability to write new conforming implementations (see the above link about the failures of the Robustness Principle, or Microsoft Excel’s awful date type maintaining compatibility with Lotus 1-2-3 to the detriment of everyone).

                                                                                                                    1. 1

                                                                                                                      Ah fair enough - I’ve had the occasional joking comment be read as serious :)

                                                                                                                      User agents are a challenging thing though simply because so many existing sites use them that they can’t simply be removed. I did think all the browsers agreed that only the version numbers would change from now on, but I don’t know how serious that was

                                                                                                                    2. 2

                                                                                                                      It’s too easy to make mistakes with UA strings. We could live in a world where, if you need a feature, you check whether it’s there; if a feature is there but not up to spec, it should be fixed.

                                                                                                                      1. 1

                                                                                                                        There are some bugs that can’t be reasonably detected and so the best option is a version check. The problem is that the majority of developers who look at the user agent are doing it instead of feature checks :-/

                                                                                                                1. 4

                                                                                                                  “this wouldn’t have happened with ZFS” is a strange conclusion to come to after a user error. Also: I’d recommend a mundane backup strategy. Having to package something smells of novelty. Although I’ve not heard of the system they mention, it might be fine.

                                                                                                                  1. 6

                                                                                                                    ZFS would have told you why the drive wasn’t in the array anymore, with a counter showing how many checksums failed (the last column in zpool status; it should be 0). The author would thus have known there was something wrong with the SSD, and thought twice before mindlessly adding it back to the array.

                                                                                                                    I’m not entirely sure what would happen if you add the SSD back to the array anyway; at the very least you must give it a clean bill of health with zpool clear. I would also expect that ZFS urges or maybe even forces you to resilver the affected device, which would surface the corruption again. The main problem with mdadm in this case was that when re-adding the device, it found the device had been part of the array before and decided to trust it blindly, not remembering that it was thrown out earlier, or why.
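The inspect-then-readmit flow described above might look something like this (pool and device names are hypothetical, and the exact output layout varies by ZFS version):

```
$ zpool status tank          # CKSUM is the last error column; it should be 0
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0    42   <- the suspect SSD

$ zpool clear tank sdb       # reset the error counters once you decide to trust the device
$ zpool scrub tank           # re-verify every block; recurring CKSUM errors mean the SSD really is bad
```

The point is that ZFS keeps per-device error history visible until you explicitly clear it, instead of silently re-trusting a device on re-add.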

                                                                                                                    1. 3

                                                                                                                      ZFS should resilver when you add the drive back to the array, and verify/update the data on the failed drive.

                                                                                                                    2. 5

                                                                                                                      The readme in the repo for that project says in bold text that it is experimental, which is exactly what I would avoid if I were looking for a reliable backup system… but to each their own.

                                                                                                                      1. 5

                                                                                                                        How was this user error? The RAID array silently corrupted itself. Possibly because of the RAM?

                                                                                                                        the filesystem in the treefort environment being backed by the local SSD storage for speed reasons, began to silently corrupt itself.

                                                                                                                        ZFS checksums the content of each block, so it would have been able to tell you that what you wrote is not what is there anymore. It could also pick the copy from the disk that was NOT corrupted, by matching the checksum. It would also have stopped changing things the moment it hit inconsistencies.
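The self-healing idea can be sketched in a few lines of TypeScript (a toy FNV-1a hash stands in for ZFS’s real block checksums, and all names here are invented for illustration):

```typescript
// Toy FNV-1a hash standing in for a real block checksum (fletcher4/sha256 in ZFS).
function checksum(data: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < data.length; i++) {
    h ^= data.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

interface Block {
  data: string; // block contents as stored on one disk
  sum: number;  // checksum recorded at write time, stored separately from the data
}

function writeBlock(data: string): Block {
  return { data, sum: checksum(data) };
}

// Read from a two-way mirror: return whichever copy still matches its
// recorded checksum, so silent corruption on one disk is detected and
// masked by the healthy copy instead of being served to the caller.
function readMirrored(a: Block, b: Block): string {
  if (checksum(a.data) === a.sum) return a.data;
  if (checksum(b.data) === b.sum) return b.data;
  throw new Error("both copies fail their checksums");
}
```

A corrupted copy (data changed, recorded checksum unchanged) is simply skipped — which is the per-block verdict mdadm lacked here: it had no way to tell which mirror leg was lying.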

                                                                                                                        1. 2

                                                                                                                          The drive failed out of the array and they added it back in.

                                                                                                                          1. 4

                                                                                                                            Yeah, but why did the array think it was fine when it had previously failed out?

                                                                                                                            1. 2

                                                                                                                              I don’t know, it’s a reasonable question but doesn’t change that fundamentally it was a user mistake. ZFS may have fewer sharp edges but it’s perfectly possible to do the wrong thing with ZFS too.