1. 1

    AWESOME. As someone who has made simple, mostly-text websites for a long time I’ve been looking for something like this.

    1. 8

      It’s really satisfying to fix up old, broken code and get it running again, especially when the results are as visible as a game.

      1. 1

        Totally! A while back I ported BSD rain to Linux (original source is here). I was surprised my distro didn’t have it. While it wasn’t broken (it obviously compiled on NetBSD), it was nice to have an old friend back.

      1. 10

        It’s going to be interesting to see how much this will affect the future of how the WWW functions. GDPR sure didn’t manage to be as severe a measure as we’d hoped it would be. Heck, I’m having trouble getting the relevant authorities to understand clear violations that I’ve forwarded to them; they just end up being dismissed.

        But this law here is of course not for the people, no… This one is for the copyright holders, and they carry much more power. So will this actually result in the mess we expect?

        1. 25

          GDPR and the earlier cookie law have created a huge amount of pointless popup alert boxes on sites everywhere.

          1. 10

            The one thing I can say is that, due to the GDPR, you have the choice to reject many cookies which you couldn’t do before (without ad-blockers or such). That’s at least something.

            1. 10

              Another amazing part of the GDPR is data exports. Before, hardly any website offered them, which locked you in.

              1. 4

                You had this choice before though; it’s easy to make a cookie whitelist in Firefox, for example, with no addons. The GDPR has you trust the site that wants to track you not to give you the cookies, instead of you exercising personal autonomy and choosing not to save the cookies with your own client.

                1. 26

                  I think this attitude is a bit selfish since not every non-technical person wants to be tracked, and it’s also counter-productive, since even the way you block cookies is gonna be used to track you. The race between tracker and trackee can never be won by any of them if governments don’t make it illegal. I for one am very happy about the GDPR, and I’m glad we’re finally tackling privacy in scale.

                  1. 2

                    it’s not selfish, it’s empowering

                    if a non-technical person is having trouble we can volunteer to teach them and try to get browsers to implement better UX

                    GDPR isn’t governments making tracking illegal

                    1. 15

                      I admire your spirit, but I think it’s a bit naive to think that everyone has time for all kinds of empowerment. My friends and family want privacy without friction, without me around, and without becoming computer hackers themselves.

                  2. 18

                    It’s now illegal for the site to unnecessarily break functionality based on rejecting those cookies though. It’s also their responsibility to identify which cookies are actually necessary for functionality.

                2. 4

                  In Europe we’re starting to sign GDPR papers for everything we do… even for buying glasses…

                  1. 12

                    Goes to show how much information about us is being implicitly collected, in my honest opinion, whether for advertisement or administration.

                    1. 1

                      Most of the time, you don’t even get a copy of the document; it’s mostly legal jargon that nobody reads… it might be a good thing, but it’s far from perfect.

                3. 4

                  “The Net interprets censorship as damage, and routes around it.”

                  1. 22

                    That old canard is increasingly untrue as governments and supercorps like Google, Amazon, and Facebook seek to control as much of the Internet as they can by building walled gardens and exerting their influence on how the protocols that make up the internet are standardized.

                    1. 13

                      I believe John Gilmore was referring to old-fashioned direct government censorship, but I think his argument applies just as well to the soft corporate variety. Life goes on outside those garden walls. We have quite a Cambrian explosion of distributed protocols going on at the moment, and strong crypto. Supercorps rise and fall. I think we’ll be OK.

                      Anyway, I’m disappointed by the ruling as well; I just doubt that the sky is really falling.

                      1. 4

                        I agree that it is not the sky falling. It is a burden for startups and innovation in Europe though. We need new business ideas for the news business. Unfortunately, we have now committed to life support for big old publishers like Springer.

                        At least, we will probably see some startups applying fancy AI techniques to implement upload filters. If they become profitable enough, then Google will start its own service for free (in exchange for sniffing all the data, of course). Maybe some lucky ones get bought before they go bankrupt. I believe this decision is neutral or positive for Google.

                        The hope is that creatives earn more, but Germany already tried it with the ancillary copyright for press publishers (German: Leistungsschutzrecht für Presseverleger) in 2013. It did not work.

                        1. 2

                          Another idea I had for a nice AI startup: summarizing news with natural language processing. I do not see how writing news with an AI would be illegal; only copying the words/sentences would be.

                          Maybe, however, you cannot disclose where you aggregated the original news that you feed into your AI :)

                      2. 4

                        Governments, corporations, and individual political activists are certainly trying to censor the internet, at least the most popularly-accessible portions of it. I think the slogan is better conceptualized as an aspiration for technologists interested in information freedom - we should interpret censorship as damage (rather than counting on the internet as it currently works to just automatically do it for us) and we should build technologies that make it possible for ordinary people to bypass it.

                    2. 2

                      I can see a real attitude shift coming when the EU finally gets around to imposing significant fines. I’ve worked with quite a few organisations that have taken a ‘bare minimum, wait and see’ attitude and who’d make big changes if the law were shown to have teeth. Obviously pure speculation though.

                    1. 3

                      Respectfully, is that something an org can brag about?

                      The time-to-patch metric heavily depends on the nature of the bug to patch.

                      I don’t know the complexity of fixing these two vulns. Surely fixing things fast is something to be proud of, but if they don’t want people pointing fingers at Mozilla when a bug stays more than a week in the backlog, they shouldn’t brag when one doesn’t.

                      1. 17

                        Assuming that the title refers to fixing and successfully releasing a bugfix, a turnaround of less than 24 hours is a huge accomplishment for something like a browser. Don’t forget that a single CI run can take several hours, careful release management/canarying is required, and it takes time to measure crash rates to make sure you haven’t broken anything. The 24 hours is more a measure of the Firefox release pipeline than the developer fix time; it’s also a measure of its availability and reliability.

                        1. 10

                          This. I remember a time when getting a release like this out took longer than a week. I think we’ve been able to do it this fast for a few years now, so it’s not that impressive anymore.

                        2. 6

                          As far as I can tell, the org isn’t bragging; the “less than 24h” boast is not present on the security advisory.

                          1. 1

                            To be fair, you’re right.

                          2. 2

                            also the bugs are not viewable - even when logged in

                            so it’s hard to get any context on this

                            1. 2

                              It is possible to check the revisions between both versions, and they do not seem so trivial.

                              These are the revisions (without the one that blocks some extensions):
                              https://hg.mozilla.org/mozilla-unified/rev/e8e770918af7
                              https://hg.mozilla.org/mozilla-unified/rev/eebf74de1376
                              https://hg.mozilla.org/mozilla-unified/rev/662e97c69103

                              1. 1

                                Well, sorta the same, but for context: they fixed pwn2own security vulnerabilities in less than 24 hours 12 months ago

                                https://hacks.mozilla.org/2018/03/shipping-a-security-update-of-firefox-in-less-than-a-day/

                              2. 2

                                Respectfully, is that something an org can brag about?

                                I always assume it’s a P.R. stunt. Doubly true if the product is in a memory-unsafe language without lots of automated tooling to catch vulnerabilities before they ship. Stepping back from that default, Mozilla is also branding themselves on privacy. This fits into that, too.

                                EDIT: Other comments indicate the 24 hrs part might be editorializing. If so, I stand by the claim as a general case for “we patched fast after unsafe practices = good for PR.” The efforts that led to it might have been sincere.

                              1. 1

                                @pushcx / @alynpost / @Irene, does this seem like enough support to add the tags?

                                1. 3

                                  Whenever you learn something new, take this mental model: Never do things for their own sake. Which translates to: Never learn Rust just because you want to learn Rust.

                                  This is great advice to follow! I have a related rule for personal projects: I can either write something I know in a language I don’t know, or I can write something I don’t know in a language I know. Mixing the two means bad news.

                                  (side-note: I just signed up for Rust and Tell Berlin! see you there)

                                  1. 15

                                    After the recent announcement of the F5 purchase of NGINX we decided to move back to Lighttpd.

                                    It would be interesting to know why, instead of just a blog post which is basically an annotated lighttpd configuration.

                                    1. 6

                                      If history has taught us anything, the timeline will go a little something like this. New cool features will only be available in the commercial version, because $$. The license will change, because $$. Dead project.

                                      And it is indeed an annotated lighttpd configuration, as this is roughly a replication of the nginx config we were using and… the documentation of lighttpd isn’t that great. :/

                                      1. 9

                                        The lighttpd documentation sucks. Or at least it did three years ago when https://raymii.org ran on it. Nginx is better, but still missing comprehensive examples. Apache is best on the documentation front.

                                        I wouldn’t move my entire site to another webserver anytime soon (it runs nginx), but for new deployments I regularly just use Apache. 2.4 is much, much faster and just does everything you want, and it being open source and not bound to a corporation helps.

                                        1. 1

                                          Whatever works for you. We used to run all our websites on lighttpd, before the project stalled. So it seemed a good idea to move back, before nginx frustration kicked in. :)

                                          1. 3

                                            I’m a bit confused. You’re worried about Nginx development stalling or going dead in the future. So you switched to one that’s already stalled in the past? Seems like the same problem.

                                            Also, I thought Nginx was open source. If it is, people wanting to improve it can contribute to and/or fork it. If not, the problem wouldn’t be the company.

                                            1. 2

                                              The project is no longer stalled, and if it stalls again we’re going to move, again. Which open source project did well after the parent company got acquired?

                                              1. 3

                                                I agree with you that there’s some risk after a big acquisition. I didn’t know lighttpd was active again. That’s cool.

                                                1. 2

                                                  If it was still as dead as it was a couple of years ago I would have continued my search. :)

                                                  1. 1

                                                    Well, thanks for the tip. I’ve been collecting lightweight servers and services written in C to use for tests of analysis and testing tools later. Lwan was the main one for the web. Lighttpd seems like a decent choice for a higher-feature server. I read Nginx was a C++ app; that means I’d have less tooling to use on it unless I build a C++-to-C compiler. That’s… not happening… ;)

                                                    1. 3

                                                      nginx is 97% C with no C++ so you’re good.

                                                      1. 1

                                                        Thanks for correction. What’s other 3%?

                                                        1. 2

                                                          Mostly vim script with a tiny bit of ‘other’ (according to github so who knows how accurate that is).

                                                          1. 1

                                                            Alright. I’ll probably run tools on both then.

                                                            1. 2

                                                              Nginx was “heavily influenced” by Apache 1.x; a lot of the same architecture, like memory pools etc. FYI.

                                                2. 2

                                                  SuSE has been going strong, and has been acquired a few times.

                                                  1. 1

                                                    SuSE is not really an open-source project though, but a distributor.

                                                    1. 3

                                                      They do have plenty of open-source projects on their own, though. Like OBS, used by plenty outside of SuSE too.

                                          2. 5

                                            It’s a web proxy with a few other features, in at least 99% of all cases.

                                            What cool new features are people using?

                                            Like, reading a few books on the topic suggested to me that despite the neat things Nginx can do we only use a couple workhorses in our daily lives as webshits:

                                            • Virtual hosts
                                            • Static asset hosting
                                            • Caching
                                            • SSL/Let’s Encrypt
                                            • Load balancing for upstream servers
                                            • Route rewriting and redirecting
                                            • Throttling/blacklisting/whitelisting
                                            • Websocket stuff

                                            Like, sure you can do streaming media, weird auth integration, mail, direct database access, and other stuff, but the vast majority of devs are using a default install or some Docker image. But the bread and butter features? Those aren’t going away.

                                            If the concern is that new goofy features like QUIC or HTTP3 or whatever will only be available under a commercial license…maaaaaybe we should stop encouraging churn in protocols that work well enough?

                                            It just seems like much ado about nothing to me.

                                            1. 6

                                              maaaaaybe we should stop encouraging churn in protocols that work well enough?

                                              They don’t work well enough on mobile networks. In particular, QUIC’s main advantage over TCP is it directly addresses the issues caused by TCP’s congestion-avoidance algorithm on links with rapidly fluctuating capacities. I share your concern that things seem like they’re changing faster than they were before, but it’s not because engineers are bored and have nothing better to do.

                                            2. 4

                                              New cool features will only be available in the commercial version, because $$.

                                              Isn’t that already the case with nginx?

                                          1. 5

                                            It’s common to see beginners spending lots of energy switching back and forth between their editor and a terminal to run rustc, and then scrolling around to find the next error that they want to fix.

                                            For a while I’ve been really skeptical of the I in IDEs, but I’ve had a great experience with the rust-enhanced Sublime plugin. Admittedly I’ve only worked in small codebases, but I’ve found it to be extremely fast. I’m not sure precisely how Sublime plugins work, but they don’t seem to cause the main render thread to block so you can still scroll without jank while cargo check runs in the background. Additionally, having inline compiler suggestions with one-click acceptance really helps while you’re learning (that’s a clickable button that automatically replaces your text with the suggestion).

                                            1. 5

                                              Berlin made International Women’s Day a public holiday, so I’ll probably celebrate by reading some Margaret Atwood. Other than that I have a side project in Rust that I’d like to complete. It’s nice to get back into “stack and heap” languages after writing Python for years.

                                              1. 0

                                                I suggest watching Angel Number 9 by Roberta Findlay instead. It’s fucking wild.

                                              1. 7
                                                • Professionalism. This means treating coworkers with respect, treating customers with respect, treating yourself with respect, and providing adult feedback when you don’t feel respected. Respect comes in many forms.
                                                • Service to your team. Typically this comes in the form of volunteering to do tasks that don’t yield immediate gains, but indirectly help your surroundings. This can mean signing up to do hiring interviews, delving into an unpopular cleanup task, or agreeing to cover for someone on paternity leave. Note: going back to “treating yourself with respect,” you can’t let these tasks consume you (especially if you keep doing them in lieu of “what you get paid for”). A proper balance between scheduled work, unscheduled work, and life is critical.
                                                • Keeping your eyes open. It’s extremely easy to become established and set in your ways, especially if you’ve been at the same company or working on the same codebase for a long time. I’m not saying you should pay attention to every new fad, but it’s good to keep tabs on general trends in the industry. Outside of switching jobs, good sources are: Lobsters itself, conferences, new hires (especially ones new to the industry), and trying new programming languages/tooling “just to see what it’s like.”
                                                • Keeping in touch with the world outside of programming. Maintaining hobbies or interests outside of programming spawns creativity and keeps you out of bubbles. Steve Jobs’ classic example was the calligraphy class he took that inspired him to have great typography on the original Mac. But you can’t force it! You never know when these things will intersect, so don’t take up photography because it will help your programming. You need to find things which genuinely drive you, and the connections will form later.
                                                • Avoiding Second System Effect. The second system effect pops up everywhere, and I’m sure you will see it kill a promising project sometime during your career. Avoiding it means identifying when “perfection” is preventing a project from shipping, and also identifying when “perfection” isn’t actually perfection at all – sometimes it can just be a strong preference and orthogonal to the actual problem at hand. Rearranging deck chairs on the Titanic is perhaps a bit dramatic, because this kind of waste happens even for successful projects.
                                                • Not being a hero. Hero developers are the ones who didn’t heed the advice given in the “service to your team” section. They always jump in and fix emergencies, often gleefully so. It isn’t even limited to fixing application errors, heroes can swoop in and fix planning emergencies or fill in gaps in documentation and training. It’s awesome that they have capabilities to do that, but as the linked article says: “crisis management is not the same as crisis prevention,” and “having a hero on call makes those real problems seem less urgent than they really are.”
                                                • Properly handling Unicode. Yes it’s hard. We need to do it anyway.
                                                • Service to your community. There are a lot of imbalances in the tech industry: minorities, women, and LGBT people face a lot of structural imbalances (in addition to outright discrimination and harassment). If you’re in a position of privilege, it’s on you to read up about these imbalances and how to correct them. In addition to being the right thing to do, not doing so means keeping qualified people out of jobs and driving qualified people away from the jobs they have. It’s a long road but we can get there.
                                                1. 3

                                                  Thanks for such a thoughtful list. Even when good is defined qualitatively, as you have, it’s interesting to consider how to measure success towards manifesting those qualities. I think it’s interesting not as a way of measurement for the sake of measurement, or measuring to keep score, but as a way to identify unrealized potential or areas to improve.

                                                1. 2

                                                  If you’re interested in things like this, you may want to check out the excellently-written Statistics Done Wrong by Alex Reinhart.

                                                  1. 2

                                                    The company I work at has semiregular talks for developers and I’ve been meaning to write “How to Not Be Afraid of Large (and Small) Numbers” for a while. I want to explain back-of-the-envelope estimation “at the edges” and why it’s useful. “At the edges” means what might happen if you take some variable of a problem to the limit. A practical example for my team would be asking “how many servers could we ever possibly use?,” then seeing what it would cost to actually do that. Thinking along these lines reveals hidden bottlenecks and contours in the problem, and being comfortable doing these thought experiments lets you brainstorm and navigate scaling issues better.
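
                                                     A minimal sketch of that kind of “at the edges” estimate; every constant here is a made-up assumption for illustration, not a real measurement or price:

                                                     ```rust
                                                     // Back-of-the-envelope estimate "at the edges": push the user
                                                     // count to an extreme and see what the bill would be.
                                                     fn main() {
                                                         let users: u64 = 10_000_000;              // assumed extreme user count
                                                         let reqs_per_user_per_day: u64 = 100;     // assumed traffic per user
                                                         let reqs_per_server_per_sec: u64 = 1_000; // assumed server capacity

                                                         // Average requests per second across a day (86,400 seconds).
                                                         let rps = users * reqs_per_user_per_day / 86_400;
                                                         // Servers needed, rounded up.
                                                         let servers = (rps + reqs_per_server_per_sec - 1) / reqs_per_server_per_sec;
                                                         // Assume $500 per server per month.
                                                         let monthly_cost = servers * 500;

                                                         println!("~{} rps, ~{} servers, ~${}/month", rps, servers, monthly_cost);
                                                     }
                                                     ```

                                                     Even a crude pass like this shows which assumption dominates the answer (here, per-server throughput), which is exactly the kind of hidden bottleneck the exercise is meant to surface.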

                                                    1. 12

                                                      Author here, happy to be thoroughly corrected on German or linguistics in general.

                                                      1. 2

                                                        Not a correction, but you may want to learn the reason why prepositions are so difficult: they aren’t Indo-European. Most of the nouns and verbs we use have some root in Indo-European, but the prepositions were mostly (entirely?) created after the great divisions, so there’s less reason for them to pair nicely with prepositions in other Indo-European languages.

                                                        1. 2

                                                          The claim that “prepositions aren’t indoeuropean” is poorly-defined and incorrect in most reasonable more-specific senses. Many prepositions in English, German, and in other modern Indo-European languages are straightforwardly traceable to Proto-Indo-European roots. The English prepositions off and of and their German cognate ab, for instance, are reflexes of the reconstructed PIE root *apo, which also yields Greek απο and Latin ab (and then Spanish a). The common English preposition in, which is cognate with similarly-pronounced German in and Latin in (and then Spanish en) are reflexes of a PIE root *en meaning, more or less, “in”.

                                                          It’s true that not every single preposition in English or any other modern Indo-European language is traceable to a PIE root, and that some roots that yield prepositions in modern IE languages were not necessarily prepositions in PIE (if PIE even had a distinct syntactic category of prepositions), and that some prepositions in English or German are cognate with morphemes in other Indo-European languages that are not necessarily prepositions (German um for instance is cognate with the Latinate prefix ambi-, which is not a preposition in Latin). But I don’t think any of these facts are inconsistent with the claim that prepositions in modern Indo-European languages by and large are shared Indo-European vocabulary, traceable to the proto-language.

                                                          1. 1

                                                            I took a random set of prepositions just now (the German accusative prepositions durch, für, gegen, ohne, um, for no particular reason other than having the Kluge dictionary on a shelf in front of me) and looked them up. They are all traceable a little over a thousand years back; one has much older roots and another may have, but those much older roots don’t seem to be Indo-European prepositions.

                                                            When you write “not every… IE root”, are you suggesting that most prepositions are traceable to an IE preposition?

                                                          2. 1

                                                            Wow! Super interesting, thank you for the info.

                                                            1. 2

                                                              I saw this really cool diagram once somewhere with prepositions in different languages, including Finnish which doesn’t have prepositions. The idea was, iirc, to demonstrate conceptualization.

                                                              Couldn’t find it now, but this one on dativ/akkusativ is pretty neat too ;)

                                                        1. 3

                                                          I’ve heard homebrew called an awful package manager by some, compared to, say, apt. Is this true, and if so, why?

                                                          1. 11

                                                            It’s fucking ridiculous how bad this thing is, and the issues around how it’s run are almost as bad as the technical ones.

                                                            For years it was a source-only tool - it did all compilation locally. Then they caught up to 1998 and realised that the vast majority of people want binary distribution. So they added pre-compiled binaries, but never bothered to adapt the dependency management system to take that into account.

                                                            So for instance, if you had two packages that provide the same binary - e.g. mysql-server and percona-server (not sure if those are their exact names in Homebrew) - and then wanted to install, say, “percona-toolkit” as well, which has a source requirement of “something that provides mysql libs/client”, the actual dependency in the binary package would be whatever had been installed on the machine it was built on. This manifested itself in an issue where you couldn’t install both percona-server and percona-toolkit from binaries.

                                                            When issues like this were raised - even by employees of the vendor (e.g. in https://github.com/Homebrew/homebrew-core/issues/8717) - the official response was “not our problem, buddy”.

                                                            No fucks given, just keep hyping the archaic design to the cool kids.

                                                            I haven’t even gotten into the issue of permissions (what could go wrong installing global tools with user permissions?) or the ridiculous way package data is handled on end-user machines (git is good for some things; this is not one of them).

                                                            If you get too vocal about the problems the tool has, someone (in my case, the founder of the project) will ask you to stop contacting them (these were public tweets with the homebrew account referenced) about the issues.

                                                            1. 4

                                                              It’s good, easy to use and has a big community with a well-maintained list of packages. It’s the main package manager for macOS. It’s been in the macOS ecosystem for a long time and is much better and easier to use than the older solutions we had, such as MacPorts. A cool thing is it has a distinction between command line tools and libraries vs desktop applications (called casks).

                                                              For example, you can install wget with brew install wget, but you’d install Firefox with brew cask install firefox.

                                                              On Linux I would stick to the system’s default package manager, but maybe it’s worth giving homebrew a try, I guess.

                                                              1. 3

                                                                A cool thing is it has a distinction between command line tool and libraries vs desktop applications (called casks)

                                                                Why is that cool? It seems pretty pointless to me.

                                                                1. 2

                                                                  Yeah, the distinction between them at install time isn’t that cool, but the fact that it supports installing desktop apps is nice. No need for different tooling the way snap requires. And you get to know where things will be installed according to the command used: desktop apps usually end up in /Applications on macOS, and CLI tools are symlinked into /usr/local/bin.

                                                              2. 4

                                                                Pro:

                                                                • Has every package imaginable (on Mac)
                                                                • Writing your own formulae is stupidly easy

                                                                Con:

                                                                • You can only get the latest version of packages due to how the software repo works.
                                                                • It’s slower than other package managers

                                                                Meh:

                                                                • Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).
                                                                • Might be too easy to add formulae. Everyone’s small projects are in homebrew.
                                                                • The entire system is built on Ruby and Git, so it inherits any problems from them (esp Git).
                                                                1. 1

                                                                  Someone told me that it doesn’t do dependency tracking, does that tie in with:

                                                                  Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).

                                                                  Also, I’m not very knowledgeable on package managers, but not being able to get older versions of a package and basing everything on Git seems kind of a questionable choice to me. Also, I don’t like Ruby, but that’s a personal matter. Any reason they chose this?

                                                                  1. 1
                                                                    • You can only get the latest version of packages due to how the software repo works.
                                                                    • Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).

                                                                    This is very similar to how Arch Linux’s pacman behaves. Personally, I would put both of these under the “pro” header.

                                                                  2. 4

                                                                    The author of Homebrew has repeatedly said this himself (e.g. in this quora answer). He usually says the dependency resolution in Homebrew is substantially less sophisticated than apt’s.

                                                                    Homebrew became successful because it didn’t try to be a Linux package manager. Instead it generally tries to build on top of MacOS rather than building a parallel ecosystem. The MacOS base distribution is pretty big, so its dependency resolution doesn’t need to be that sophisticated. On my system I have 78 Homebrew packages; of those, 43 have no dependencies and 13 have just 1.

                                                                    Homebrew Cask also supports MacOS native Application / installer formats like .app, .pkg, and .dmg, rather than insisting on repackaging them. It then extends normal workflows by adding tooling around those formats.

                                                                    So, yes, Homebrew isn’t as good at package management compared to apt, because it didn’t originally try to solve all the same problems as apt. It’s more of a developer app store than a full system package manager.

                                                                    Linuxbrew still doesn’t try to solve the same problems. It focuses on the latest, up-to-date versions of packages and home-directory installations. It doesn’t try to package an entire operating system, just individual programs. I doubt you could build a Linux distribution around the Linuxbrew packages, because it doesn’t concern itself with bootstrapping an operating system. Yes, it only depends on glibc and gcc on Linux, but that doesn’t mean any of the packages in Linuxbrew are set up to work together the way they are on an actual Linux distribution.

                                                                    1. 2

                                                                      I don’t want to rag on the homebrew maintainers too much (it’s free software that’s important enough to me that it’s probably the second thing I install on a new mac), but I do have one big UX complaint: every time I run a homebrew command, I have no idea how long it will take. Even a simple brew update can take minutes because it’s syncing an entire git repo instead of updating just the list of packages.

                                                                      brew install might take 30 seconds or it might take two hours. I have no intuition how long it will take before I run it and am afraid to ctrl-c during the middle of a run. I’ll do something like brew install mosh and suddenly I’m compiling GNU coreutils. Huh?

                                                                      While I wish they’d fix this variance head-on, at minimum I’d appreciate it if it did something like apt and warned you before a major operation. Admittedly apt only does this with disk size, but homebrew could store rough compile times somewhere and ask if I’d like to continue.

                                                                      1. -3

                                                                        I think it’s awful because it’s written in Ruby and uses Github as a CDN.

                                                                        1. 0

                                                                          This comment isn’t helpful. Please be more constructive.

                                                                          1. 0

                                                                            Who are you to judge? He wanted opinions, I gave mine.

                                                                            The Ruby VM is awfully slow and using Github as a CDN is so bad it requires no elaboration.

                                                                            1. 3

                                                                              Saying it’s slow is much more helpful than what you said above.

                                                                              1. 1

                                                                                Yeah.

                                                                      1. 2

                                                                        This is my first FOSDEM, so I don’t really know what I’m in for. Planning on hanging out in the go and rust rooms.

                                                                        1. 2

                                                                          Have a backup plan, those rooms will be super full all the time.

                                                                          1. 1

                                                                            It will be my first too

                                                                          1. 1

                                                                            Related, from 2008: https://lwn.net/Articles/299483/

                                                                            1. 3

                                                                              This is quite old (2003) – today emoji would definitely figure into the “absolute minimum”.

                                                                              1. 5

                                                                                I’ve read this article a few times through the years…

                                                                                I think the most important thing Spolsky is trying to do is get programmers out of the ASCII mindset - one byte, one character. Once you’ve made sure your app can handle Unicode, emoji just comes along as a bonus.

                                                                                As an aside… the Unicode consortium has expressed some dismay about all the attention and money lavished on the “trivial” emoji space, but in the larger picture I think the impression that “Unicode is great, it gives us emoji!” is a net positive for the organisation.

                                                                                1. 4

                                                                                  Agreed – if emoji can get Amerocentric¹ programmers to care at all about non-ASCII support it’s a win for everyone. It doesn’t solve things like right-to-left problems, but it goes a long way toward making software accessible. Unfortunately, proper Unicode still seems like a chore instead of a basic feature. Even Rust, which goes as far as discouraging you from iterating over “chars” in the standard library, admits that “[g]etting grapheme clusters from strings is complex, so this functionality is not provided by the standard library.”

                                                                                  I look forward to a time when handling these things is the default and “iterating over chars” is difficult. Maybe it’s not possible given the varied features of human language and orthography, but I think there is still a long way we can go with the technology we currently have.

                                                                                  ¹ I know “Amerocentric” technically applies to the entire continents of North and South America, but I couldn’t find a better word to capture the sense of “programmers who only consider en_US when designing and testing software.”
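The grapheme-cluster point above can be seen even in stdlib Python (this is my own illustrative sketch, not from the thread): iterating “chars” really means iterating code points, and two strings that render identically can disagree on length.

```python
import unicodedata

# "é" can be one code point (U+00E9) or two (e + U+0301 combining acute).
composed = "\u00e9"
decomposed = "e\u0301"

# Both render identically, but naive "char" (code point) counting disagrees:
assert composed != decomposed
assert len(composed) == 1
assert len(decomposed) == 2

# Normalization reconciles them at the code-point level...
assert unicodedata.normalize("NFC", decomposed) == composed
```

…but actual grapheme-cluster segmentation (what a user perceives as one “character”, per UAX #29) is not in Python’s standard library either, which is exactly the gap the Rust docs admit to.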

                                                                                  1. 4

                                                                                    I’ve recently been doing some work on Unicode stuff for some commandline tools I’ve been writing, and I found the Unicode specs to be fairly hard to read, and being spread out over multiple documents isn’t helping either. You also need some background knowledge about different writing systems of the world.

                                                                                    None of it is insurmountably hard as such – k8s is probably more complex – but it takes some time to grok and quite some effort to get right. Perhaps we should treat Unicode like cryptography: “don’t implement it yourself when it can be avoided”. I could add RTL support, but without actual knowledge of how an Arabic-speaking person uses a computer I’ll probably make some stupid mistake; for example, as I understand it you write from right-to-left in Arabic, except for numbers, which are written left-to-right.

                                                                                    I haven’t even gotten to vertical text yet. I have no idea how to deal with that (yet).

                                                                                    I know “Amerocentric” technically applies to the entire continents of North and South America, but I couldn’t find a better word to capture the sense of “programmers who only consider en_US when designing and testing software.”

                                                                                    Anglocentric? As it’s a problem that extends beyond just the United States (CA, UK, AU, NZ, many African countries). Many non-English programmers do a lot of their work in English and have similar biases. Especially in Europe, where most scripts are covered by extended ASCII/ISO-8859/cp1252.

                                                                                    1. 4

                                                                                      In Arabic, everything’s written right-to-left, even the numbers.

                                                                                      When Arabic numerals were imported into Europe, they were physically written left-to-right, but to this day every school child does calculations from right-to-left (like addition, or multiplication) because that’s just how Arabic numerals work.

                                                                                      The story continues with computers, too: many computers were designed in European-culture countries that were comfortable with numbers working in the opposite direction from everything else, and so they used big-endian byte-ordering. Some smaller, cheaper computers couldn’t justify the cost of making the computer follow the designer’s conventions, so they went with the simpler, more straight-forward implementation and came up with little-endian byte-ordering, taking Arabic numerals back to their roots.
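The big-endian vs. little-endian contrast can be made concrete with Python’s struct module (my own example, not from the comment): the same 32-bit integer, laid out in both byte orders.

```python
import struct

value = 0x0A0B0C0D

# Big-endian: most significant byte first (the "European" digit order).
big = struct.pack(">I", value)

# Little-endian: least significant byte first (Arabic numerals "back to
# their roots", as the comment puts it).
little = struct.pack("<I", value)

assert big == b"\x0a\x0b\x0c\x0d"
assert little == b"\x0d\x0c\x0b\x0a"
assert little == big[::-1]
```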

                                                                                    2. 2

                                                                                      Yeah, crusty academics can witter on harmlessly about cuneiform and Linear B, but if tweens can’t send poop emojis to each other you can bet that bug report will be actioned.

                                                                                    3. 1

                                                                                      Once you’ve made sure your app can handle Unicode, emoji just comes along as a bonus.

                                                                                      Unless you’re MySQL ;-)
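For anyone who missed the joke: MySQL’s legacy “utf8” charset stores at most 3 bytes per character, while many emoji need 4 bytes in UTF-8 (hence the later utf8mb4 charset). A quick illustration in Python (my own, stdlib only):

```python
# PILE OF POO (U+1F4A9) sits outside the Basic Multilingual Plane,
# so it needs 4 bytes in UTF-8 - one more than MySQL's old "utf8" allows.
poop = "\U0001F4A9"
assert len(poop.encode("utf-8")) == 4

# A BMP character like "é" fits comfortably in 3 bytes or fewer.
assert len("\u00e9".encode("utf-8")) == 2
```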

                                                                                  1. 5

                                                                                    I’d be interested to see a side-by-side comparison of kitty to alacritty. In particular, I’ve been using alacritty at work for a while and while it’s barebones at the moment, it’s exceptionally fast (which is probably my core feature for terminal emulators). That said, kitty looks like a fine emulator.

                                                                                    1. 6

                                                                                      Honest question: what need do you have for a fast terminal emulator?

                                                                                      1. 7

                                                                                        I have a minor obsession with input latency and scroll jank. It seems to creep up everywhere and is hard to stamp out (Sublime Text is a shining counterexample). I noticed a bit of weird input latency issues when using Terminal.app (purely anecdotal), and haven’t seen the same thing since using alacritty. So that’s the need I have for a fast emulator, it enables a smooth input and output experience.

                                                                                        1. 3

                                                                                          I am sensitive to the same.

                                                                                          This is what kept me on Sublime Text for years, despite open source alternatives (Atom, VS Code and friends). I gave them all at least a week, but in the end the minor latency hiccups were a major distraction. A friend with similar sensitivity has told me that VS Code has gotten better lately, I would give it another go if I weren’t transitioning to Emacs instead.

                                                                                          I sometimes use the Gmail web client and, for some period of time, I would experience an odd buffering of my keystrokes and it would sometimes completely derail my train of thought. It’s the digital equivalent of a painful muscle spasm. Sometimes you ignore it and move on, but sometimes you stop and think “Did I do something wrong here? Is there something more generally broken, and I should fear or investigate it?”

                                                                                          1. 1

                                                                                            Web-based applications are particularly bad, because often they don’t just buffer, but completely reorder my keystrokes. So I can’t just keep typing and wait for the page to catch up; I have to stop, otherwise I’m going to have to do an edit anyway.

                                                                                        2. 3

                                                                                          I have to admit, I thought for certain this was going to be Yet Another JavaScript Terminal but it turns out it’s written in Python. Interesting.

                                                                                          Anyway I would have a hard time believing it’s faster than xfce4-terminal, xterm, or rxvt. It’s been a long time since I last benchmarked terminal emulators, maybe I smell a weekend project coming on.

                                                                                          1. 6

                                                                                            kitty is written in roughly half C and half Python; Alacritty is written in Rust.

                                                                                            There were some benchmarks done for the recent Alacritty release that added scrollback, which include kitty, urxvt, termite, and st. https://jwilm.io/blog/alacritty-lands-scrollback/#benchmarks

                                                                                            1. 2

                                                                                              I just did a few rough-and-ready benchmarks on my system. Compared to my daily driver (xfce4-terminal), kitty is a little under twice as fast, alacritty and rxvt are about three times as fast. If raw speed was my only concern, I would probably reach for rxvt-unicode since it’s a more mature project.

                                                                                              Alacritty is too bare-bones for me but I could be sold on kitty if I took the time to make it work/behave like xfce4-terminal.

                                                                                              1. 1

                                                                                                I like xfce4-terminal, but it renders fonts completely wrong for me. It’s most noticeable when I run tmux and the solid lines are drawn with dashes. If I pick a font where the lines are solid, then certain letters look off. It’s a shame, because other vte-based terminals (e.g. gnome-terminal) tend to be much slower.

                                                                                          2. 2

                                                                                            For me it’s the simple stuff that gets annoying when it’s slow. Tailing high-volume logs. less-ing/cat-ing large files. Long scrollbacks. Makes a difference to my day by just not being slow.

                                                                                            1. 2

                                                                                              I don’t care that much about the speed it takes to cat a big file, but low latency is very nice and kitty is quite good at that. I cannot use libvte terminals anymore, they just seem so sluggish.

                                                                                              1. 2

                                                                                                For one thing, my workflow involves cutting and pasting large blocks of text. If the terminal emulator can’t keep up, blocks of text can come through out of order etc, which can be a bad time for everyone involved.

                                                                                              2. 3

                                                                                                I’m on macOS.

                                                                                                I used alacritty for a while, then switched to kitty as I’d get these long page redraws when switching tmux windows—so kitty is at least better for me in that regard. Both have similar ease of configuration. I use tmux within both, so I don’t use kitty’s scrolling or tabs. The way I was using them, they were more or less the same.

                                                                                                I’m going to try alacritty again to see if it’s improved. I’d honestly use the default Terminal app if I could easily provide custom shortcuts (I bind keys to switching tmux panes, etc).

                                                                                                1. 4

                                                                                                  I came back to Alacritty on MacOS just the other day, after last trying it maybe 6 months ago and finding it “not ready” in my head. It’s been significantly updated: there’s a DMG installer (and it’s in brew), it’s a lot more polished overall, and it works really well and really fast. No redraws on tmux switches. There’s a weird redraw artifact while resizing the main window, but it snaps back the moment you stop, and it doesn’t bother me much. Using it as a full-time Terminal replacement right now, liking it so far, will see how it goes!

                                                                                                  1. 1

                                                                                                    Good to know! I’ve installed it via brew now and double-checked my old config. My font (as in, not the default Menlo. I’m using a patched Roboto Mono) looks a bit too bold, so just gotta figure out what’s wrong there.

                                                                                                    1. 2

                                                                                                      They’ve updated config files with additional info about aliasing and rendering fonts on Mac. So take a look at that if you are using your old config. It’s not a bad idea to start from scratch.

                                                                                                      1. 1

                                                                                                        Thanks for the tip! I did start from scratch and moved over changes bit by bit, but I’ll have to check the new macOS specific lines.

                                                                                                  2. 3

                                                                                                    Cool, thanks for your input! I also use tmux, and I haven’t seen anything like what you described (I also don’t really use tmux panes, only tabs). I know there has been a longstanding vim + tmux + osx bug as well, but I haven’t used vim proper in a while.

                                                                                                    1. 2

                                                                                                      I think that’s my exact problem (turns out I’m even subscribed to the issue haha). I use neovim so I think it is/was applicable to both

                                                                                                  3. 1

                                                                                                    Do any of those really measure up when benchmarked?

                                                                                                    I remember doing some writing to stdout, and alacritty turned out to be slower than, say, gnome-terminal or whatever.

                                                                                                    Might’ve been a bug with my Intel graphics card though, I don’t remember too well.

                                                                                                  1. 3

                                                                                                    Docker can fail to load the container, bundler can fail while installing some dependencies, and so can git fetch. All of those failures can be retried.

                                                                                                    If you are retrying an action connected to an external service (whether it’s something you run or something on the internet), please, please implement exponential backoff (here is a personal example). I will never forget the phrase “you are threatening to destabilize Git hosting at Google.”
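The shape of exponential backoff with “full jitter” is small enough to sketch (the retry_with_backoff name and parameters below are mine, not from any particular library): each failed attempt doubles the backoff window, and the actual sleep is a random amount within it so retrying clients don’t stampede in lockstep.

```python
import random
import time


def retry_with_backoff(op, max_attempts=5, base=1.0, cap=60.0):
    """Call op() until it succeeds, sleeping between failures.

    The sleep before attempt n is drawn uniformly from
    [0, min(cap, base * 2**n)] - exponential backoff with full jitter -
    so a struggling service isn't hammered at a fixed rate.
    """
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
            time.sleep(delay)
```

A caller would wrap the flaky action, e.g. `retry_with_backoff(lambda: fetch(url))`; the cap keeps the worst-case sleep bounded, and re-raising on the final attempt means genuine outages still fail loudly instead of retrying forever.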