• Desktop/Laptop: Thunderbird (not entirely happy, but I’ve hopped around enough to know I’m not happy with most clients, which is also why I’m looking through this thread)
    • Phone: K-9 Mail (somewhat happy with it; it has some rough corners, but it’s better than the others)
    • Work: Gmail web interface (and app), because that works well in this environment

    Related question: how do TUI email clients handle images these days? I mostly care about having an easy way to open them when they are inline with text surrounding them, without having to guess which image is which.

      1. 10

        You might also like alas if in Squire. :)

        journey sq_dump(args) {
            arg = run(args.a1)
            if kindof(arg) == 'string' { proclaim('String(' + arg + ')') }
            alas if kindof(arg) == 'number' { proclaim('Number(' + string(arg) + ')') }
            alas if kindof(arg) == 'boolean' { proclaim('Boolean(' + string(arg) + ')') }
            alas if kindof(arg) == 'unbeknownst' { proclaim('Null()') }
            alas { dump(arg) }
            reward arg
        }

          Haha amazing. ‘kindof’ could also have been ‘natureof’

        2. 7

          Ada, Perl, Ruby and a couple of languages inspired by them.

          When I was much younger and jumping between programming languages a lot, that felt like the main thing I always got wrong: elif, elsif, elseif and else if.

          The last one, despite being the most to type, feels the most logical to me: it combines what’s already there (else and if) and is also the closest to natural language/English.


            It should really be else, if or else; if to be even more like English and to really make it hard for parsers.


              x equals thirty-three or else... if your hat is green, of course. Good luck, parser! o7


              After discovering cond in Lisp, I wished every language had it instead of if, else and the various combinations of those two.




                Ada uses elsif. I wish all these elif, elsif, elseif and else if keywords were interchangeable.

              1. 3

                The interview I remember coming really close to perfect: we discussed the company’s infrastructure, what it is made of, how to improve it, and what my experience with that is. It came really close to what a kick-off meeting on a project would be. For various reasons on both ends I did not end up in that position, but it had no school-like tests, only “things I’d actually do”, and essentially no wasted time (because it effectively was a kick-off meeting). On top of that I spoke with both the manager and potential co-workers, all in a very face-to-face way (despite it being held remotely).

                I have also been on the other side, interviewing people, where I tried to replicate that. Sadly there was some outsourced pre-screening that essentially brought forward candidates who expected standard questions to be asked and standard answers to be given, school-like. Also, and here again I’d blame the external pre-screening, we ended up with candidates who were in the right realm, but their and our expectations about the working relationship didn’t really match, and that became clear to both sides rapidly through these interviews.

                In general - for whatever reason - the majority of good interviews were remote, instead of on-site (at least initially). It could be a fluke, but I like to imagine that it has something to do with respect of the other person’s time.

                I’ve also heard about total opposites though: interviews where both sides were excited about working together until some formality or strict company structure became a problem, ranging from “You have worked a decade, but we require a degree, so you have to take a junior position” (after being offered a lead position during the technical interview) to “You can work completely remotely all the time, but you have to move to the city of the office” (with no reason given other than company policy; nothing legal, etc.).

                I think the most important part of getting good people is respect. Don’t waste people’s time, be open about your company (both the good and the bad parts), and don’t put people in uncomfortable situations. If you do that during the interview, why would anyone expect it to be different during actual work? Treat candidates like they were already part of the company. I think this gives the best picture and shows both sides whether there’s a match or not.

                There’s nothing wrong with figuring out that it doesn’t make sense.

                Also, that way you have a bigger chance of finding people through referrals. If a potential employee takes another job or simply isn’t a fit, they might still send someone else your way; if you coldly cut ties, that’s never going to happen. And it does happen. So don’t coldly cut ties or switch off the respect just because there is no match.

                1. 1

                  Maybe this question is silly or ignorant, but I never understood the point of Unicode identifiers in languages when the languages themselves are English (for ... in ..., for example).

                  Doesn’t that cause more problems than benefits? Did anyone here ever use them in a real-life project?

                  1. 1

                    I’m curious about this too. I remember wondering the same when Elixir got it.

                    The only time I’ve come close to something like this was when I was learning to program and had difficulty parsing the language. Keeping my own stuff in my native language (which does include three non-ASCII characters) helped me understand which words were part of the language and which were variables and functions I’d made myself.
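                    A minimal sketch of what that looks like, in Python terms (non-ASCII identifiers are legal there per PEP 3131; the names here are just illustrative):

```python
# Python 3 identifiers may use non-ASCII letters (PEP 3131),
# so the keywords stay English while your own names don't have to.
päivä = "Tuesday"  # Finnish for 'day'; purely illustrative

def tervehdi(nimi):
    # 'greet(name)': your own functions stand out from the builtins
    return "Hello, " + nimi + "!"

print(tervehdi(päivä))  # prints: Hello, Tuesday!
```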

                  1. 15

                    I think the key insight here is that container images (the article confuses images and containers, a common mistake that pedants like me will rush to point out) are very similar to statically linked binaries. So why Docker/container images and why not ELF or other statically linked formats?

                    I think the main answer is that container images have a native notion of a filesystem, so it’s “trivial” (relatively speaking) to put the whole user space into a single image, which means that we can package virtually the entire universe of Linux user space software with a single static format whereas that is much harder (impossible?) with ELF.
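                    As a sketch of what that buys you (standard Debian / Docker Hub names; the package chosen is arbitrary):

```dockerfile
# The base layer is an entire Linux user space, not a single binary;
# anything you can install into the filesystem ships with the image.
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends nginx
CMD ["nginx", "-g", "daemon off;"]
```

                    Getting the equivalent out of one ELF file would mean statically linking nginx plus every config file and loadable module it expects to find on disk.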

                    1. 4

                      I can fit a jvm in a container! And then not worry about installing the right jvm in prod.

                      I used to be a skeptic. I’ve been sold.

                      1. 2

                        Slightly off topic - but JVM inside a container becomes really interesting with resource limits. Who should be in charge of limits, JVM runtime or container runtime?

                        1. 7

                          Gotta be the container runtime (or the kernel or hypervisor above it) because the JVM heap size limit is best-effort. Bugs in memory accounting could cause the process to use memory beyond the heap limit. Absent that, native APIs (JNI) can directly call malloc and allocate off-heap.

                          Would still make sense for the container runtime to tell the JVM & application what the limits on it currently are so it can tailor its own behaviour to try to fit inside them.
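                          Concretely, that split might look something like this (the image name is made up; the flags are real HotSpot options, and modern JDKs are container-aware by default via -XX:+UseContainerSupport):

```shell
# The container runtime owns the hard cap; the JVM then sizes its heap
# as a fraction of that cap rather than of the host's RAM.
docker run --memory=512m my-jvm-app \
    java -XX:MaxRAMPercentage=75.0 -jar app.jar
```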

                          1. 4

                            It’s easy: the enclosing layer gets the limits. Who should set the resource limits? ext4 or the iron platter it’s on?

                            1. 2

                              What’s the enclosing layer? What happens when you have heterogeneous infrastructure? Legacy applications moving to the cloud? Maybe in theory it’s easy, but in practice it’s much tougher.

                            2. 2

                              Increasingly the JVM is setting its own constraints to match the operating environment when “inside a container”.

                          2. 4

                            Yes, layers as filesystem snapshots enable a more expressive packaging solution than statically linked alternatives. But it’s not just filesystems: runtime configuration (variables through ENV, invocation through CMD) makes the format even more expressive.
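                            A fragment showing both kinds of knob (values purely illustrative):

```dockerfile
# Filesystem snapshot from build time...
COPY server /srv/server
# ...plus runtime configuration carried in the image metadata:
ENV LOG_LEVEL=info
# (overridable at run time with: docker run -e LOG_LEVEL=debug ...)
CMD ["/srv/server", "--port", "8080"]
```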

                            p.s. I have also updated the post to say “container images”

                            1. 3

                              And we were able to do that with virtualization for at least 5-10 years prior to Docker. Or do you think that also packaging the kernel is too much?

                              Anyway, I do not think that a container having the notion of a filesystem is the killer feature of Docker. I think that moving the deployment code (installing a library, for example) close to the compilation of the code helped many people and organizations who did not have the right tooling prior to that. For larger companies who had systems engineers, cgroups mostly provided the security part, because packaging was solved decades before Docker.

                              1. 1

                                IMO it’s not the kernel but all of the supporting software that needs to be configured for VMs but which comes for ~free with container orchestration (process management, log exfiltration, monitoring, sshd, infrastructure-as-code, etc).

                                Anyway, I do not think that a container having the notion of a filesystem is the killer feature of Docker. I think that moving the deployment code (installing a library, for example) close to the compilation of the code helped many people and organizations who did not have the right tooling prior to that.

                                How do you get that property without filesystem semantics? You can do it with toolchains that produce statically linked binaries, but many toolchains don’t support that, and of those that do, many important projects don’t take advantage of it.

                                Filesystem semantics enable almost any application to be packaged relatively easily in the same format which means orchestration tools like Kubernetes become more tenable for one’s entire stack.

                              2. 3

                                I noticed Go now supports including, in its essentially static binary, a virtual filesystem instantiated from a filesystem tree specified at compile time. In that scenario it occurs to me that containerization is perhaps not even necessary, which would expose read-only shared memory pages to the OS across multiple processes running the same binary.

                                I don’t know whether, in the containerization model, the underlying/orchestrating OS can identify identical read-only memory pages and exploit that sharing.

                                1. 2

                                  I think in the long term containers won’t be necessary, but today there’s a whole lot of software and language ecosystems that don’t support static binaries (and especially not virtual filesystems) at all and there’s a lot of value in having a common package type that all kinds of tooling can work with.

                                  1. 2

                                    As a packaging mechanism, embedded files in Go work OK in theory (following the single-process pattern). In practice, most Go binary container images are empty (FROM scratch + certs) anyway. There are lots of environment-dependent files you want at runtime (secrets, environment variables, networking configuration) that are much easier to add declaratively to a container image than to bake in via a recompile.

                                  2. 2

                                    I think the abstraction around images is a bit leaky. With Docker you’re basically forced to give an image a name in a system-wide registry so that you can then run the image as a container.

                                    I would love to be able to say something like… “build this image as this file, then spin up a container using this image” without the intermediate tagging step (why? because it allows building workflows that don’t care about your current Docker state). I know you can just kinda namespace stuff, but it really bugs me!

                                    1. 3

                                      Good practice is addressing images by their digest instead of a tag, using the @ syntax. But I agree: the registry has always been a weird part of the workflow.
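                                      Concretely (the digest value is a placeholder, not a real one):

```shell
# A tag is a mutable pointer:
docker pull nginx:1.21
# A digest pins exact image content (the '@' syntax):
docker pull nginx@sha256:<digest>
```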

                                      1. 1

                                        addressing images by their digest instead of a tag using the @ syntax.

                                        Be careful with that. The digest of an image can change as you push/pull it between different registries. The problem may have settled out by now, but we were bitten by digest changes across different releases of Docker’s registry image, and between the Docker registry and Artifactory.

                                        I’m not sure if there’s a formal standard for how that digest is calculated, but it certainly used to be (~2 years back) very unreliable.

                                        1. 1

                                          Oh I wasn’t aware of that! That could let me at least get most of the way to what I want to do, thanks for the pointer!

                                      2. 2

                                        So why Docker/container images and why not ELF or other statically linked formats?

                                        There are things like gVisor and binctr that work this way, as do some things like Emscripten (for JS/WASM).

                                        1. 2

                                          I really hope for WASI to pick up here. I used to be a big fan of CloudABI, which now links to WASI.

                                          It would be nice if we could get rid of all the container (well actually mostly Docker) cruft.

                                        1. 2

                                          A shorter version, skipping signing, if you do this only locally on a trusted system.

                                          The commands are based on the article, but partly shortened. For example, if you don’t specify otherwise, your only ports tree will be named default.

                                          # Install poudriere
                                          pkg install poudriere
                                          # Create a jail with the right FreeBSD version in it
                                          poudriere jail -c -j 12-2x64 -v 12.2-RELEASE
                                          # Create a ports tree
                                          poudriere ports -c
                                          # -> Set up package building options according to the article, by creating make.conf
                                          # -> Create a port-list file with the ports you want to build (format like www/nginx, one per line)
                                          # Build the ports
                                          poudriere bulk -j 12-2x64 -f /path/to/port-list
                                          # Disable the default repo by creating /usr/local/etc/pkg/repos/freebsd.conf containing:
                                          FreeBSD: {
                                            enabled: no
                                          }
                                          # Create a new repo file (e.g. /usr/local/etc/pkg/repos/poudriere.conf) containing:
                                          poudriere: {
                                            url: "file:///usr/local/poudriere/data/packages/12-2x64",
                                            enabled: yes
                                          }
                                          # Update repos
                                          pkg update

                                          Build updated packages

                                          # Update ports
                                          poudriere ports -u
                                          poudriere bulk -j 12-2x64 -f /path/to/port-list

                                          Use the packages as normal.

                                          1. 3

                                            Note that, if you’re using pkg-base, then the output from pkg query -e "%a==0" "%o" | sort -d will contain a load of lines with just base in them. Putting that in the list of packages for Poudriere to build will cause problems. To avoid this, run pkg query -e "%a==0" "%o" | sort -d -u | grep -v -x base.

                                            You should also be able to skip this:

                                            # Update repos
                                            pkg update

                                            pkg (unlike apt) will automatically check if it needs to do an update when you do an upgrade.

                                          1. 67

                                              I don’t understand how heat maps are used as a measuring tool; on their own they seem pretty useless. If something is rarely clicked, does that mean people don’t need the feature, or that they don’t like how it’s implemented? And how do you know whether people would really like something that isn’t there to begin with?

                                              It reminds me of the Feed icon debacle: it was neglected for years and fell out of active use, which led Mozilla to say “oh look, people don’t need the Feed icon, let’s move it off the toolbar”. And then a couple of versions later they said “oh look, even fewer people use the Feed functionality, let’s remove it altogether”. Every time I see a click heatmap used to drive UI decisions I can’t shake the feeling that it’s only there to rationalize arbitrary product choices already made.

                                            (P.S. I’ve been using Firefox since it was called Netscape and never understood why so many people left for Chrome, so no, I’m not just a random hater.)

                                            1. 11

                                              Yeah, reminds me of some old Spiderman game where you could “charge” your jump to jump higher. They removed the visible charge meter in a sequel but kept the functionality, then removed the functionality in the sequel after that because nobody was using it (because newcomers didn’t know it was there, because there was no visible indication of it!).

                                              1. 8

                                                It’s particularly annoying that the really cool things, which might actually have a positive impact for everyone – if not now, at least in a later release – are buried at the end of the announcement. Meanwhile, some of the things gathered through metrics would be hilarious were it not for the pretentious marketing language:

                                                There are many ways to get to your preferences and settings, and we found that the two most popular ways were: 1) the hamburger menu – button on the far right with three equal horizontal lines – and 2) the right-click menu.

                                                Okay, first off, this is why you should proofread/fact-check even the PR and marketing boilerplate: there’s no way to get to your preferences and settings through the right-click menu. Not in a default state at least, maybe you can customize the menu to include these items but somehow I doubt that’s what’s happening here…

                                                  Anyway, assuming “get to your preferences and settings” should’ve actually been “do things with the browser”: the “meatball” menu icon has no indication that it’s a menu, and a fourth way – the old-style menu bar – is hidden by default on two of the three desktop platforms Firefox supports, and isn’t even available on mobile. If you rule out the menu bar through sheer common sense, you can skip the metrics altogether; a fair dice throw gets you 66% accuracy.

                                                People love or intuitively believe what they need is in the right click menu.

                                                I bet they’ll get the answer to this dilemma if they:

                                                • Look at the frequency of use for the “Copy” item in the right-click menu, and
                                                • For a second-order feature, if they break down right-click menu use by input device type and screen size

                                                And I bet the answer has nothing to do with love or intuition ;-).

                                                I have also divined in the data that the frequency of use for the right-click menu will further increase. The advanced machine learning algorithms I have employed to make this prediction consist of the realisation that one menu is gone, and (at least the screenshots show) that the Copy item is now only available in the right-click menu.

                                                Out of those 17 billion clicks, there were three major areas within the browser they visited:

                                                A fourth is mentioned in addition to the three in the list and, as one would expect, these four (out of… five?) areas are: the three areas with the most clickable widgets, plus the one you have to click in order to get to a new website (i.e. the navigation bar).

                                                1. 12

                                                    They use their UX experts & measurements to rationalize decisions made, as they claim, to make Firefox more attractive to (new) users, but… when do we actually see the results?

                                                  The market share has kept falling for years, whatever they claim to be doing, it is exceedingly obvious that they are unable to deliver.

                                                    Looking back, the only things I remember Mozilla doing in the last 10 years are

                                                  • a constant erosion of trust
                                                  • making people’s lives miserable
                                                  • running promising projects into the ground at full speed

                                                  I would be less bitter about it if Mozilla peeps wouldn’t be so obnoxiously arrogant about it.

                                                  Isn’t this article pretty off-topic, considering how many stories are removed for being “business analysis”?

                                                  This is pretty much “company losing users posts this quarter’s effort to attract new users by pissing off existing ones”.

                                                  1. 14

                                                      The whole UI development strategy seems to be upside down: Firefox has been hemorrhaging users for years, at a rate that the UI “improvements” have, at best, not influenced much, to the point where a good chunk of the browser “market” consists of former Firefox users.

                                                    Instead of trying to get the old users back, Firefox is trying to appeal to a hypothetical “new user” who is technically illiterate to the point of being confused by too many buttons, but somehow cares about tracking through 3rd-party cookies and has hundreds of tabs open.

                                                    The result is a cheap Chrome knock-off that’s not appealing to anyone who is already using Chrome, alienates a good part of their remaining user base who specifically want a browser that’s not like Chrome, and pushes the few remaining Firefox users who don’t specifically care about a particular browser further towards Chrome (tl;dr if I’m gonna use a Chrome-like thing, I might as well use the real deal). It’s not getting anyone back, and it keeps pushing people away at the same time.

                                                    1. 16

                                                      The fallacy of Firefox, and quite a few other projects and products, seems to be:

                                                      1. Project X is more popular than us.
                                                      2. Project X does Y.
                                                      3. Therefore, we must do Y.

                                                        The fallacy is that a lot of people are using your software exactly because it’s not X and does Z instead of Y.

                                                      It also assumes that the popularity is because of Y, which may be the case but may also not be the case.

                                                      1. 3

                                                          You’re not gonna win current users away from X by doing what X does, unless you do it much cheaper (not an option) or 10x better (hard to see how you could do more of Chrome better than Chrome).

                                                        1. 1

                                                            You might, however, stop users from switching to X by doing what X does, even if you don’t do it quite as well.

                                                      2. 4

                                                          The fundamental problem with Firefox is: it’s just slow. Slower than Chrome for almost everything. Slower at games (seriously, its canvas performance is really bad), slower at interacting with big apps like Google Docs, less smooth scrolling, even more latency between hitting a key on the keyboard and the letter showing up in the URL bar. This stuff can’t be solved with UI design changes.

                                                        1. 3

                                                          Well, but there are reasons why it’s slow - and at least one good one.

                                                          Most notably, because Firefox makes an intentionally different implementation trade-off than Chrome. Mozilla prioritizes lower memory usage in FF, while Google prioritizes lower latency/greater speed.

                                                          (I don’t have a citation on me at the moment, but I can dig one up later if anyone doesn’t believe me)

                                                          That’s partially why you see so many Linux users complaining about Chrome’s memory usage.

                                                          These people are getting exactly what they asked for, and in an age where low CPU usage is king (slow mobile processors, limited battery life, more junk shoved into web applications, and plentiful RAM for people who exercise discipline and only do one thing at once), Chrome’s tradeoff appears to be the better one. (yes, obviously that’s not the only reason that people use Chrome, but I do see people noticing it and citing it as a reason)

                                                          1. 2

                                                            I rarely use Google Docs; basically just when someone sends me some Office or Spreadsheet that I really need to read. It’s easiest to just import that in Google Docs; I never use this kind of software myself and this happens so infrequently that I can’t be bothered to install LibreOffice (my internet isn’t too fast, and downloading all updates for it takes a while and not worth it for the one time a year I need it). But every time it’s a frustrating experience as it’s just so darn slow. Actually, maybe it would be faster to just install LibreOffice.

                                                            I haven’t used Slack in almost two years, but before this it was sometimes so slow in Firefox it was ridiculous. Latency when typing could be in the hundreds or thousands of ms. It felt like typing over a slow ssh connection with packet loss.

                                                            CPU vs. memory is a real trade-off with a lot of various possible ways to do this and it’s a hard problem. But it doesn’t change that the end result is that for me, as a user, Firefox is sometimes so slow to the point of being unusable. If I had a job where they used Slack then this would be a problem as I wouldn’t be able to use Firefox (unless it’s fixed now, I don’t know if it is) and I don’t really fancy having multiple windows.

                                                            That being said, I still feel Firefox gives a better experience overall. In most regular use it’s more than fast enough; it’s just a few exceptions where it’s so slow.

                                                            1. 1

                                                              That being said, I still feel Firefox gives a better experience overall. In most regular use it’s more than fast enough; it’s just a few exceptions where it’s so slow.

                                                                I agree. I absolutely prefer Firefox to Chrome - it’s generally a better browser with a much better add-on ecosystem (Tree Style Tabs, Container Tabs, non-crippled uBlock Origin) and isn’t designed to allow Google to advertise to you. My experience with it is significantly better than with Chrome.

                                                              It’s because I like Firefox so much that I’m so furious about this poor design tradeoff.

                                                                (also, while it contributes, I don’t blame all of my slowdowns on Firefox’s design - there are many cases where it’s crippled by Google introducing some new web “standard” that sites started using before Firefox could catch up (most famously, the Shadow DOM v0 scandal with YouTube))

                                                            2. 1

                                                              I don’t have a citation on me at the moment, but I can dig one up later if anyone doesn’t believe me

                                                              I’m interested in your citations :)

                                                              1. 1

                                                                Here’s one about Google explicitly trading off memory for CPU that I found on the spot: https://tech.slashdot.org/story/20/07/20/0355210/google-will-disable-microsofts-ram-saving-feature-for-chrome-in-windows-10

                                                        2. 4

                                                          I remember more things from Mozilla. One is also a negative (integration of a proprietary application, Pocket, into the browser; it may be included in your “constant erosion of trust” point), but the others are more positive.

                                                          Mozilla is the organization that let Rust emerge. I’m not a Rust programmer myself but I think it’s clear that the language is having a huge impact on the programming ecosystem, and I think that overall this impact is very positive (due to some new features of its own, popularizing some great features from other languages, and a rather impressive approach to building a vibrant community). Yes, Mozilla is also the organization that let go of all their Rust people, and I think it was a completely stupid idea (Rust is making it big, and they could be at the center of it), but somehow they managed to wait until the project was mature enough to make this stupid decision, and the project is doing okay. (Compare to many exciting technologies that were completely destroyed by being shut out too early.) So I think that the balance is very positive: they grew an extremely positive technology, and then they messed up in a not-as-harmful-as-it-could-be way.

                                                          Also, I suspect that Mozilla is doing a lot of good work participating in the web standards ecosystem. This is mostly a guess as I’m not part of this community myself, so it could have changed in the last decade and I wouldn’t know. But this stuff matters a lot to everyone, we need to have technical people from several browsers actively participating, it’s a lot of work, and (despite the erosion of trust you mentioned) I still trust the Mozilla standards engineers to defend the web better than Google (surveillance incentives) or Apple (locking-down-stuff incentives). (Defend, in the sense that I suspect I like their values and their view of the web, and I guess that sometimes this makes a difference during standardization discussions.) Unfortunately this part of Mozilla’s work gets weaker as their market share shrinks.

                                                          1. 3

                                                            Agreed. I consider Rust a positive thing in general (though some of the behavioral community issues there seem to clearly originate from the Mozilla org), but it’s a one-off – an unexpected, pleasant surprise that Rust didn’t end in the premature death-spiral that Mozilla projects usually end up in.

                                                            Negative things I remember most are Persona, FirefoxOS and the VPN scam they are currently running.

                                                            1. 4

                                                              I consider Rust a positive thing in general (though some of the behavioral community issues there seem to clearly originate from the Mozilla org), but it’s a one-off

                                                              Hard disagree there. Pernosco is a revolution in debugging technology (a much, much bigger revolution than what Rust is to programming languages) and wouldn’t exist without Mozilla spending engineering resources on rr. I don’t know much about TTS/STT, but the DeepSpeech work Mozilla has done also worked quite nicely and seemed to make quite an impact in the field. I think I also recall them having some involvement in building a formally-proven crypto stack? Not sure about this one though.

                                                              Mozilla has built quite a lot of very popular and impressive projects.

                                                              Negative things I remember most are Persona, FirefoxOS and the VPM scam they are currently running.

                                                              None of these make me as angry as the Mr. Robot extension debacle they caused a few years ago.

                                                              1. 2

                                                                To clarify, I didn’t mean it’s a one-off that it was popular, but that it’s a one-off that it didn’t get mismanaged into the ground. Totally agree otherwise.

                                                              2. 4

                                                                the VPM [sic] scam they are currently running

                                                                Where have you found evidence that Mozilla is not delivering what they promise - a VPN in exchange for money?

                                                                1. 0

                                                                  They are trying to use the reputation of their brand to sell a service to a group of “customers” that has no actual need for it and barely an understanding what it does or for which purposes it would be useful.

                                                                  What they do is pretty much the definition of selling snake oil.

                                                                  1. 7

                                                                    I am a Firefox user and I’m interested in their VPN. I have a need for it, too - to prevent my ISP from selling information about me. I know how it works and what it’s useful for. I can’t see how they’re possibly “selling snake oil” unless they’re advertising something that doesn’t work or that they won’t actually deliver…

                                                                    …which was my original question, which you sidestepped. Your words seem more like an opinion disguised as fact than actual fact.

                                                          2. 2

                                                            It’s a tool like a lot of other things. Sure, you can abuse it in many ways, but unless we know how the results are used we can’t tell if it’s a good or bad scenario. A good usage for a heatmap could be for example looking at where people like to click on a menu item and how far should the “expand” button go.

                                                            As an event counter, they’re not great - they can get that info in better/cheaper ways.

                                                            1. 2

                                                              This is tricky and also true for surveys. I am often in a situation where a survey asks me “What do you have the hardest time with?” or “What prevents you from using language X on your current project?”, and when the answer essentially boils down to “I am doing scripting and not systems programming” or something similar, I don’t intend to tell them that they should make a scripting language out of a systems language or vice versa.

                                                              And I know the results are often taken wrongly when they are read and interpreted. There rarely is an “I like it how it is” option, or a “Doesn’t need changes”, or even a “Please don’t change this!”.

                                                              I am sure this is true about other topics too, but programming language surveys seem to be a trend, so that’s where I often see it.

                                                              1. 1

                                                                I feel like they’re easily gamed, too. I feel like this happened with Twitter and the “Moments” tab. When they introduced it, it was in the top bar to the right of the “Notifications” tab. Some time after introduction, they swapped the “Notifications” and “Moments” tab, and the meme on Twitter was how the swap broke people’s muscle memory.

                                                                I’m sure a heat map would’ve shown that after the swap, the Moments feature suddenly became a lot more popular. What that heat map wouldn’t show was user intent.

                                                                1. 1

                                                                  From what I understand, the idea behind heat maps is not to decide which feature to kill, but to measure what should be visible by default. The more stuff you add to the screen, the more cluttered and noisy the browser becomes. Heat maps help Mozilla decide if a feature should be moved from the main visible UI to some overflow menu.

                                                                  Most things they moved around can be re-arranged by using the customise toolbar feature. In that sense, you do have enough bits to make your browser experience yours to some degree.

                                                                  The killing of the feed icon was not decided with heat maps alone. From what I remember, that feature was seldom used (something they can get from telemetry and heat maps), but it was also some legacy bit rot that added friction to maintenance and whatever they wanted to do. Sometimes features that are loved by few are simply in the way of features that will benefit more people; it is sad, but it is true for codebases as old as Firefox.

                                                                  Anyway, feed reading is one WebExtension away from any user, and those add-ons usually do a much better job than the original feature ever did.

                                                                  1. 1

                                                                    I’m wondering how this whole heatmaps/metrics thing works for people who have customized their UI.

                                                                    I’d assume that the data gained from e. g. this is useless at best and pollution at worst to Mozilla’s assumption of a perfectly spherical Firefox user.

                                                                    1. 1

                                                                      @soc, I expect the browser to know its own UI and mark heat maps with context, so that clicking on a tab is flagged the same way regardless of whether tabs are on top or on the side. Also, IIRC the majority of Firefox users do not customise their UI. We live in a bubble of devs and power users who do, but that is a small fraction of the user base. Seeing what the larger base is doing is still beneficial.

                                                                      worst to Mozilla’s assumption of a perfectly spherical Firefox user.

                                                                      I’m pretty sure they can get meaningful results without assuming everyone is the same ideal user. Heat maps are just a useful way to visualise something, especially when you’re doing a blog post.

                                                                  2. 1

                                                                    never understood why so many people left for Chrome,

                                                                    The speed difference is tangible.

                                                                    1. 2

                                                                      I don’t find it that tangible. If I was into speed, I’d be using Safari here which is quite fast. There are lots of different reasons to choose a browser. A lot of people switched to Chrome because of the constant advertising in Google webapps and also because Google has a tendency of breaking compatibility or reducing compatibility and performance with every other browser, thus making Google stuff work better on Chrome.

                                                                  1. 7

                                                                    I have two observations on this article.

                                                                    First, there is no mention of the “new” wave of federated services that popped up all over the place based on ActivityPub. I find that to be a glaring omission. Even though they didn’t get mass adoption, the number of users the Mastodon network has is impressive for a bunch of open-source projects.

                                                                    Second, I think that throwing the baby out with the bathwater because a corporation has captured a large number of users inside a distributed network is pretty defeatist. Just because Gmail has a large portion of email users doesn’t mean that as a user I can’t choose a smaller provider like Tutanota or Proton.

                                                                    1. 12
                                                                      1. I didn’t mention these on purpose because I don’t have any direct experience with them (personal dislike of social media), and so don’t feel qualified to talk about them. From an outsider’s perspective though, they do seem to fit my case study of XMPP/Matrix etc.

                                                                      2. A lot of people seem to have got the impression that I hate these applications, and/or am somehow against them. I tried to make it clear in the post that I am an active user of almost all the applications I discussed, and only want to see them succeed.

                                                                      1. 3

                                                                        I’m sorry to sound like the “ackchyually” gang, but I guess my comments were based on how your title makes a sweeping generalization without the article looking into all the options.

                                                                        PS. I’m working on an ActivityPub service myself, and that might colour my views. :D

                                                                      2. 4

                                                                        Quoting an entire paragraph:

                                                                        Whenever this topic comes up, I’m used to seeing other programmers declare that the solution is simply to make something better. I understand where this thought comes from; when you have a hammer, everything looks like a nail, and it’s comforting to think that your primary skill and passion is exactly what the problem needs. Making better tools doesn’t do anything about the backwards profit motive though, and besides, have you tried using any of the centralised alternatives lately? They’re all terrible. Quality of tools really isn’t what we’re losing on.

                                                                        ActivityPub might be awesome, but that is entirely beside the point the author tries to make.

                                                                        1. 2

                                                                          How so? I’m not sure how the paragraph you’re quoting takes away from the fact that there are currently community driven federated projects and services which are popular and that the author didn’t consider.

                                                                          1. 3

                                                                            The article listed a few examples of where decentralization didn’t work out, even though it had the technical merits to be much better than the alternatives. Enumerating all possible examples where it did or didn’t work is out of the scope and besides the point - it’s never the technical merits that are lacking in these situations.

                                                                            ActivityPub isn’t used (in a very broad sense here…) by anyone other than hypergeeks like you and me. We might find each other and form communities around our interests, but the vast majority of users are not going to form their own communities like this.

                                                                            1. 1

                                                                              but the vast majority of users are not going to form their own communities like this.

                                                                              That’s fine, but considering this as the only metric for success is a poor choice.

                                                                        2. 4

                                                                          Try to run your own mail server, though.

                                                                          1. 3

                                                                            There are options out there for allowing other people to do the nitty gritty of running the server with minimal costs and time investment. I’m running a purelymail account with multiple domains that I own.

                                                                            1. 3

                                                                              I did that for quite some time. Only switched to a commercial hosted provider because of general sysadmin burnout / laziness, not anything mail specific. The problem of Gmail treating my server as suspicious was easily solved by sending one outgoing email from Gmail to my server.

                                                                              1. 4

                                                                                You get blacklisted to hell once you put your mail server on a VPS with high enough churn in its IP neighborhood.

                                                                                And there is no power on Earth (for now) that would convince GOOG or MSFT to reconsider. Their harsh policies are playing into their hands – startups buy their expensive services instead of running a stupid postfix instance.

                                                                                We (the IT sector) need to agree on a way to group hosts on a network by their operator in a much finer way, so that regular leasers are not thrown in the same bag as the spammers or victims of a network breach.

                                                                                1. 4

                                                                                  You get blacklisted to hell once you put your mail server on a VPS with high enough churn in its IP neighborhood.

                                                                                  Is that so? So far the main reason I see for mails not arriving is the use of Mailchimp. No hate on Mailchimp there, just experience, since many companies use their service.

                                                                                  Meanwhile Google is super fine, as long as you make sure SPF and DKIM/DMARC are set up correctly. Oh, and of course reverse DNS (the PTR record) should be set up correctly, like with any server. They are even nice enough to report back why your mail was rejected and what to do about it if you don’t do the above.

                                                                                  Experience is based on Mailchimp usage in multiple companies and e-mail servers in various setups (new domain, new IP, moving servers, VPS, dedicated hoster, small hosters, big hosters). Didn’t have a case so far where Google would have rejected an email, once the initial SPF/DKIM-Setup/PTR was running correctly.

                                                                                  The “suspicious email” stuff is usually analysis of the e-mail itself. The main causes are things like reply-to with different domain, HTML links, where it says example.com, but actually (for example for click tracking purposes) links somewhere else.

                                                                                  Not telling anyone they should run a mail server, just throwing in some personal experiences, because the only real-life examples where Google would reject an email were because of SPF, DKIM or PTR being misconfigured or missing. For accepted-but-thrown-into-spam it’s mostly reply-to and links. I have close to no experience with MSFT. Only ever used a small noname VPS and a big German dedicated hosting provider to send to hotmail addresses, and it worked.
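                                                                                  To make the SPF side of that concrete, here is a minimal sketch of what the TXT record under discussion looks like and how it splits into mechanisms. The record string, IP and domain below are placeholders, not any real provider’s policy:

```python
# Minimal sketch: recognise an SPF TXT record and split it into its
# mechanisms. Real SPF evaluation (RFC 7208) is far more involved;
# this only illustrates the record's shape.
def parse_spf(txt_record):
    """Return the list of SPF mechanisms, or None if this isn't SPF."""
    parts = txt_record.split()
    if not parts or parts[0] != "v=spf1":
        return None
    return parts[1:]

# Placeholder record, the kind of thing you'd publish as a TXT record.
record = "v=spf1 ip4:203.0.113.5 include:_spf.example.org ~all"
print(parse_spf(record))
# → ['ip4:203.0.113.5', 'include:_spf.example.org', '~all']
```

                                                                                  The receiving server looks up this record on the envelope-from domain and checks whether the connecting IP matches one of the mechanisms; `~all` is the soft-fail catch-all.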

                                                                                  1. 3

                                                                                    Is that so?

                                                                                    I hosted my own postfix instance on a VPS for years (well, not since last summer or so, but I’ll eventually get back to it). I had my email bounced from hotmail’s server, and the reason given by the bounce email was that my whole IP block was being banned. It tends to resolve itself after a few days. In the meantime, I am dead to hotmail users. Google is even more fickle. I am often marked as spam, including in cases where I was replying to an email.

                                                                                    I don’t believe it was a misconfiguration of any kind. I did everything except DKIM, and tested that against dedicated test servers (I believe you can send an email to them, and they respond with how spammy your mail server looks).

                                                                                    So yes, it is very much so.

                                                                                    1. 2

                                                                                      Google is even more fickle. I am often marked as spam, including in cases where I was replying to email.

                                                                                      Same here. After finally setting up DKIM, these hard-to-diagnose-and-debug problems went away completely, AFAICT.

                                                                                      1. 1

                                                                                        Interesting. Thank you for the response!

                                                                                        Just curious: if you open the detail view of the email, does it say more? When I played with it, it usually did tell why it thought a mail was spammy.

                                                                                        1. 1

                                                                                          Just curious: if you open the detail view of the email, does it say more?

                                                                                          Didn’t think of that: when I send email to my own gmail account, it does not end up in the spam folder. I have yet to hack into other people’s spam folders. :-)

                                                                                          Right now I can’t make the test because I’m not using my own mail server (I’m currently using the mail service of my provider). I sent an email to myself anyway, and the headers added by Gmail say SPF and DKIM are “neutral” (that is, not configured). I will set them up once I reinstate my own Postfix instance, though.
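                                                                                          For anyone who wants to script that check instead of eyeballing headers, here is a small stdlib-only sketch that pulls the SPF/DKIM verdicts out of an Authentication-Results header. The raw message below is a made-up sample, not real Gmail output:

```python
# Sketch: parse the Authentication-Results header a receiving server
# (e.g. Gmail) adds to delivered mail, to see how SPF/DKIM were judged.
from email import message_from_string

# Placeholder message; only the Authentication-Results line matters here.
raw = """\
Authentication-Results: mx.google.com; spf=neutral; dkim=neutral
From: me@example.org
To: me@gmail.com
Subject: test

hello
"""

msg = message_from_string(raw)
auth = msg["Authentication-Results"]
# Drop the leading authserv-id, then split "method=result" pairs.
results = dict(
    part.strip().split("=", 1)
    for part in auth.split(";")[1:]
    if "=" in part
)
print(results)  # → {'spf': 'neutral', 'dkim': 'neutral'}
```

                                                                                          Real headers carry extra detail after each verdict (RFC 8601 allows comments and properties), so a production parser would need more care, but this is enough to spot a “neutral” or “fail”.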

                                                                                      2. 2

                                                                                        It’s a frequent issue with Digital Ocean, for example.

                                                                              1. 1

                                                                                There are so many reasons to make stuff. Among them: it can be for fun, for learning, for money, or because it solves a problem someone has, and that’s only some of them.

                                                                                I think they can all overlap, but there are also ones that, for example, make money but solve zero problems (if you don’t consider making money the problem) or even create them (spam, ransomware, answers to questions nobody ever asked, etc.).

                                                                                Research is fairly often doing stuff beyond (yet) obvious usefulness.

                                                                                1. 1

                                                                                  I always like to think of Ruby as the attempt to bring bits of Smalltalk into a world dominated by PHP/Perl/Python.

                                                                                  While I’ve never been too much into Ruby, it’s interesting how with Rails, etc. it re-popularized the concept of MVC (stemming from Smalltalk) and also how it found its way into places, where I totally wouldn’t expect to find it, like Hashicorp’s Vagrant.

                                                                                  1. 8

                                                                                    with Rails, etc. it re-popularized the concept of MVC (stemming from Smalltalk)

                                                                                    I think this is a fallacy: MVC in GUIs in Smalltalk and MVC in HTTP in Ruby are different patterns, which emerge in different contexts and just happen to share a catchy name. See

                                                                                    1. 1

                                                                                      Hey, thanks for sharing those. Very insightful. I however still wouldn’t say they just share the name. Yes, MVC as in Smalltalk isn’t really what we see in HTTP frameworks. However I’d still say that it is very much inspired by MVC in Smalltalk. Maybe saying it repopularized is bad wording though.

                                                                                      It’s similar when people talk about object oriented programming though. In Smalltalk it’s still clearer and different, also with class browser, message passing and the integration into a graphical environment. Just like the way debugging works is something some projects take inspiration from.

                                                                                      And when it comes to development in let’s say Java or C++ in real life projects, there’s very little resembling object oriented programming like in Smalltalk.

                                                                                    1. 56

                                                                                      IMHO it’s hard to get much out of reading a codebase without necessity. Without a reason why, you won’t do it, or you won’t get much out of it without knowing what to look for.

                                                                                      1. 5

                                                                                        Yeah, this seems a bit like asking “What’s your favorite math problem?”

                                                                                        I dunno. Always liked 7+7=14 since I was a kid.

                                                                                        Codebases exist to do things. You read a codebase because you want to modify what that is, or fix it because it’s not doing the thing it’s supposed to. Ideally, my favorite codebase is the one I get value out of constantly but never have to look at. CPU microcode, maybe?

                                                                                        1. 4

                                                                                          I often find myself reading codebases when looking for examples for using a library I am working with, or to understand how you are supposed to interact with some protocol. Open source codebases can help a lot there. It’s not so much 7 + 7 = 14, but rather 7 + x + y = 23, and I don’t know how to do x or y to get 23, but there are a few common components between the math problems. Maybe one solution can help me understand another?

                                                                                          1. 2

                                                                                            I completely agree. I do the same thing.

                                                                                            When I am solving a similar problem, or I’m interested in a class of problems, sometimes I find reviewing a codebase very informative. In my mind, what I’m doing is walking through the various things I might want to do and then reviewing the code structure to see how they’re doing it. It’s also bidirectional: a lot of times I see things in the structure and then wonder what sorts of behavior I might be missing.

                                                                                            I’m not saying don’t review any codebases at all. I’m simply pointing out that without context, there’s no qualifiers for one way of coding to be viewed as better or worse than any other. You take the context to your codebase review, whether explicitly or completely inside your mind.

                                                                                            There’s a place for context-free codebase reviews, of course. It’s usually in an academic setting. Everybody should walk through the GoF and functional data structures. You should have experience in a generic fashion working through a message loop or queuing system and writing a compiler. I did and still do, but in the same way I read up on what’s going on in mRNA vaccinations: familiarity. There exist these sorts of things that might help when I need them. I do not necessarily have to learn or remember them, but I have to be able to get them when I want. I know these coding details at a much lower level than I do biology; after all, I’m the guy who’s going to use and code them if I need them. But the real work is matching the problem context up (gradually, of course) with the various implementation systems you might want to use.

                                                                                            There are folks who are great problem-solvers that can’t code. That sucks. There are other folks who can code like the wind but are always putting some obscure yet clever chunk of stuff out and plugging it in somewhere. That also sucks. Good coders should be able to work on both sides of that technical line and move back and forth freely. I review codebases to review how that problem-solving line changed over the years of development, thinking to myself “Where did these guys do too much coding? Too little? Why are these classes or modules set up the way they are (in relation to the problem and maintaining code)?”

                                                                                            That’s the huge value you bring from reviewing codebases: more information on the story of developing inside of that domain. The rest of the coding stuff should be rote: I have a queue, I have a stack, etc. If I want to dive down to that level, start reviewing object interface strategy, perhaps, I’m still doing it inside of some context: I’m solving this problem and decided I need X, here’s a great example of X. Now, start reading and go back to reviewing what they’ve done against the problem you’re solving. Don’t be the guy who brings 4,000 lines of code to a 1 line problem. They might be great lines of code, but you’re working backwards.

                                                                                            1. 1

                                                                                              Yeah, I end up doing this a lot for, e.g., obscure system-specific APIs. Look at projects that’d use it / GitHub code search, chase the ifdefs.

                                                                                            2. 2

                                                                                              Great Picard’s Theorem, obvs. I always imagined approaching an essential singularity and seeing all infinity unfold, like a fractal flower, endlessly repeated in every step.

                                                                                              1. 1

                                                                                                I’d disagree. While sure, one could argue you just feed a computer what to do, you could make a similar statement about, for example, architecture, where (very simplified) you draw what workers should do and they do it.

                                                                                                Does that mean that architects don’t learn from the work of other architects? I really don’t think so.

                                                                                                But I also don’t think that “just reading” code or copying some “pattern” or “style” from others is what makes you better. It’s more that if you write code only on your own, or with a somewhat static, like-minded team, your mental constructs don’t really change, while different code bases can challenge your mental model or give you insights into a different mental/architectural model that someone else came up with.

                                                                                                For me that’s not so different from learning different programming languages - like really learning them, not just being able to figure out what it means or doing the same thing you did before with different syntax.

                                                                                                I am sure it’s not the same for everyone, and it surely depends on different learning styles, but I assume that most people commenting here don’t read code like they read a calculation, and I’d never recommend that people just “read some code”. It doesn’t work, just like you won’t be a programmer after just reading a book on programming.

                                                                                                It can be a helpful way of reflecting on one’s own programming, but very differently from most code reviews (real ones, not some theoretical optimal code review).

                                                                                                Another thing, maybe more psychological: I think everyone has seen bad code, even if it’s just some of their own code from a few years ago. Sometimes it helps motivation to come across the opposite, reading a nice code base, to be able to visualize a goal. The closer it is to practice the better, in my opinion. I am not so much a fan of examples or example apps, because they might not work in real-world code bases, but that’s another topic.

                                                                                                I hope, though, that nobody feels like they need to read code when they don’t feel like it and it gives them nothing. Minds work differently, and forcing yourself to do something often seems to counteract how much is actually learned.

                                                                                              2. 4

                                                                                                Well, it varies. Many contributions end up being a grep away and only make you look at a tiny bit of the codebase. Small codebases can be easier to grasp, as can those with implementation overviews (e.g. ARCHITECTURE.md)

                                                                                                1. 3

                                                                                                  “Mathematics is not a spectator sport” - I think the same applies to coding.

                                                                                                  1. 1

                                                                                                    I have to agree with this; I’ve found the most improvement comes from contribution, and having my code critiqued by others. Maybe we can s/codebases to study/codebases to contribute to/?

                                                                                                    1. 2

                                                                                                      Even if you don’t have to modify something, reading something out of a necessity to understand it makes it stick better (and more interesting) than just reading it for the sake of reading. That’s how I know more about PHP than most people want to know.

                                                                                                      1. 1

                                                                                                        Years ago working on my MSc thesis I was working on a web app profiler. “How can I get the PHP interpreter to tell me every time it enters or exits a function in user code” led to likely a similar level of “I know more about the internals of PHP than I would like” :D

                                                                                                  1. 1


                                                                                                    The code is both elegant and simple. Not actually sure how I found it and I am not using it, but really enjoy reading the code.

                                                                                                    Go standard library. I think that code at least used to be a great example of being pragmatic when programming and making decisions. Rob Pike’s ivy is similar.

                                                                                                    1. 24

                                                                                                      I am confused about why the REST crowd is all over gRPC and the like. I thought the reason REST became a thing was that they didn’t really think RPC protocols were appropriate. Then Google decides to release a binary (no less) RPC protocol, and all of a sudden everyone thinks RPC is what everyone should do. SOAP wasn’t even that long ago. It’s still used out there.

                                                                                                      Could it be just cargo cult? I’ve yet to see a deployment where the protocol is the bottleneck.

                                                                                                      1. 14

                                                                                                        Because a lot of what is called REST ends up as something fairly close to an informal RPC over HTTP in JSON, maybe with an ad-hoc URI call scheme, and with those semantics, actual binary RPC is mostly an improvement.

                                                                                                        (Also, everyone flocks to Go for services and discovers that performant JSON is a surprisingly poor fit for that language.)

                                                                                                        1. 14

                                                                                                          I’d imagine that the hypermedia architectural constraints weren’t actually buying them much. For example, not many folks even do things like cacheability well, never mind building generic hypermedia client applications.

                                                                                                          But a lot of the time the bottleneck is around delivering new functionality. RPC-style interfaces are cheaper to build, as they’re conceptually closer to “just making a function call” (albeit one that can fail halfway through), whereas more hypermedia-style interfaces require a bit more planning. Or at least thinking in a way that I’ve not seen often.

                                                                                                          1. 10

                                                                                                            There has never been much, if anything at all, hypermedia specific about HTTP; it’s just a simple text based stateless protocol on top of TCP. In this day and age, that alone buys anyone more than any binary protocol. I cannot see why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations, which I don’t think are common to encounter even among tech giants.

                                                                                                            Virtually every computing device has a TCP/IP stack these days; $2 microcontrollers have it. Text protocols were a luxury in the days when each kilobyte came at a high cost. We are 20-30 years past that time. Today, even in the IoT world, HTTP and MQTT are the go-to choices for virtually everyone; no one bothers to buy into the hassle of an opaque protocol.

                                                                                                            I agree with you, but I think the herd is taking the wrong direction again. My suspicion is that the whole REST hysteria was a success because it was JSON over HTTP, which are great, easy-to-grasp, and reliable technologies, not because of the alleged architectural advantages, as you well pointed out.

                                                                                                            SOAP does provide “just making a function call”; I think the reason it lost to RESTful APIs was that requests were not easy to assemble without resorting to advanced tooling, and implementations in new programming languages were demanding. I do think gRPC suffers from these problems too. It’s all fun and games while developers are hyped “because Google is doing it”; once the hype dies out, I picture an old, embarrassing beast no one wants to touch, along the lines of GWT, App Engine, etc.

                                                                                                            1. 9

                                                                                                              I cannot reason as to why anyone would want to use a binary protocol over a human readable (and writeable) text one, except for very rare situations of extreme performance or extreme bandwidth optimisations.

                                                                                                              Those are not rare situations, believe me. Binary protocols can be much more efficient, in bandwidth and code complexity. In version 2 of the product I work on we switched from a REST-based protocol to a binary one and greatly increased performance.

                                                                                                              As for bandwidth, I still remember a major customer doing their own WireShark analysis of our protocol and asking us to shave off some data from the connection setup phase, because they really, really needed the lowest possible bandwidth.
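                                                                                                              To make the bandwidth point concrete, here’s a toy sketch (not this product’s actual protocol; the field layout is made up) comparing a JSON encoding of one record with a fixed binary layout:

```python
import json
import struct

# A hypothetical telemetry sample: id, temperature, flags.
sample = {"id": 1234, "temp": 21.5, "flags": 7}

# Text encoding: JSON over the wire.
text = json.dumps(sample).encode("utf-8")

# Binary encoding: little-endian, 4-byte unsigned id,
# 4-byte float temperature, 1-byte flags => 9 bytes total.
binary = struct.pack("<IfB", sample["id"], sample["temp"], sample["flags"])

print(len(text), len(binary))
```

                                                                                                              The binary record is 9 bytes versus several tens of bytes of JSON, before even considering parse cost; the trade-off is that the layout is opaque without the schema.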

                                                                                                              1. 2

                                                                                                                hypermedia specific about HTTP

                                                                                                                Sure, but the framing mostly comes from Roy Fielding’s thesis, which compares network architectural styles, and describes one for the web.

                                                                                                                But even then, you have the constraints around uniform access, cacheability and a stateless client, all of which are present in HTTP.

                                                                                                                just a simple text based stateless protocol

                                                                                                                The protocol might have comparatively few elements, but that’s just meant that other folks have had to specify their own semantics on top. For example, header values are (mostly) just byte strings, so, in some sense, it’s valid to send Content-Length: 50, 53 in a response to a client. Interpreting that and maintaining synchronisation within the protocol is hardly simple.

                                                                                                                herd is taking the wrong direction again

                                                                                                                I really don’t think that’s a helpful framing. Folks aren’t paid to ship something that’s elegant, they’re paid to ship things that work, so they won’t want to fuck about too much. And while it might be crude and inelegant, chunking JSON over HTTP achieved precisely that.

                                                                                                                By and large gRPC succeeded because it lets developers ignore a whole swathe of issues around protocol design. And so far, it’s managed to avoid a lot of the ambiguity and interoperability issues that plagued XML based mechanisms.

                                                                                                            2. 3

                                                                                                              Cargo Cult/Flavour of the Week/Stockholm Syndrome.

                                                                                                              A good portion of JS-focussed developers seem to act like cats: they’re easily distracted by a new shiny thing. Look at the tooling. Don’t blink, it’ll change before you’ve finished reading about what’s ‘current’. But they also act like lemmings: once the new shiny thing is there, they all want to follow the new shiny thing.

                                                                                                              And then there’s the ‘tech’ worker generic “well if it works for google…” approach that has introduced so many unnecessary bullshit complications into mainstream use, and let slide so many egregious actions by said company. It’s basically Stockholm syndrome. Google’s influence is actively bad for the open web and makes development practices more complicated, but (a lot of) developers lap it up like the aforementioned Lemming Cats chasing a saucer of milk that’s thrown off a cliff.

                                                                                                              1. 2

                                                                                                                Partly for sure. It’s true for everything coming out of Google. Of course this also leads to a large userbase and ecosystem.

                                                                                                                However, I personally dislike REST. I do not think it’s a good interface, and I prefer functions and actions over forcing everything (even if sometimes done very well) into modifying a model or resource. But it also really depends on the use case. There certainly is standard CRUD stuff where it’s the perfect design, and that’s the most frequent use case!

                                                                                                                However, I was really unhappy when SOAP essentially killed RPC-style interfaces, because it brought problems that are not inherent in RPC interfaces.

                                                                                                                I really liked JSON-RPC as a minimal approach. Sadly it didn’t really pick up (only much later, inside Bitcoin, etc.). This led to lots of ecosystems and designs being built around REST.
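                                                                                                                For reference, a JSON-RPC 2.0 call really is this minimal; a sketch (the subtract method is just the spec’s canonical example):

```python
import json

def make_request(method, params, request_id):
    # Minimal JSON-RPC 2.0 request envelope: no IDL, no code
    # generation, just a tagged JSON object with an id for
    # matching the response.
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

req = make_request("subtract", [42, 23], 1)
print(req)
```

                                                                                                                The whole protocol fits in a page of spec, which is exactly the kind of simplicity being mourned here.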

                                                                                                                Something that has also been very noticeable, with REST being the de-facto standard way of doing APIs, is that oftentimes it’s not really followed. Many, I would say most, REST APIs do have very RPC-style parts. There’s also a lot of mixing up HTTP+JSON with REST, and RPC with protobufs (or at least some binary format). Sometimes those “mixed”-pattern HTTP interfaces have very good reasons to be the way they are. Sometimes “late” feature additions simply don’t fit into the well-designed REST API and one would have to break a lot of rules anyway, leading to the question of whether the parts worth preserving justify their cost. But that’s a very specific situation, one that typically only arises years into the project, often triggered by the business side of things.

                                                                                                                I was happy about gRPC because it made people give RPC another shot. At the same time I am pretty unhappy about it being unusable for applications where web interfaces need to interact. Yes, there are “gateways” and “proxies”, and while probably well designed in one way or another, they come at a huge price, essentially turning them into a big hack, which is also a reason why there are so many gRPC-alikes now. None, as far as I know, has a big ecosystem. Maybe Thrift. And there are many approaches not mentioned in the article, like webrpc.

                                                                                                                Anyways, while I don’t think RPC (and certainly not gRPC) is the answer to everything, I also don’t think RESTful services are, nor GraphQL.

                                                                                                                I really would have liked to see what JSON-RPC would have turned into if it had gotten more traction, because I can imagine it working for many applications that now use REST. But this is more a curiosity about an alternate reality.

                                                                                                                So I think that, like all Google projects (Go, TensorFlow, Kubernetes, early Angular, Flutter, …), there is a huge cargo-cult mentality around gRPC. I do however think that there are quite a lot of people who would have loved to do it themselves, if that could guarantee it would not be a single person or company using it.

                                                                                                                I also think the cargo cult is partly the reason contenders aren’t picking up. In cases where I use RPC over REST I certainly default to gRPC, simply because there’s an ecosystem. I think a competitor would have a chance, though, if it managed a much simpler implementation, which most do.

                                                                                                                1. 1

                                                                                                                  I can’t agree more with that comment! I think the RPC approach is fine most of the time. Unfortunately, SOAP, gRPC and GraphQL are too complex. I’d really like to see something like JSON-RPC, with a schema language to define schemas (like the Protobuf or GraphQL IDL), used in more places.

                                                                                                                  1. 2

                                                                                                                    Working in a place that uses gRPC quite heavily, the primary advantage of passing protobufs instead of just json is that you can encode type information in the request/response. Granted you’re working with an extremely limited type system derived from golang’s also extremely limited type system, but it’s WONDERFUL to be able to express to your callers that such-and-such field is a User comprised of a string, a uint32, and so forth rather than having to write application code to validate every field in every endpoint. I would never trade that in for regular JSON again.

                                                                                                                    1. 1

                                                                                                                      Strong typing is definitely nice, but I don’t see how that’s unique to gRPC. Swagger/OpenAPI, JSON Schema, and so forth can express “this field is a User with a string and a uint32” kinds of structures in regular JSON documents, and can drive validators to enforce those rules.
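                                                                                                                      To illustrate, a “User with a string and a uint32” can be stated in JSON Schema and checked against a plain JSON document. This is a stdlib-only sketch with a hand-rolled check for this one schema (the field names are made up, and a real service would use a full JSON Schema validator library):

```python
import json

# A JSON-Schema-style description of the User shape:
# a string name and an unsigned 32-bit id.
user_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "id": {"type": "integer", "minimum": 0, "maximum": 2**32 - 1},
    },
    "required": ["name", "id"],
}

def validate_user(doc, schema):
    # Tiny manual check for this particular schema only.
    if not isinstance(doc, dict):
        return False
    for field in schema["required"]:
        if field not in doc:
            return False
    name_ok = isinstance(doc.get("name"), str)
    uid = doc.get("id")
    id_ok = isinstance(uid, int) and 0 <= uid <= 2**32 - 1
    return name_ok and id_ok

print(validate_user({"name": "ada", "id": 7}, user_schema))
print(validate_user({"name": "ada", "id": -1}, user_schema))
```

                                                                                                                      The same schema document can also drive code generators and API docs, which is where tools like OpenAPI come in.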

                                                                                                              1. 3

                                                                                                                Sometimes it just gives me precisely what I want. There is a lot of talk about which is somehow better, but to me it’s more a decision like which algorithm I use, not what I am a fan of. Printing is amazing for having lists of something (actions, or today I was looking into parsing a spreadsheet), while other approaches emphasize state.

                                                                                                                1. 1

                                                                                                                  This is a really interesting article for FVWM3. Not too much seems to be tied to NetBSD other than installing packages and things like the exact volume control commands.

                                                                                                                  Also it’s maybe obvious, but a screenshot of the result is at the very bottom.

                                                                                                                  1. 1

                                                                                                                    What’s up with the shrunken “mc” and “vi” windows (?) on the left of that screenshot — I’ve long since forgotten anything I ever knew about mwm… does it shrink/dock windows like that?

                                                                                                                    1. 4

                                                                                                                      twm-like window managers often minimize windows by putting an icon on the root, I think that’s just a preview of a minimized window.

                                                                                                                      1. 1

                                                                                                                        This says a lot about users of the desktop today… goodness me.

                                                                                                                        1. 1

                                                                                                                          It says a lot that I haven’t used CDE since the 1990s? There’s a lot I’ve forgotten in the past 25 years :-)

                                                                                                                    1. 1

                                                                                                                      It looks really nice!

                                                                                                                      I think something that gets forgotten about, especially with the various Markdown dialects is that a nice property of the original Markdown was that it was pretty close to what one would write in a Readme or an email, Usenet, etc. if there was only regular text (ASCII for example) available.

                                                                                                                      This has the advantage that, unless the author doesn’t care, just doing cat README.md or similar would still be very readable.

                                                                                                                      Of course there are a lot of extensions, and there are now many other use cases, like static site generators, websites, chat applications, manuals, etc., where the extensions are interesting. While it of course would be nice to have a “one size fits all” type of small markup language, I wonder if it wouldn’t make sense for some of these projects to start with clearly defined use cases (and what isn’t one). That gives both developers a measure to prevent feature creep and users a quick way to see if it fits their project.

                                                                                                                      So I am curious. What would be the target applications for this specific markup language?

                                                                                                                      1. 2

                                                                                                                        Originally, the goal was to be “easy to type”. Initially, I made a point of avoiding the use of the shift key, which led to duplication being favored (code was initially wrapped in ''), but that’s harder to type, really. I have found typeup makes it really easy to reformat notes written in no format at all. My notes often use paragraphs instead of lists and never use Markdown tables because of their overhead. In typeup, they look almost the same except for some wrapping. For example, instead of the trouble of a Markdown table, you’d just write:

                                                                                                                        Irish: Celtic
                                                                                                                        Kannada: Dravidian
                                                                                                                        Swahili: Bantu

                                                                                                                        and to make it typeup:

                                                                                                                        Language: Family
                                                                                                                        Irish: Celtic
                                                                                                                        Kannada: Dravidian
                                                                                                                        Swahili: Bantu

                                                                                                                        This has the added benefit of being unintrusive visually (if a bit ugly). Typeup also aims to provide better tooling than existing languages, for example very simple builtin metadata so no bodging on frontmatter. There’s a lot of opportunities for improvement on Markdown in extensibility and tooling.

                                                                                                                        I should emphasize this is the first version of typeup. A lot of stuff is yet to be added: math, extensibility, footnotes, escaping, better nesting, and plumbing to make it easier to use typeup in a Markdown-dominated world and to showcase it (SSGs etc.). One of the things I hope to do is add Pandoc AST as an output format, which I’ve seen no other format do. That would open up many output formats and lots of customizability.

                                                                                                                      1. 7

                                                                                                                        At the chance of having missed it, is the source code available somewhere?

                                                                                                                        1. 1

                                                                                                                          Getting it cleaned up and on GitHub will be next weekend’s project. :)

                                                                                                                        1. 2

                                                                                                                          Bonuses, compensation, raises often get really odd. Having had the luxury of being able to mess around a bit with them, not depending so much on a steady income yet, it feels like there are always factors involved that create the wrong incentives.

                                                                                                                          There were bonuses for…

                                                                                                                          …“pushing hard on a project” when the only reason for staying longer was waiting for time to pass because the workplace was closer to a bar one would go to. Or any other reason for staying physically at the office longer than usual.

                                                                                                                          … being the most vocal about personal live hardships.

                                                                                                                          … being the most vocal about effort being put into a project.

                                                                                                                          … essentially doing vanity metrics, or verifiably wrong claims of graphs going a certain way (making things faster, reducing failures, etc., when really just not letting requests in and not counting them as failures, for example).

                                                                                                                          … seeming stressed.

                                                                                                                          Pretty sure I’ve done all of that at some point in my career, at least unintentionally.

                                                                                                                          I think a lot of this is related to people seeing a hundred percent of what they do, but less than that of what others do. And perception, as in the examples above, is easily skewed. Secret agreements and big projects certainly reinforce that.

                                                                                                                          Adding incentives like that is really hard to do right, and there are certainly other areas where they backfire. Tax benefits, grants, etc. have many times led people and companies to do the exact opposite of what was intended. That’s neither to say that there is malicious intent nor that these measures are bad per se, just that they can lead to unintended outcomes rather easily, it seems.

                                                                                                                          1. 8

                                                                                                                            I totally support the tag proposal!

                                                                                                                            If a specific tag for Raku could not be accepted, another possibility would be to change the description of the Perl tag in the spirit of the APL tag to include Raku specifically. It is not ideal but an option.

                                                                                                                            1. 5

                                                                                                                              I think this is probably best. I don’t think that there’s enough of a difference between the two languages to support an extra tag (I know there’s some difference. Maybe I’m completely off the mark here, I’ve only slightly used Raku). I’d point to the “lisp” tag as an example of a tag for various languages that are only tangentially related (admittedly clojure is a separate tag) and I’ve seen a lot more around here about lisp than perl.

                                                                                                                              Maybe Perl people are different from Lisp people in what they do and don’t want to see, but I know that a lot of Common Lispers don’t care at all for Scheme, and Schemers don’t care for Common Lisp.

                                                                                                                              1. 11

                                                                                                                                They’re basically two different languages, and I’d personally appreciate being able to browse Perl (5) posts without seeing Raku posts. But I’m okay with Raku posts under the perl tag if the amount of Raku posts is still low.

                                                                                                                                1. 2

                                                                                                                                  I would differentiate between the two, as I think the communities only partly overlap. There are also some historical reasons for that, I believe.

                                                                                                                                  Personally I’d prefer to be able to filter between one or the other, so I agree with the statement:

                                                                                                                                  Having a separate tag will both enable Raku enthusiasts to promote the language, and Perl 5 curmudgeons to filter it out ;)