1.  

    So how long till we see a benchmark between the different implementations of “true”?

    1. 5

      Putting on my “very definitely not a scientist” hat:

      $ time for i in `seq 1000`; do true; done
      
      real	0m0.010s
      user	0m0.004s
      sys	0m0.004s
      $ time for i in `seq 1000`; do /bin/true; done
      
      real	0m1.280s
      user	0m0.112s
      sys	0m0.320s
      $ time for i in `seq 1000`; do sh -c ''; done
      
      real	0m1.552s
      user	0m0.172s
      sys	0m0.292s
      
      $ lsb_release -d
      Description:	Debian GNU/Linux 8.10 (jessie)
      
      1.  

        Beware of constructs like this if you write production code and intend it to be portable.

        $ time for i in `seq 1000`; do true; done
        Syntax error: "do" unexpected
        

        Also: true is not guaranteed to be a builtin; : is.

        1.  

          The “very definitely not a scientist” hat has a tag on the inside saying in big red letters “very definitely not for production use”. =) You would indeed need to subshell, or use /bin/time instead of bash’s builtin time, and I made no effort whatsoever to be portable (or to ensure caches were filled or the processor unburdened with background tasks).

          As another possibly amusing case:

          $ readlink /bin/sh
          dash
          $ time for i in `seq 1000`; do bash -c ''; done
          
          real	0m2.488s
          user	0m0.180s
          sys	0m0.332s
          

          Anybody want to try with zsh? ;)

      2.  

        Any slowness in the script version could probably be linked to some other bloat in that program.

        1.  

          Careful, the troll police is hawk-eyed and doesn’t appreciate truths like that.

      1. 6

        You’re all missing the chance to debate a real philosophical question: is a zero-length /bin/true a binary file or a text file?

        1. 5

          There are no binary files in Unix, and there are no text files either. This is one of the main differentiators between Unix and other systems, especially systems that preceded it, like Multics. In those systems the syntax for dealing with text files is different from the syntax for dealing with binary files. In Unix, there are just files, uninterpreted streams of bytes. Of course there are textual and binary interfaces, but that’s a different thing. The syntax is the same, the semantics differ.

        1. 10

          Yeah… no. I don’t want AMP in my emails.

          Email already has a reduced HTML subset, with most email clients blocking a large set of HTML features by default. Most emails are also not big unless you spam pictures in there (which you can strip out and simply not download), so I don’t really see any tangible advantage of AMP over regular HTML email (or plain-text mail).

          1. 3

            Dear god, give me plain text emails, please!

            1. 2

              I block all pure-HTML e-mails. Occasionally I check out what was blocked. It’s all spam. I suspect AMP e-mail will be the same. Regular people will send three copies of their e-mails (in text, HTML, and AMP), which I will read in plain text, and spammers (also known as marketers) will send only HTML and/or AMP.

              That being said, this attack of Google over the open Internet is the last straw that made me ditch all Google programs and services.

          1. 46

            Half this article is out of date as of two days ago: GOPATH is mostly going to die with vgo, as is the complaint about deps.

            Go is kind of an example of what happens when you focus all effort on engineering and not research.

            Good things go has:

            • Go has imo the best std library of any language.
            • Go has the best backwards compatibility I have seen (I’m pretty sure code from go version 1.0 still works today).
            • Go has the nicest code manipulation tools I have seen.
            • The best race condition detector tool around.
            • An interface system that is incredibly useful in practice. (I once used the stdlib http server over a serial port because net.Listener is a simple interface.)
            • The fastest compiler to use, and to build from source.
            • Probably the best cross compilation story of any language, and uniformity across platforms, including ones you haven’t heard of.
            • One of the easiest to distribute binaries across platforms (this is why hashicorp, cockroachdb, ngrok etc choose go imo).
            • A very sophisticated garbage collector with low pause times.
            • One of the best runtime performance to ease of use ratios around.
            • One of the easier to learn languages around.
            • A compiler that produces byte for byte identical binaries.
            • Incredibly useful libraries maintained by Google (e.g. here’s a complete ssh client and server anyone can use: https://godoc.org/golang.org/x/crypto/ssh).
            • Lots of money invested in keeping it working well from many companies: Cloudflare, Google, Uber, HashiCorp and more.

            Go is getting something that looks like a damn good versioning story, just way too late.

            Go should have, in my opinion and in order of importance:

            • Ways to express immutability as a concurrent language.
            • More advanced static analysis tools that can prove properties of your code (perhaps linked with the above).
            • Generics.
            • Some sort of slightly more sophisticated pattern matching.

            Go maybe should have:

            • More concise error handling?
            1. 51

              I have been involved with Go since the day of its first release, so almost a decade now, and it has been my primary language for almost as long. I have written the Solaris port, the ARM64 port, and the SPARC64 port (currently out of tree). I have also written much Go software for myself and for others.

              Go is my favorite language, despite everything I write below this line.

              Everything you say is true, so I will just add more to your list.

              My main problem with Go is that, as an operating system, it’s too primitive, it’s incomplete. Yes, Go is an operating system, almost. Almost, but not quite. Half an operating system. As an operating system it lacks things like memory isolation, process identifiers, and some kind of a distributed existence. Introspection exists somewhat, but it’s very weak. Let me explain.

              Go presents the programmer with abstractions traditionally presented by operating systems. Take concurrency, for example. Go gives you goroutines, but takes away threads, and takes away half of processes (you can fork+exec, but not fork). Go gives you the net package instead of the socket interface (the latter is not taken away, but it’s really not supposed to be used by the average program). Go gives you net/http, instead of leaving you searching for nginx, or whatever. Life is good when you use pure Go packages and bad when you use cgo.

              The idea is that Go not only has these rich features, but that when you are programming in Go, you don’t have to care about all the OS-level stuff underneath. Go is providing (almost) all abstractions. Go programming is (almost) the same on Windows, OpenBSD and Plan 9. That is why Go programs are generally portable.

              I love this. As a Plan 9 person, you might imagine my constant annoyance with Unix. Go isolates me from that, mostly, and it is great, it’s fantastic.

              But it doesn’t go deep enough.

              A single Go program instance is one operating system running some number of processes (goroutines), but two Go program instances are two operating systems, instead of one distributed operating system, and in my mind that is one too many operating systems.

              “Deploying” a goroutine is one go statement away, but deploying a Go program still requires init scripts, systemds, sshs, puppets, clouds, etc. Deploying a Go program is almost the same as deploying C, or PHP, or whatever. It’s out of scope for the Go operating system. Of course that’s a totally sensible option, it just doesn’t align with what I need.

              My understanding about Erlang (which I know little of, so forgive me if I misrepresent it) is that once you have an Erlang node running, starting a remote Erlang process is almost as easy as starting a local Erlang process. I like that. I don’t have to fuck with kubernetes or ansible; it’s just a single, uniform, virtual operating system.

              Goroutines inside a single process have very rich communication methods, Go channels, even mutexes if you desire them. But goroutines in different processes are handicapped. You have to think about how to marshal data and RPC protocols. The difficulty of getting two goroutines in different processes to talk to each other is about the same as getting some C, or Python code, to talk to Go. Since I only want Go to talk to Go, I don’t think that’s right. It should be easier, and it should feel native. Again, I think Erlang does better here.

              Goroutines have no process ids. This makes total sense if you restrict yourself to a single-process universe, but since I want a multi-process universe, and I want to avoid thinking about systemds and dockers, I want to supervise goroutines from Go. Which means goroutines should have process ids, and I should be able to kill and prioritize them. Erlang does this, of course.

              What I just described in the last two paragraphs would preclude shared memory. I’m willing to live with that in order to get network transparency.

              Go programs have ways to debug and profile themselves. Stack traces are one function call away, and there’s an easy-to-use profiler. But this is not enough. Sometimes you need a debugger. Debugging Go programs is an exercise in frustration. It’s much more difficult than debugging C programs.

              I am probably one of the very few people on planet Earth that knows how to profile/debug Go programs with a grown-up tool like DTrace or perf. And that’s because I know assembly programming and the Go runtime very well. This is unacceptable. Some people would hope that something would happen to Go so that it works better with these tools, but frankly, I love the “I am an operating system” aspect of Go, so I would want to use something Go-native. But I want something good.

              This post is getting too long, so I will stop now. Notice I didn’t feel a need for generics in these 9 years. I must also stress that I am a low-level programmer. I like working in the kernel. I like C and imperative programming. I am not one of those guys that prefers high-level languages (that do not have shared memory) and so naturally wants Go to be the same. On the contrary. I found out what I want only through a decade of Go experience. I have never used a language without shared memory before.

              I think Go is the best language for writing command-line applications. Shared memory is very useful in that case, and the flat, invisible goroutines prevent language abuse and “just work”. Lack of debugger, etc, are not important for command-line applications, and command-line applications are run locally, so you don’t need dockers and chefs. But when it comes to distributed systems, I think we could do better.

              In case it’s not clear, I wouldn’t want to change Go, I just want a different language for distributed systems.

              1. 11

                I’ve done some limited erlang programming and it is very much a distributed OS, to the point where you are writing a system more than a program. You even start third-party code as “applications” from the erlang shell before you can make calls to them. erlang’s fail-fast error handling, with supervisors left to deal with problems, is also really fun to use.

                I haven’t used dtrace much either, but I have seen the power, something like that on running go systems would also be neat.

                1. 5

                  Another thing that was interesting about erlang is how the standard library heavily revolves around timers and state machines, because anything could fail at any point. For example, gen_server:call() (the way to call another process implementing the generic server interface) by default has a 5-second timeout that will crash your process.

                2.  

                  Yes, Go is an operating system, almost. Almost, but not quite. Half and operating system. As an operating system it lacks things like memory isolation, process identifiers, and some kind of a distributed existence.

                  This flipped a bit in my head:

                  Go is CMS, the underlying operating system is VM. That is, Go is an API and a userspace, but doesn’t provide any security or way to access the outside world in and of itself. VM, the hypervisor, does that, and, historically, two different guests on the same hypervisor had to jump through some hoops to talk to each other. In IBM-land, there were virtual cardpunches and virtual cardreaders; these days, we have virtual Ethernet.

                  So we could, and perhaps should, have a language and corresponding ecosystem which takes that idea as far as we can, implementation up, and maybe it would look more like Erlang than Go; the point is, it would be focused on the problem of building distributed systems which compile to hypervisor guests with a virtual LAN. Ideally, we’d be able to abstract away the difference between “hypervisor guest” and “separate hardware” and “virtual LAN” and “real LAN” by making programs as insensitive as possible to timing variation.

                3. 17

                  How can vgo - announced just two days ago - already be the zeitgeist answer for “all of go’s dependency issues are finally solved forever”?

                  govendor, dep, glide - there’s been many efforts and people still create their own bespoke tools to deal with GOPATH, relative imports being broken by forks, and other annoying problems. Go has dependency management problems.

                  1. 2

                    We will see how it pans out.

                  2. 14

                    Go has the best backwards compatibility I have seen (I’m pretty sure code from go version 1.0 still works today.).

                    A half-decade of code compatibility hardly seems remarkable. I can still compile C code written in the ’80s.

                    1. 10

                      I have compiled Fortran code from the mid-1970s without changing a line.

                      1. 1

                        Can you compile Netscape Communicator from 1998 on a modern Linux system without major hurdles?

                        1. 12

                          You do understand that the major hurdles here are related to the external libraries and processes it interacts with, and that Go does not save you from such hurdles either (other than recommending that you vendor compatible version where possible), I hope.

                        2. 1

                          A valid point not counting cross platform portability and system facilities. Go has a good track record and trajectory but you may be right.

                        3. 5

                          Perfect list (the good things, and the missing things).

                          1. 2

                            The fixes the go team have finally made to GOROOT and GOPATH are great. I’m glad they finally saw the light.

                            But PWD is not a “research concern” that they were putting off in favor of engineering. The go team actively digs its heels in on any choice or concept they don’t publish first, and it’s why, in spite of simple engineering (checking PWD and install location first), they argued for years on mailing lists that environment variables (which Rob Pike supposedly hates, right?) are superior to simple heuristics.

                            Your “good things go has” list is also very opinionated (code manipulation tools better than C# or Java? Distribution of binaries… do you just mean static binaries?? Backwards compatibility that requires recompilation???), but I definitely accept that’s your experience, and evidence I have to the contrary would be based on my experiences.

                            1. 4

                              The fixes the go team have finally made to GOROOT and GOPATH are great.

                              You haven’t had to set, and shouldn’t have set, GOROOT since Go 1.0, released six years ago.

                              (which rob pike supposedly hates, right?)

                              Where did you get that idea?

                              1. 4

                                Yes, you do have to set GOROOT if you use a go command that is installed in a location different from what it was compiled for, which is dumb considering the go command could just find out where it exists and work from there. See https://golang.org/doc/go1.9#goroot for the new changes that are sane.

                                And I got that idea from.. rob pike. Plan9 invented a whole new form of mounts just to avoid having a PATH variable.

                                1. 4

                                  you did have to set GOROOT if you used a go command that was installed in a location different from what it was compiled for

                                  So don’t do that… But yes, that’s also not required anymore. Also, if you move /usr/include you will find out that gcc won’t find include files anymore… unless you set $CPPFLAGS. Go was hardly unique. Somehow people didn’t think about moving /usr/include, but they did think about moving the Go toolchain.

                                  Plan9 invented a whole new form of mounts just to avoid having a PATH variable.

                                  No, Plan 9 invented new form of mounts in order to implement a particular kind of distributed computing. One consequence of that is that $path is not needed in rc(1) anymore, though it is still there if you want to use it.

                                  In Plan 9 environment variables play a crucial role, for example $objtype selects the toolchain to use and $cputype selects which binaries to run.

                                  Claiming that Rob Pike doesn’t like environment variables is asinine.

                                  1. 18

                                    “So don’t do that…” is the best summary of what I dislike about Go.

                                    Ok, apologies for being asinine.

                                  2. 1

                                    I always compiled my own Go toolchain because it takes about 10 seconds on my PC and is two commands (cd src && ./make.bash). Then I could put it wherever I want. I have never used GOROOT in many years of using Go.

                                2. 0

                                  C# and Java certainly have great tools, and C++ has some OK tools, all in the context of bloated IDEs that I dislike using (remind me again why compiling C++ code can crash my text editor?). But I will concede the point that perhaps C# refactoring tools are on par.

                                  I was never of the opinion GOPATH was objectively bad, it has some good properties and bad ones.

                                  Distribution of binaries… do you just mean static binaries? Backwards compatibility that requires recompilation???

                                  Dynamic libraries have only ever caused me problems. I use operating systems that I compile from source so don’t really see any benefit from them.

                              1. 1

                                I’m severely disappointed that npm developers think that if I run something as sudo, I obviously want my file permissions set to the invoking user instead.

                                It’s frustrating, really.

                                When I invoke something with sudo, it should be assumed I have a damn good reason to invoke with sudo. If your tool doesn’t like root then abort if it runs as uid=0, otherwise nobody should try being smart about sudo.

                                Being smart about sudo is how you get stuff like this. Don’t try to outsmart sudo. Either it works with sudo as sudo is intended to work or you print an error and abort. (Like my GUI text editor, which aborts with an error message that I shouldn’t run it as root)

                                Additionally and on top of this, the functions in question are barely tested (read: not at all) and the released version number did not properly indicate this was a pre-release, leading to places installing it in production.

                                Pip and NPM are the only package managers I ever had problems with. Pip is bearable with virtualenv and various other hacks. I’ll go back to using apt and pacman and yum. At least they won’t chown me.

                                1. 2

                                  Pip and NPM are the only package managers I ever had problems with

                                  I take it you never used easy_install or zc.buildout then?

                                  1. 1

                                    I agree that NPM is a joke, but this happens all throughout the stack: https://github.com/systemd/systemd/issues/2402#issuecomment-174565563

                                    1. 7

                                      It happens at every level of the stack, but it doesn’t happen to everything.

                                      1. 1

                                        It wouldn’t even be my hardest complaint; it was more annoying that systemctl tries to check if sudo is being invoked by a service (at least as far as I can tell) and prevents it from escalating to root.

                                        Understandable but if I put a service’s user in /etc/sudoers to run some systemctl as root I think I have a reason for doing so.

                                        The entire stack is fucked tbh, though the higher level it is, the more likely it is that some normal office user runs into these problems. NPM sees far more usage than rm -rf /.

                                    1. 30

                                      In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.

                                      There’s another Lobsters thread right now about how distributions like Debian are obsolete, the idea being that people use stuff like npm now, instead of apt, because apt can’t keep up with modern software development.

                                      Kubernetes’ official installer is some curl | sudo bash thing instead of providing any kind of package.

                                      In the meantime I will keep using only FreeBSD/OpenBSD/RHEL packages and avoid all these nightmares. Sometimes the old ways are the right ways.

                                      1. 7

                                        “In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.”

                                        I think this misses the point. The relevant claim was that npm has a good general approach to packaging, not that npm is perfectly written. You can be solving the right problem, but writing terribly buggy code, and you can write bulletproof code that solves the wrong problem.

                                        1.  

                                          npm has a good general approach to packaging

                                          The thing is, their general approach isn’t good.

                                          They only relatively recently decided locking down versions is the Correct Thing to Do. They then screwed this up more than once.

                                          They only relatively recently decided that having a flattened module structure was a good idea (because presumably they never tested in production settings on Windows!).

                                          They decided that letting people do weird things with their package registry is the Correct Thing to Do.

                                          They took on VC funding without actually having a clear business plan (which is probably going to end in tears later, for the whole node community).

                                          On and on and on…

                                          1. 2

                                            Go and the soon-to-be-official dep dependency management tool manage dependencies just fine.

                                            The Go language has several compilers available. Traditional Linux distro packages together with gcc-go is also an acceptable solution.

                                            1. 4

                                              It seems the soon-to-be-official dep tool is going to be replaced by another approach (currently named vgo).

                                            2. 1

                                              I believe there’s a high correlation between the quality of the software and the quality of the solution. Others might disagree, but that’s been pretty accurate in my experience. I can’t say why, but I suspect it has to do with the same level of care put into both the implementation and in understanding the problem in the first place. I cannot prove any of this, this is just my heuristic.

                                              1. 8

                                                You’re not even responding to their argument.

                                                1. 2

                                                  There’s npm registry/ecosystem and then there’s the npm cli tool. The npm registry/ecosystem can be used with other clients than the npm cli client and when discussing npm in general people usually refer to the ecosystem rather than the specific implementation of the npm cli client.

                                                  I think npm is good but I’m also skeptical about the npm cli tool. One doesn’t exclude the other. Good thing there’s yarn.

                                                  1. 1

                                                    I think you’re probably right that there is a correlation. But it would have to be an extremely strong correlation to justify what you’re saying.

                                                    In addition, NPM isn’t the only package manager built on similar principles. Cargo takes heavy inspiration from NPM, and I haven’t heard about it having a history of show-stopping bugs. Perhaps I’ve missed the news.

                                                2. 8

                                                  The thing to keep in mind is that all of these were (hopefully) done with best intentions. Pretty much all of these had a specific use case… there’s outrage, sure… but they all seem to have a reason for their trade offs.

                                                  • People are angry about a proposed go package manager because it throws out a ton of the work that’s been done by the community over the past year… even though it’s fairly well thought out and aims to solve a lot of problems. It’s no secret that package management in go is lacking at best.
                                                  • Distributions like Debian are outdated, at least for software dev, but their advantage is that they generally provide a rock solid base to build off of. I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.
                                                  • While I don’t trust curl | sh it is convenient… and it’s hard to argue that point. Providing packages should be better, but then you have to deal with bug reports where people didn’t install the package repositories correctly… and differences in builds between distros… and… and…

                                                  It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place… there are plenty of good, solid options for development and we’re moving (however slowly) towards safer, more efficient build/dev environments.

                                                  But maybe I’m just telling myself all this so I don’t go crazy… jury’s still out on that.

                                                  1. 4

                                                    Distributions like Debian are outdated, at least for software dev,

                                                    That is the sentiment that seems to drive the programming-language-specific package managers. I think what is driving this is that software often has way too many unnecessary dependencies, which makes setting up the environment to build the software hard or time-consuming.

                                                    I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                                    Often it is possible to install libraries at another location and redirect your software to use that though.

                                                    It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place…

                                                    I’m not so sure. I foresee an environment where actually building software is a lost art, where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                                                    I’m growing more disillusioned the more I read Hacker News and lobste.rs… Help me be happy. :)

                                                    1.  

                                                      So like Squeak/Smalltalk images then? What’s old is new again, I suppose.

                                                      http://squeak.org

                                                      1. 1

                                                        I’m not so sure. I foresee an environment where actually building software is a lost art, where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                                                        You could say the same thing about Docker. I think package managers and tools like Docker are a net win for the community. They make it faster for experienced practitioners to setup environments and they make it easier for inexperienced ones as well. Sure, there is a lot you’ve gotta learn to use either responsibly. But I remember having to build redis every time I needed it because it wasn’t in ubuntu’s official package manager when I started using it. And while I certainly appreciate that experience, I love that I can just install it with apt now.

                                                      2. 2

                                                        I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                                        Speaking of Python specifically, it’s not a big problem there because everyone is expected to work within virtual environments and nobody runs pip install with sudo. And when libraries require building something binary, people do rely on system-provided stable toolchains (compilers and -dev packages for C libraries). And it all kinda works :-)
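For readers unfamiliar with the workflow being described, here is a minimal sketch (directory and package names are arbitrary; assumes a Python 3 with the stdlib `venv` module):

```shell
# Create an isolated environment under ./venv; everything installed
# through its own pip lands inside ./venv, never system-wide.
python3 -m venv venv

# This pip writes only into ./venv, so "sudo pip install" is never needed:
./venv/bin/pip --version

# e.g. ./venv/bin/pip install requests  (installs into ./venv/lib/...)
```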

                                                        1. 4

                                                          I think virtual environments are a best practice that unfortunately isn’t followed everywhere. You definitely shouldn’t run pip install with sudo, but I know of a number of companies where part of their deployment is to build a VM image and sudo pip install the dependencies. However, it’s the same thing with npm. In theory you should just run as a normal user and have everything installed to node_modules, but this clearly isn’t the case, as shown by this issue.

                                                          1. 5

                                                            nobody runs pip install with sudo

                                                            I’m pretty sure there are quite a few devs doing just that.

                                                            1. 2

                                                              Sure, I didn’t count :-) The important point is they have a viable option not to.

                                                            2.  

                                                              npm works locally by default, without even doing anything to make a virtual environment. Bundler, Cargo, Stack etc. are similar.

                                                              People just do sudo because Reasons™ :(

                                                          2. 4

                                                            It’s worth noting that many of the “curl | bash” installers actually add a package repository and then install the software package. They contain some glue code like automatic OS/distribution detection.
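The detection glue usually amounts to just a few lines; a hypothetical sketch (the echo lines stand in for the real repo-setup commands, which vary per vendor):

```shell
# Sketch of the OS-detection glue in such installers. Real scripts then
# add a signed package repository and install via the native package manager.
. /etc/os-release   # standard on modern distros; defines $ID, $VERSION_ID

case "$ID" in
  debian|ubuntu)      echo "would add an apt source for $ID $VERSION_ID" ;;
  fedora|centos|rhel) echo "would add a dnf/yum repo for $ID $VERSION_ID" ;;
  *)                  echo "unsupported distribution: $ID" ;;
esac
```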

                                                            1. 2

                                                              I’d never known true pain in software development until I tried to make my own .debs and .rpms. Consider that some of these newer packaging systems might have been built because Linux packaging is an ongoing tire fire.

                                                              1. 3

                                                                With fpm (https://github.com/jordansissel/fpm) it’s not that hard. But yes, using the Debian- or Red Hat-blessed way to package stuff and getting packages into the official repos is definitely painful.

                                                                1. 1

                                                                  I used the Gradle plugins with success in the past, but yeah, writing spec files by hand is something else. I am surprised nobody has invented a more user-friendly DSL for that yet.

                                                                  1. 1

                                                                    A lot of the difficulty with Debian packages comes from policy. For your own packages (not targeted to be uploaded into Debian), it’s far easier to build packages if you don’t follow the rules. I won’t pretend this is as easy as with fpm, but you get some bonuses from it (building in a clean chroot, automatic dependencies, service management like the other packages). I describe this in more detail here: https://vincent.bernat.im/en/blog/2016-pragmatic-debian-packaging

                                                                  2. 2

                                                                    It sucks that you come away from this thinking that all of these alternatives don’t provide benefits.

                                                                    I know there’s a huge part of the community that just wants things to work. You don’t write npm for fun, you end up writing stuff like it because you can’t get current tools to work with your workflow.

                                                                    I totally agree that there’s a lot of messiness in this newer stuff that people in older structures handle well. So… we can knowledge-share and actually make tools on both ends of the spectrum better! Nothing about Kubernetes requires a curl’d installer, after all.

                                                                  1. 2

                                                                    I love GOPATH — I actually adopted the idea for everything, my GOPATH is ~/src and that’s where I store all software repos. I use ghq to clone non-Go repos into the right directory. I even set up vim-plug to use these paths.

                                                                    I like the cross-compilation story (made possible by not using libc at all). (Fun fact, Go currently supports FreeBSD ARMv6/7 but not AArch64. Rust supports FreeBSD AArch64 but not ARMv6/7. )

                                                                    I dislike the hostility to developer ergonomics. if err != nil is shameful.

                                                                    I hate the Plan9-based Go assembler. It’s objectively a horrible disaster. That’s one thing AT&T and Intel syntax fans would agree on ;)

                                                                    Look at the hacks people do to make their C/asm functions work without cgo overhead. I failed and used cgo. Sounds like the deepest internals of Go are based on Plan 9 fanboyism rather than solid pragmatic engineering…

                                                                    1. 2

                                                                      I love GOPATH — I actually adopted the idea for everything, my GOPATH is ~/src and that’s where I store all software repos.

                                                                      Can’t agree enough.

                                                                      I like the cross-compilation story

                                                                      I hate the Plan9-based Go assembler.

                                                                      These things don’t go together. It might be difficult to see at first, but one reason why Go succeeds at the former is because of the latter.

                                                                      made possible by not using libc at all

                                                                      That’s not it; using libc has nothing to do with it. It’s the same even if the port uses libc: look at the Windows or Solaris ports (I wrote the Solaris one). That port uses libc, but without the user-visible part of cgo, and cross-compilation works just fine. This is made possible only by the custom Plan 9-inspired toolchain.

                                                                      1. 4

                                                                        The Go toolchain is not unique in that. LLVM natively cross-compiles to anything too. LLD cross-links.

                                                                        I just tested this. Built a 32-bit Linux binary on 64-bit FreeBSD and ran it (the file is this code):

                                                                        $ cc -fuse-ld=lld -nostdlib -target i386-unknown-linux-none -o nostd nostd.c
                                                                        $ brandelf -t Linux nostd
                                                                        $ ./nostd
                                                                        

                                                                        Not using libc has everything to do with not needing to download the target’s native libraries to link to them!

                                                                        1. 2

                                                                          Not using libc has everything to do with not needing to download the target’s native libraries to link to them!

                                                                          As I said… You can cross-compile Go binaries to Solaris, and get binaries which use libc.so (unlike your example), and most certainly it does not need to download Solaris libc.so libraries to do it. Same with Windows. This is something you can’t do with any other toolchain.

                                                                    1. 7

                                                                      with this extra $50 million they’ll surely have the resources to support federating their servers. let’s see it moxie.

                                                                      1. 3

                                                                        I hope so too, but Moxie Marlinspike has voiced some quite fundamental concerns about federation before: https://signal.org/blog/the-ecosystem-is-moving/

                                                                        1. 5

                                                                          That was the day I realised I had to boycott Signal too

                                                                          1. 2

                                                                            Generally I don’t like “me too” posts, but in this case, me too! This is unacceptable. Using phone numbers as sole user IDs is also unacceptable in my book.

                                                                            1. 1

                                                                              What is wrong with using phone numbers as ids? Was it wrong 50 years ago?

                                                                              1. 4

                                                                                First, I don’t want to give my phone number to strangers. I am okay with giving my e-mail address (or some other kind of token) to strangers.

                                                                                Second, at least for myself, e-mail addresses are eternal, while phone numbers are very ephemeral. Especially if you travel or move a lot.

                                                                                Third, Signal doesn’t just depend on your phone number, it somehow depends on your SIM card (not sure of tech details). You can’t change your SIM card and continue to use Signal smoothly. For me this is a blocker. It means I can’t use Signal even for testing purposes, as I switch SIM cards often.

                                                                                Apple iMessage gets this right. You can have any number of ids, including phone numbers or e-mails. I am identified by either one of those. I can be contacted by people who have either in their address book. And I can switch my SIM card any time I want. Of course, iMessage is not equivalent to Signal, nor is iMessage a good example to follow apart from the UX.

                                                                                Also I must add a fourth point about Signal. Until relatively recently there was no way to use it on a real computer. Now there’s an Electron application, which to me still means there is no way to use it on a real computer. I do not know if 3rd parties can implement real native desktop applications or not, but there are no such applications today.

                                                                                1. 3

                                                                                  Third, Signal doesn’t just depend on your phone number, it somehow depends on your SIM card (not sure of tech details). You can’t change your SIM card and continue to use Signal smoothly. For me this is a blocker. It means I can’t use Signal even for testing purposes, as I switch SIM cards often.

                                                                                  I have a burner phone that was initially set up with a throw-away prepaid SIM. After doing the initial setup (including with Signal), I threw away the SIM and put the phone in airplane mode. The phone now sits behind a fully Tor-ified wireless network. Signal’s still working fine.

                                                                                  Maybe if I were to put in a new SIM card, Signal might go crazy.

                                                                                  And since this is a burner phone that sits behind Tor with a number that’s meant to be public, here it is: +1 443-546-8752. :)

                                                                                  1. 2

                                                                                    I have a burner phone

                                                                                    You can’t legally acquire a pre-paid SIM in the European Union without registering it against your ID. They did it to ‘thwart terrorism’.

                                                                                    1. 1

                                                                                      Interesting. They give out pre-paid SIMs as promotions on the street here in Sweden, or at least they used to. Maybe the ID check comes at the first top-up.

                                                                                      1. 1

                                                                                        In the past you were able to obtain them anonymously.

                                                                                        They still give them away like candy but it won’t operate unless you register it by providing your ID at the operator. Though I’m speaking based on Poland - don’t know how other countries regulated this.

                                                                                        1. 1

                                                                                          I see. I don’t know if it’s a specific EU-related law / regulation or whether each country has their own rules.

                                                                                          1. 2

                                                                                            Some EU countries have regulations limiting the possibility to purchase prepaid cards to the stationary shops of telecommunications operators. Such solutions have been adopted i.a. in Germany, United Kingdom, Spain, Bulgaria and Hungary. Obligation to collect data concerning subscribers who use telecommunications services can be found i.a. in the German law.

                                                                                            source: http://krakowexpats.pl/utilities/mandatory-registration-of-prepaid-sim-cards/

                                                                                            Funny, I thought it was an EU-wide law. Regardless, it is still very annoying that Signal has no other means of creating an ID. I don’t really want to give my mobile number to everyone, and there is no way to use Signal anonymously in countries that do regulate SIM registration.

                                                                                      2. 1

                                                                                        What that would do is create a black market for pre-paid SIMs, where you have a single entity registering tons of SIMs and reselling them pre-activated.

                                                                                        1. 2

                                                                                          That is what is happening: on the street, criminals approach drunkards etc. to register a SIM in their name, then resell it or use it themselves.

                                                                                          Point is, for a regular person there is no legal way to obtain an anonymous SIM. Creating a legal entity for registering SIMs is also not possible. This means that Signal can’t be used anonymously if you want to stay on the legal side.

                                                                                          1. 2

                                                                                            Completely agreed. It’s unfortunate to see such silly laws that are so easy to be skirted around. All it does is make people who would otherwise be honest and trustworthy break the law.

                                                                                      3. 1

                                                                                        At least in my country, phone numbers can, and eventually, will be reallocated when not in use for several years. So aren’t you running at a small risk that someone else might register ‘your’ number with Signal in a few years?

                                                                                      4. 1

                                                                                        A me-too-style reply for what @lattera said.

                                                                                        A friend lives abroad and got a local SIM on a visit once. When he went back, he discarded his SIM, unknown to me.

                                                                                        When I heard he might be visiting again, I sent a Signal message asking if this still works. To our surprise, it did.

                                                                                        So this myth needs to be busted.

                                                                                        It may have a bug, though, as I sent a Signal message to another friend and got a reply from a foreign phone number. He told me it’s the number of a SIM he used on a business trip.

                                                                                        That’s a different issue someone else can hunt down, but Signal is more anonymous than e.g. Bitcoin as it stands today.

                                                                                2. 1

                                                                                  this is so dumb

                                                                                  • it’s a messaging app… how much can people’s expectations evolve? how have they evolved since signal’s inception?
                                                                                  • the cost of switching between services is low only for services that already have mass adoption. if moxie started fucking around with the protocol and people weren’t having it, network effects mean there would be no alternative (whatsapp and facebook messenger are not alternatives)
                                                                              1. 11

                                                                                Awk and the various members of the Unix shell family?

                                                                                1. 6

                                                                                  Yup, I’ve looked at 5-10 shell implementations, and all of them are essentially AST interpreters + on-the-fly parsing/evaluating.

                                                                                  Just like Ruby was an AST interpreter that later became a bytecode interpreter, R made the same jump recently:

                                                                                  A byte code compiler for R

                                                                                  https://scholar.google.com/scholar?cluster=1975856130321003102&hl=en&as_sdt=0,5&sciodt=0,5

                                                                                  1. 1

                                                                                    Do you happen to know how recently they’ve switched? (And an example of a large project written purely (or almost purely) in that language?) Thanks.

                                                                                    1. 5

                                                                                      Ruby switched in 2007, i used it for a few years before that and the speedups were quite dramatic. See https://en.wikipedia.org/wiki/YARV for a bit more info.

                                                                                      1.  

                                                                                        Cool! That links to some benchmarks which saves me from trying to do them.

                                                                                        Looks like it’s only about 4x on average, which is hopeful (for having no bytecode).

                                                                                      2. 3

                                                                                        It looks like it was between R releases 2.13 and 2.14, i.e. around 2011. R is from the early/mid ’90s, similar to Ruby, and I think they both switched to bytecode compilers at around the same time.

                                                                                        https://csgillespie.github.io/efficientR/7-4-the-byte-compiler.html

                                                                                        https://cran.r-project.org/src/base/R-2/

                                                                                        R is for data analysis, so it’s hard to say what a “large project” is. It’s also a wrapper around Fortran numeric libraries in the same way Python is a wrapper around C (or NumPy is a wrapper around Fortran.)

                                                                                        There are thousands of libraries written in R:

                                                                                        https://cran.r-project.org/web/packages/available_packages_by_date.html

                                                                                        1. 1

                                                                                          Thanks! This actually looks like a pretty good example. That website even lists package dependencies so I can try to find a long chain of pre-2011 dependencies.

                                                                                    2. 4

                                                                                      Unix shell, for sure. GNU awk and mawk compile to bytecode. Not sure about nawk, though the regexp implementation in nawk uses bytecode.

                                                                                      1. 2

                                                                                        GNU awk and mawk compile to bytecode

                                                                                        Really? That seems weird, since gawk is slower than both nawk and mawk. Or is it overall slower because of the features the GNU project added?

                                                                                        1. 6

                                                                                          One big difference is that gawk operates on sequences of Unicode characters, while mawk and nawk operate on sequences of bytes. If your text is all ASCII, setting your locale to ‘C’ will cause gawk to also operate on bytes, which should close at least some of the performance gap. But gawk’s Unicode support can be nice if you have UTF-8. mawk and nawk will often still produce the right result on UTF-8 input, especially when non-ASCII characters are only passed through unmodified. But sometimes you get:

                                                                                          $ echo $LANG
                                                                                          en_GB.UTF-8
                                                                                          $ echo "ÜNICÖDE" | gawk '{print tolower($0)}'
                                                                                          ünicöde
                                                                                          $ echo "ÜNICÖDE" | nawk '{print tolower($0)}'
                                                                                          �nic�de
                                                                                          $ echo "ÜNICÖDE" | mawk '{print tolower($0)}'
                                                                                          �nic�de
                                                                                          
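If your input is ASCII-only, the speed gap can be narrowed by forcing the C locale; a sketch of the trade-off, using whichever awk is installed (with gawk this switches it to byte-wise operation; mawk and nawk are byte-wise regardless, so all three then agree):

```shell
# In the C locale gawk processes bytes, not characters: ASCII letters
# still lowercase, but multibyte UTF-8 sequences pass through untouched,
# so Ü and Ö stay uppercase in a UTF-8 terminal.
echo "ÜNICÖDE" | LC_ALL=C awk '{print tolower($0)}'
```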
                                                                                      2. 1

                                                                                        Thanks for the example. I don’t know how they work internally. Although to me, they are in the write-more-primitives-in-a-faster-language category. The primitives being the external commands called (maybe awk a bit less?).

                                                                                        Are there examples of multi-layered library (or use) for awk and shell?

                                                                                        Edit: typo

                                                                                        1. 2

                                                                                          What does “multi-layered push” mean?

                                                                                          As for awk, when I use it, I never shell out. If I need to shell out from awk, I just write the whole thing in Go or something. So I just use the primitives exposed by awk. Though, before Go, I shelled out of awk more often.

                                                                                          1. 1

                                                                                            multi-layered push

                                                                                            Oops, that’s the wrong words. That’s what I get for alt-tabbing between the comment window and a shell.

                                                                                            I mean something like calling a library which calls another library which calls another library, with all intermediate libraries written in awk (or shell). (Or the same thing with functions, if there are layers one on top of another; by this I mean functions in one layer treat functions in the previous layer as primitives.)

                                                                                            As for awk, when I use it, I never shell out.

                                                                                            That would be a good example then if you also have at least two layers of functions (although more would be nice).

                                                                                      1. 2

                                                                                        Maybe a dumb question, but in semver what is the point of the third digit? A change is either backwards compatible, or it is not. To me that means only the first two digits do anything useful. What am I missing?

                                                                                        It seems like the openbsd libc is versioned as major.minor for the same reason.

                                                                                        1. 9

                                                                                          Minor version is backwards compatible. Patch level is both forwards and backwards compatible.

                                                                                          1. 2

                                                                                            Thanks! I somehow didn’t know this for years until I wrote a blog post airing my ignorance.

                                                                                          2. 1

                                                                                            “PATCH version when you make backwards-compatible bug fixes.” See: https://semver.org

                                                                                            1. 1

                                                                                              I still don’t understand what the purpose of the PATCH version is? If minor versions are backwards compatible, what is the point of adding a third version number?

                                                                                              1. 3

                                                                                                They want a difference between new functionality (that doesn’t break anything) and a bug fix.

                                                                                                I.e., if it were only X.Y, then when you add a new function but don’t break anything, do you change Y or do you change X? If you change X, then you are saying you broke stuff, so clearly changing X for a new feature is a bad idea. So you change Y; but if you look at just the Y change, you don’t know whether it was a bug fix or a new function/feature they added. You have to go read the changelog/release notes, etc. to find out.

                                                                                                with the 3 levels, you know if a new feature was added or if it was only a bug fix.

                                                                                                Strictly speaking, just X.Y would be enough. But the semver people wanted that differentiation: they wanted to be able to tell, by looking only at the version number, whether a new feature was added or not.
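The distinction can be made mechanical; a sketch (the function name is my own) classifying an upgrade purely from the two version numbers, per the semver.org rules:

```shell
# Classify what an upgrade from $1 to $2 may contain, going only by the
# X.Y.Z components (no pre-release/build handling in this sketch).
semver_change() {
    old_major=${1%%.*}; rest=${1#*.}; old_minor=${rest%%.*}
    new_major=${2%%.*}; rest=${2#*.}; new_minor=${rest%%.*}
    if   [ "$new_major" -ne "$old_major" ]; then echo breaking
    elif [ "$new_minor" -ne "$old_minor" ]; then echo feature
    else echo bugfix
    fi
}

semver_change 1.2.0 2.0.0   # breaking: the API may change incompatibly
semver_change 1.2.0 1.3.0   # feature: new API, old code still works
semver_change 1.2.0 1.2.1   # bugfix: no API change at all
```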

                                                                                                1. 1

                                                                                                  To show that there was any change at all.

                                                                                                  Imagine you don’t use sha1’s or git, this would show that there was a new release.

                                                                                                  1. 1

                                                                                                    But why can’t you just increment the minor version in that case? a bug fix is also backwards compatible.

                                                                                                    1. 5

                                                                                                      Imagine you have authored a library, and have released two versions of it, 1.2.0 and 1.3.0. You find out there’s a security vulnerability. What do you do?

                                                                                                      You could release 1.4.0 to fix it. But, maybe you haven’t finished what you planned to be in 1.4.0 yet. Maybe that’s acceptable, maybe not.

                                                                                                      Some users using 1.2.0 may want the security fix, but also do not want to upgrade to 1.3.0 yet for various reasons. Maybe they only upgrade so often. Maybe they have another library that requires 1.2.0 explicitly, through poor constraints or for some other reason.

                                                                                                      In this scenario, releasing a 1.2.1 and a 1.3.1, containing the fixes for each release, is an option.
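
                                                                                                      In git terms, that usually means maintenance branches cut from the old release tags. A throwaway-repo sketch of the layout (branch names and commit messages are made up for illustration):

```shell
# Toy repo reproducing the scenario: the same security fix released as
# both 1.2.1 and 1.3.1, without forcing 1.2.0 users up to the 1.3 line.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name you

git commit -q --allow-empty -m 'release 1.2' && git tag v1.2.0
git commit -q --allow-empty -m 'release 1.3' && git tag v1.3.0

git checkout -q -b fix-1.2 v1.2.0   # branch from the old release tag
git commit -q --allow-empty -m 'security fix' && git tag v1.2.1

git checkout -q -b fix-1.3 v1.3.0
git commit -q --allow-empty -m 'security fix' && git tag v1.3.1

git tag --list 'v1.*'   # v1.2.0 v1.2.1 v1.3.0 v1.3.1
```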

                                                                                                      1. 2

                                                                                                        It sort of makes sense, but if minor versions were truly backwards compatible I can’t see a reason why you would ever want to hold back. Minor and patch seem to me to be the same concept; one just carries a higher risk level.

                                                                                                        1. 4

                                                                                                          Perhaps a better definition is that a library’s minor version changes may expose functionality to end users that you, as the application author, did not intend.

                                                                                                          1. 2

                                                                                                            I think it’s exactly a risk management decision. More change means more risk, even if it was intended to be benign.

                                                                                                            1. 2

                                                                                                              Without the patch version, it is much harder to plan future versions and the features included in them. For example, if I define a milestone saying that 1.4.0 will have new feature X, but I have to put out a bug fix release for 1.3.0, it makes more sense for the bug fix to be 1.3.1 rather than 1.4.0, so I can keep referring to the planned version as 1.4.0 and don’t have to change everything that refers to it.

                                                                                                    2. 1

                                                                                                      I remember seeing a talk by Rich Hickey where he criticized the use of semantic versioning as fundamentally flawed. I don’t remember his exact arguments, but have sem ver proponents grappled effectively with them? Should the Go team be wary of adopting sem ver? Have they considered alternatives?

                                                                                                      1. 3

                                                                                                        I didn’t watch the talk yet, but my understanding of his argument was “never break backwards compatibility.” This is basically the same as new major versions, but instead requiring you to give a new name for a new major version. I don’t inherently disagree, but it doesn’t really seem like some grand deathblow to the idea of semver to me.

                                                                                                        1. 1

                                                                                                          IME, semver itself is fundamentally flawed because humans decide the new version number, and we are bad at it. I don’t know how many times I’ve gotten into a discussion with someone who didn’t want to increase the major because they thought high majors looked bad. Maybe at some point it can be automated, but I’ve had plenty of minor version updates that were not backwards compatible, and the same for patch versions. Or, as has happened to me in Rust multiple times, the minor version of a package is incremented but the new feature depends on a newer version of the compiler, so it is backwards-breaking in terms of compiling. I like the idea of a versioning scheme that lets you tell the chronology of versions, but I’ve found semver to work right up until it doesn’t, and then it’s always a pain. I advocate pinning all deps in a project.
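
                                                                                                          For what it’s worth, the chronology half is at least mechanically checkable: GNU sort’s -V flag orders dotted version strings numerically rather than lexically (a quick sketch):

```shell
# Version-aware ordering: 1.10.0 sorts after 1.9.3, where a plain
# string sort would put it between 1.2.x and 1.9.x.
printf '%s\n' 1.10.0 1.2.1 1.9.3 1.2.0 | sort -V
# 1.2.0
# 1.2.1
# 1.9.3
# 1.10.0
```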

                                                                                                          1. 2

                                                                                                            It’s impossible for computers to automate. For one, semver doesn’t define what “breaking” means. For two, the only way that a computer could fully understand if something is breaking or not would be to encode all behavior in the type system. Most languages aren’t equipped to do that.

                                                                                                            Elm has tools to do at least a minimal kind of check here. Rust has one too, though not as widely used.

                                                                                                            I advocate pinning all deps in a project.

                                                                                                            That’s what lockfiles give you, without the downsides of doing it manually.

                                                                                                  1. 5

                                                                                                    I’m confused: you disagree with tags and hiding threads, and yet you want a tag to do exactly this.

                                                                                                    And to voice this opinion you start the exact kind of thread that you don’t want to see.

                                                                                                    Surely you understand that other people feel this way about other topics, hence the existence of other tag suggestion threads.

                                                                                                    With this being said I think adding the tag is a good idea, I just find your position contradictory.

                                                                                                    1. 0

                                                                                                      Yes!

                                                                                                    1. 8

                                                                                                      This advice should be expanded. Do not under any circumstance use any kind of 3rd party VPN at all.

                                                                                                      1. 4

                                                                                                        VPNs are this decade’s antivirus.

                                                                                                        1. 2

                                                                                                          This advice is hyperbolic… there are tons of valid uses for a 3rd party VPN. For example, I use a 3rd party VPN to torrent over networks that punish me for doing so (LTE, university WiFi).

                                                                                                          1. 1

                                                                                                            OK… this is not very helpful advice. But, if you have something constructive to say on the subject, I’d like to hear it!

                                                                                                            I don’t suppose you’re saying that you should just trust your ISP.

                                                                                                            Perhaps you’re saying that you should set up and maintain your own VPN? Do you have any helpful resources to suggest for those of us who might want to do that? Because I can imagine a few ways to get that wrong too.

                                                                                                            But perhaps more to the point, what do you suggest for less technical people who are concerned about their privacy, or those who don’t want to maintain that much infrastructure?

                                                                                                            1. 3

                                                                                                              I don’t suppose you’re saying that you should just trust your ISP.

                                                                                                              If you’re using a VPN service, that’s exactly what you’re doing - just trusting the VPN operator.

                                                                                                              But perhaps more to the point, what do you suggest for less technical people who are concerned about their privacy, or those who don’t want to maintain that much infrastructure?

                                                                                                              “If you’re telling people not to buy these rocks, what do you suggest for people who are concerned about keeping tigers away and can’t afford fences and guns?”

                                                                                                              If you genuinely need to access the internet without being tracked, you need to put the legwork in and use Tor; this is not something you can afford to trust someone else to do for you (though there are bundled installers etc. that can make it slightly easier).

                                                                                                              1. 2

                                                                                                                Sometimes I trust my VPN operator more than my ISP. Thus using the VPN is nicer

                                                                                                                Example cases:

                                                                                                                • being in China
                                                                                                                • airport wifi
                                                                                                                1. 1

                                                                                                                  Using Tor for many tasks is no harder than a VPN anyway

                                                                                                                2. 3

                                                                                                                  There was another blog post not too long ago about not using VPNs. This article does state the reasons to use a VPN: protecting you from your ISP and protecting your location data.

                                                                                                                  However, a VPN isn’t Tor. The operator can still keep logs on the VPN side and turn them over to police, even in other countries. It has a limited use, and people need to understand what those uses are. Too many people use a VPN without understanding what it does and doesn’t do (similar to the confusion around Private Window browsing: even though there’s a clear wall of text describing the limitations, most people don’t read it).

                                                                                                                3. 1

                                                                                                                  I’d argue that pretty much anyone who reads this site has the wherewithal to set up their own VPN. Check out Streisand or Algo.

                                                                                                                1. 25

                                                                                                                  I think ads are the worst way to support any organization, even one I would rate as highly as Mozilla. People however are reluctant to do so otherwise, so we get to suffer all the negative sides of ads.

                                                                                                                  I just donated to Mozilla with https://donate.mozilla.org, please consider doing the same if you think ads/sponsored stories are the wrong path for Firefox.

                                                                                                                  1. 14

                                                                                                                    Mozilla has more than enough money to accomplish their core task. I think it’s the same problem as with Wikimedia; if you give them more money, they’re just going to find increasingly irrelevant things to spend it on. Both organizations could benefit tremendously from a huge reduction in bureaucracy, not just more money.

                                                                                                                    1. 9

                                                                                                                      I’ve definitely seen this with Wikimedia, as someone who was heavily involved with it in the early years (now I still edit, but have pulled back from meta/organizational involvement). The people running it are reasonably good and I can certainly imagine it having had worse stewardship. They have been careful not to break any of the core things that make it work. But they do, yeah, basically have more money than they know what to do with. Yet there is an organizational impulse to always get more money and launch more initiatives, just because they can (it’s a high-traffic “valuable” internet property).

                                                                                                                      The annual fundraising campaign is even a bit dishonest, strongly implying that they’re raising this money to keep the lights on, when doing that is a small part of the total budget. I think the overall issue is that all these organizations are now run by the same NGO/nonprofit management types who are not that different from the people who work in the C-suites at corporations. Universities are going in this direction too, as faculty senates have been weakened in favor of the same kinds of professional administrators. You can get a better administration or a worse one, but barring some real outliers, like organizations still run by their idiosyncratic founders, you’re getting basically the same class of people in most cases.

                                                                                                                    2. 21

                                                                                                                      So Mozilla does something bad, and as a result I am supposed to give it money?? Sorry, that doesn’t make any sense to me. If they need my money, they should convince me to donate willingly. What you are describing is a form of extortion.

                                                                                                                      I donate every month to various organizations; EFF, ACLU, Wikipedia, OpenBSD, etc. So far Mozilla has never managed to convince me to give them my money. On the contrary, why would I give money to a dysfunctional, bureaucratic organization that doesn’t seem to have a clear and focused agenda?

                                                                                                                      1. 9

                                                                                                                        They may be a dysfunctional bureaucratic organisation without a focused agenda (wouldn’t know as I don’t work for it) which would surely make them less effective, but shouldn’t the question instead be how effective they are? Is what they produce a useful, positive change and can you get that same thing elsewhere more cost-effectively?

                                                                                                                        If I really want to get to a destination, I will take a run-down bus if that is the only transport going there. And if you don’t care about the destination, then transport options don’t matter.

                                                                                                                        1. 17

                                                                                                                          They may be a dysfunctional bureaucratic organisation without a focused agenda (wouldn’t know as I don’t work for it) which would surely make them less effective, but shouldn’t the question instead be how effective they are? Is what they produce a useful, positive change and can you get that same thing elsewhere more cost-effectively?

                                                                                                          I am frequently in touch with Mozilla, and while I sometimes feel like I’m tilting at windmills, other parts of the org are very quick-moving and highly cost-effective. For example, they do a lot of very effective training for community members, like the open leadership training and the Mozilla Tech Speakers. They run MDN, a prime resource for web development and documentation. Mozilla Research has a high reputation.

                                                                                                          Firefox itself is under constant rebuilding and active development. MozFest is one of the best conferences you can go to if you want to talk tech and social subjects.

                                                                                                                          I still find their developer relationship very lacking, which is probably the most visible part to us, but hey, it’s only one aspect.

                                                                                                                          1. 9

                                                                                                                            The fact that Mozilla is going to spend money on community activities and conferences is why I don’t donate to them. The only activity I and 99% of people care about is Firefox. All I want is a good web browser. I don’t really care about the other stuff.

                                                                                                                            Maybe if they focused on what they’re good at, their hundreds of millions of dollars of revenue would be sufficient and they wouldn’t have to start selling “sponsored stories”.

                                                                                                                            1. 18

                                                                                                                              The only activity I and 99% of people care about is Firefox.

                                                                                                                              This is a very easy statement to throw around. It’s very hard to back up.

                                                                                                              Also, what’s the point of having a FOSS organisation if they don’t share their learnings? This whole field is fresh and we have maintainers hurting left and right, but people complain when organisations do more than just code.

                                                                                                                              1. 6

                                                                                                                To have a competitive web browser we can trust, plus exemplary software in a number of categories. Mozilla could’ve been building trustworthy versions of useful products like SpiderOak, VPN services, and so on. Any revenue from business licensing could get them further off ad revenue over time.

                                                                                                                Instead, they waste money on lots of BS. Also, they could do what I say plus community work. It’s not either/or. I support both.

                                                                                                                                1. 8

                                                                                                                  To have a competitive web browser we can trust, plus exemplary software in a number of categories. Mozilla could’ve been building trustworthy versions of useful products like SpiderOak, VPN services, and so on. Any revenue from business licensing could get them further off ad revenue over time.

                                                                                                                  In my opinion, the point of FOSS is sharing, and I’m pretty radical that this involves approaches and practices. I agree that all you write is important; I don’t agree that it should be the sole focus. Also, Mozilla trainings are incredibly good; I have actually at some point suggested that they sell them :D.

                                                                                                                  Instead, they waste money on lots of BS. Also, they could do what I say plus community work. It’s not either/or. I support both.

                                                                                                                                  BS is very much in the eye of the beholder. I also haven’t said that they couldn’t do what you describe.

                                                                                                                                  Also, be aware that they often collaborate with other foundations and bring knowledge and connections into the deal, not everything is funded from the money MozCorp has or from donations.

                                                                                                                                  1. 1

                                                                                                                    “Also, Mozilla trainings are incredibly good; I have actually at some point suggested that they sell them :D.”

                                                                                                                                    Well, there’s a good idea! :)

                                                                                                                                2. 3

                                                                                                                                  That’s a false dichotomy because there are other ways to make money in the software industry that don’t involve selling users to advertisers.

                                                                                                                                  It’s unfortunate, but advertisers have so thoroughly ruined their reputation that I simply will not use ad supported services any more.

                                                                                                                                  I feel like Mozilla is so focused on making money for itself that it’s lost sight of what’s best for their users.

                                                                                                                                  1. 2

                                                                                                                                    That’s a false dichotomy because there are other ways to make money in the software industry that don’t involve selling users to advertisers.

                                                                                                                    Ummm… sorry? The post you are replying to doesn’t speak about money at all, but about what people care about.

                                                                                                                    Yes, advertising and Mozilla is an interesting debate, and it’s also not like Mozilla is only doing advertisement. But flat-out criticism of the kind “Mozilla is making X amount of money” or “Mozilla supports things I don’t like” is not it.

                                                                                                                                  2. 3

                                                                                                                                    This is a very easy statement to throw around. It’s very hard to back up.

                                                                                                                                    Would you care to back up the opposite, that over 1% of mozilla’s userbase supports the random crap Mozilla does? That’s over a million people.

                                                                                                                                    I think my statement is extremely likely a priori.

                                                                                                                                    1. 1

                                                                                                                      I’d venture to guess most of them barely know what Firefox is beyond how they do stuff on the Internet. They want it to load up quickly, let them use their favorite sites, do that quickly, and not toast their computer with malware. If on mobile or a tablet, maybe add not using too much battery. Those probably represent most people on Firefox, along with most of its revenue. Some chunk of them will also want specific plugins to stay on Firefox, but I don’t have data on their ratio.

                                                                                                                                      If my “probably” is correct, then what you say is probably true too.

                                                                                                                                  3. 5

                                                                                                                                    This is a valid point of view, just shedding a bit of light on why Mozilla does all this “other stuff”.

                                                                                                                                    Mozilla’s mission statement is to “fight for the health of the internet”, notably this is not quite the same mission statement as “make Firefox a kickass browser”. Happily, these two missions are extremely closely aligned (thus the substantial investment that went into making Quantum). Firefox provides revenue, buys Mozilla a seat at the standards table, allows Mozilla to weigh in on policy and legislation and has great brand recognition.

                                                                                                                                    But while developing Firefox is hugely beneficial to the health of the web, it isn’t enough. Legislation, proprietary technologies, corporations and entities of all shapes and sizes are fighting to push the web in different directions, some more beneficial to users than others. So Mozilla needs to wield the influence granted to it by Firefox to try and steer the direction of the web to a better place for all of us. That means weighing in on policy, outreach, education, experimentation, and yes, developing technology.

                                                                                                                                    So I get that a lot of people don’t care about Mozilla’s mission statement, and just want a kickass browser. There’s nothing wrong with that. But keep in mind that from Mozilla’s point of view, Firefox is a means to an end, not the end itself.

                                                                                                                                    1. 1

                                                                                                                                      I don’t think Mozilla does a good job at any of that other stuff. The only thing they really seem able to do well (until some clueless PR or marketing exec fucks it up) is browser tech. I donate to the EFF because they actually seem able to effect the goals you stated and don’t get distracted with random things they don’t know how to do.

                                                                                                                              2. 3

                                                                                                                                What if, and bear with me here, what they did ISN’T bad? What if instead they are actually making a choice that will make Firefox more attractive to new users?

                                                                                                                              3. 9

                                                                                                                The upside is that at least Mozilla is trying to make privacy-respecting ads instead of simply opening up the flood gates.

                                                                                                                                1. 2

                                                                                                                                  For now…

                                                                                                                              1. 6

                                                                                                                                Anyone know of cloud providers (either virtualized or real hardware) that either offer OpenBSD, or allow you to install OpenBSD easily and without hacks?

                                                                                                                                I only know of prgmr.com, RootBSD and ARP Networks. I am interested in companies offering real professional support running on server grade hardware (ECC, Xeon, etc) with proper redundant networking, etc, so amateur (but cheap) stuff like Hetzner doesn’t count.

                                                                                                                                Somewhat tangential, but I am also interested in European companies. I only know of CloudSigma, Tilaa, Exoscale and cloudscale.ch. Are they any good?

                                                                                                                                EDIS and ITL seem to be Russian companies or shells operating in European locations, not interested in those.

                                                                                                                                Many thanks!

                                                                                                                                1. 5

                                                                                                                                  https://www.vultr.com/servers/openbsd

I wouldn’t consider Gilles’ method a hack at this point, now that online.net gives you console access. Like usual, you first have to get the installer onto a disk attached to the machine. Since you can’t walk up to the machine with a USB stick, copying it to the root disk from recovery mode makes the most sense.

                                                                                                                                  1. 2

                                                                                                                                    Thanks, I forgot about vultr.

                                                                                                                                    As for installing, I would vastly prefer PXE boot. It’s not just about getting it installed. It’s about having a supported configuration. I am not interested in running configurations not supported by the provider. What if next year they change the way they boot the machines and you can’t install OpenBSD using the new system anymore? A guarantee for PXE boot ensures forward compatibility.

                                                                                                                                    Or what if some provider that is using virtualization updates their hypervisor which has a new bug that only affects OpenBSD? If the provider does not explicitly support OpenBSD, it’s unlikely they will care enough to roll back the change or fix the bug.

                                                                                                                                    You’re not paying for hardware, as Hetzner showed, hardware is cheap, you’re paying for support and for the network. If they don’t support you, then why pay?

                                                                                                                                    1. 2

                                                                                                                                      Yeah I share your concerns. That’s why I’ve hesitated to pay for hosting and am still running all my stuff at home. It would suck to pay only to hear that I’m on my own if something changes and my system doesn’t work well after that change.

                                                                                                                                      Given how often OpenBSD makes it to the headlines on HN and other tech news outlets, it is really disappointing how few seem to actually care enough to run or support it. It’s also disappointing considering that the user base has a healthy disdain for twisting knobs, and the system itself doesn’t suffer much churn. It should be quite easy to find a stable & supported hardware configuration that just works for all OpenBSD users.

                                                                                                                                      1. 1

                                                                                                                                        It should be quite easy to find a stable & supported hardware configuration that just works for all OpenBSD users.

Boom! There it is. The consumer side picks their own hardware expecting whatever they install to work on it. They pick for a lot of reasons other than compatibility, like appearance. OpenBSD supporting less hardware limits it a lot there.

I’ve always thought an OpenBSD company should form that uses the Apple model: nice hardware with desktop software preloaded, for some market segment that already buys Linux, terminals, or something. Maybe with some must-have software for business that provides some or most of the revenue, so there’s not much dependency on hardware sales.

Any 3rd party providing dediboxes for server-side software should have it easiest, since they can just standardize on some 1U or 2U stuff they know works well with OpenBSD. In theory, at least.

                                                                                                                                  2. 4

                                                                                                                                    https://www.netcup.de/

                                                                                                                                    I run the above setup on a VPS. OpenBSD is not officially supported, but you can upload custom images. Support was very good in the last 3-4 years (didn’t need it recently).

                                                                                                                                    1. 2

                                                                                                                                      Looks nice, especially since they are locals :) Do you mind answering some questions?

                                                                                                                                      • Do they support IPv6 for VPS (/64)?
                                                                                                                                      • Have you tried to restore a snapshot from a VPS?
                                                                                                                                      • Mind sharing a dmesg?
                                                                                                                                      1. 3
                                                                                                                                    2. 2

                                                                                                                                      I have two OpenBSD vservers running at Hetzner https://www.hetzner.com . They provide OpenBSD ISO images and a “virtual KVM console” via HTTP. So installing with softraid (RAID or crypto) is easily possible.

As of a week ago, there is no official vServer product anymore. Nowadays, they call it … wait for it … a cloud server. The control panel looks different, however, I have no clue if something[tm] changed.

                                                                                                                                      Here is a dmesg from one server: http://dmesgd.nycbug.org/index.cgi?do=view&id=3441

                                                                                                                                      1. 2

                                                                                                                                        Joyent started providing a KVM OpenBSD image for Triton last May: https://docs.joyent.com/public-cloud/instances/virtual-machines/images/openbsd

                                                                                                                                        (This has been possible for some time if you had your own Triton cluster, but there was no official way until this was published.)

                                                                                                                                        1. 1

What’s the deal with cloud providers not making OpenBSD available? Is it technically complex to offer, or do they just not have the resources for the support? Maybe just a mention that it’s not supported by their customer service would already help users, no?

                                                                                                                                          1. 11

As far as I know, it’s a mix of things. Few people ask for OpenBSD, so there’s little incentive to offer it. Plus a lot of enterprise software tends to target RHEL and other “enterprise-y” offerings. Even in the open source landscape, things are pretty dire.

                                                                                                                                            OpenBSD also seems to have pretty bad timing issues on qemu/KVM that have fairly deeply rooted causes. Who knows what other horrors lurk in OpenBSD as a guest.

                                                                                                                                            OpenBSD doesn’t get people really excited, either. Many features are security features and that’s always a tough sell. They’d rather see things like ZFS.

For better or for worse, OpenBSD has a very small following. For everybody else, it just seems to be the testing lab where people do interesting things with OS development, such as OpenSSH, LibreSSL, KASLR, KARL, arc4random, pledge, doas, etc., that people then take into OSes that people actually use. Unless some kind of Red Hat of OpenBSD emerges, I don’t see that changing either. Subjectively, it feels very UNIX-y still. You can’t just google issues and be sure people have already seen them before; you’re on your own if things break.

                                                                                                                                            1. 8

                                                                                                                                              Rust’s platform support has OpenBSD/amd64 in tier 3 (“which are not built or tested automatically, and may not work”).

                                                                                                                                              I can talk a little about this point, as a common problem: we could support OpenBSD better if we had more knowledge and more people willing to integrate it well into our CI workflow, make good patches to our libc and so on.

It’s a tough position to be in: on the one hand, we don’t want to be the people who inflict work on OpenBSD. We are in no position to ask. On the other hand, we only have a few people with enough knowledge to make OpenBSD support good. And if we deliver half-arsed support but say we have support, we get the worst of all worlds. So we need people to step up, and not just for a couple of patches.

                                                                                                                                              This problem is a regular companion in the FOSS world, sadly :(.

                                                                                                                                              Also, as noted by mulander: I forgot semarie@ again. Thanks for all the work!

                                                                                                                                              1. 7

                                                                                                                                                semarie@ has been working upstream with rust for ages now… It would be more accurate to say ‘we need more people to step up’.

                                                                                                                                                1. 2

                                                                                                                                                  Right, sorry for that. I’ll change the wording.

                                                                                                                                        1. 19

                                                                                                                                          My favorite example of your point from when I first discovered LISP was CLOS. The typical way to get C programmers to have OOP was to tell them to use horribly-complex C++ or switch to Java/C#. The LISP people just added a library of macros. If they don’t like OOP, don’t use the library. Done. People thought Aspect-Oriented Programming would be cool. Started writing pre-compilers for Java, etc. LISP folks did a library. I like your emphasis on how easy it is to undo such things if it’s just a library versus a language feature. A lot of folks never knew Aspect LISP happened because their language isn’t stuck with it. ;)

                                                                                                                                          1. 14

                                                                                                                                            A lot of folks never knew Aspect LISP happened because their language isn’t stuck with it.

                                                                                                                                            Now that’s what I call a selling point.

                                                                                                                                            1. 7

                                                                                                                                              I sometimes get the feeling that some programming communities are very against the sentiment of extending the language in userspace due to reasons of consistency or too much power. I find such sentiments vaguely authoritarian and off-putting, and don’t buy them at all. Users almost always end up extending the language somehow, whether it is giant frameworks/metaprogramming/code generation.

                                                                                                                                              1. 6

                                                                                                                                                A difference should be made between extending the vocabulary and extending the grammar:

                                                                                                                                                • All programmers are okay with extending the vocabulary.
                                                                                                                                                • But some programmers are reluctant to extend the grammar, not for authoritarian reasons, but because it hinders readability and maintainability.

                                                                                                                                                Using a framework is not extending the language; it’s extending the vocabulary. Using code generation is not extending the language; it’s translating some source language to some target language.

In a natural language like English, you sometimes extend the vocabulary, but you rarely extend the grammar, and this is how we can understand each other. When you read an English sentence that contains an unknown word, you can still parse the sentence because you know the grammar. But if you read an English sentence that uses some kind of “syntactic macro for English”, then it will be very hard to understand what’s going on without learning the macro in the first place.

                                                                                                                                                1. 2

                                                                                                                                                  Users almost always end up extending the language somehow, whether it is giant frameworks/metaprogramming/code generation.

                                                                                                                                                  They might, but that isn’t necessarily a good thing. When I am implementing algorithms (admittedly, “algorithms” aren’t the same thing as “programs”), I find both large and extensible languages to be a distraction: The essence of most algorithms can be expressed using basic data types (integers, sums, products, rarely first-class functions) and control flow constructs (selection, repetition and procedure calls; where by “selection” I mean “pattern matching”, of course). Perhaps the feeling that you need fancy language features (whether built into the language or implemented using metaprogramming) is just a symptom of accidental complexity in either the problem you are solving, or the language you are using, or both.

                                                                                                                                                  1. 2

                                                                                                                                                    Completely agree, you just end up with really baroque methods of metaprogramming if you try to prevent it.

                                                                                                                                                  2. 3

                                                                                                                                                    I think code generators work fine. I don’t really get why people think lisp macros are better than just writing a tool like lex/yacc, they seem strictly less flexible to me.

                                                                                                                                                    1. 12

                                                                                                                                                      Tools like yacc are “applied once”. In Lisp I can write a function that symbolically differentiates another function and produces another function. Then this higher order function can be differentiated again yielding an even higher order function and so on. You can’t do this with yacc.

                                                                                                                                                      In fact symbolic differentiation and other computer algebra things are precisely the reason why lisp was invented. It’s in the original Lisp paper.
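The symbolic-differentiation point can be sketched outside Lisp too. Below is a toy differentiator in Python, with nested tuples standing in for s-expressions ("x" is the variable; ('+', a, b) and ('*', a, b) are the only operators — all names here are invented for illustration). The key property is that the output is again an expression tree, so it can be fed straight back into diff — precisely what an “applied once” tool like yacc can’t do:

```python
def diff(e, var="x"):
    """Symbolically differentiate a tuple-encoded expression w.r.t. var."""
    if isinstance(e, (int, float)):
        return 0                     # d/dx c = 0
    if isinstance(e, str):
        return 1 if e == var else 0  # d/dx x = 1
    op, a, b = e
    if op == "+":                    # sum rule
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                    # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(f"unknown operator: {op}")

def ev(e, x):
    """Evaluate a tuple-encoded expression at x."""
    if isinstance(e, (int, float)):
        return e
    if isinstance(e, str):
        return x
    op, a, b = e
    return ev(a, x) + ev(b, x) if op == "+" else ev(a, x) * ev(b, x)

x_squared = ("*", "x", "x")
print(ev(diff(x_squared), 3))        # d/dx x^2 at x=3 -> 6
print(ev(diff(diff(x_squared)), 3))  # d^2/dx^2 x^2 -> 2
```

In real Lisp the “tuples” are the program itself, which is where homoiconicity comes in; this sketch only mimics that with plain data.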

                                                                                                                                                      In fact homoiconicity is the only good reason in favor of dynamic typing that I ever found. In most dynamically-typed languages I feel like the author simply didn’t know better. In those languages dynamic typing is only a gun to shoot yourself with. Lisp is the only dynamically-typed language that I found where dynamic typing is truly fundamental and seems to be put to good effect.

                                                                                                                                                      1. 2

                                                                                                                                                        Great point about the link between homoiconicity and dynamic typing. Answered a question I was asking myself for a few years.

                                                                                                                                                        1. 1

                                                                                                                                                          Interesting point

                                                                                                                                                        2. 6

Lex/Yacc are mostly awful to work with. They complicate the build (even with native support in Make), suck to maintain, and are painful to debug. LLVM/Clang don’t bother using them, and the code is better for it. (Having debugged that stuff, I’m thankful they didn’t use them.) Maybe you can use lex to generate a state machine for you, or you can just do it manually. It’s a one-time cost without the unholy mess of getting the proper includes.

                                                                                                                                                          If your language is small, then lex/yacc is likely more of a burden than any kind of boon. Just write a recursive descent parser and be done with it. It will likely be fast enough and you can still deal with the oddities, and probably with fewer contortions.
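For a sense of scale, here is roughly what “just write a recursive descent parser” means for a toy grammar of +, * and parentheses — a hedged sketch, not production code; every name below is made up for illustration:

```python
import re

def tokenize(src):
    # numbers and the four punctuation tokens; whitespace is skipped
    return re.findall(r"\d+|[+*()]", src)

class Parser:
    """One method per grammar rule; each method consumes tokens and returns a value."""
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.tokens[self.pos]
        if expected and tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self):          # expr := term ('+' term)*
        value = self.term()
        while self.peek() == "+":
            self.eat("+")
            value += self.term()
        return value

    def term(self):          # term := factor ('*' factor)*
        value = self.factor()
        while self.peek() == "*":
            self.eat("*")
            value *= self.factor()
        return value

    def factor(self):        # factor := NUMBER | '(' expr ')'
        if self.peek() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        return int(self.eat())

print(Parser(tokenize("2+3*(4+1)")).expr())  # 17
```

Precedence falls out of the call structure (term binds tighter than expr), which is the part yacc’s grammar tables would otherwise handle for you.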

                                                                                                                                                          1. 3

                                                                                                                                                            Using macros lets me do straight-forward stuff that cleanly integrates into the language and its tooling. I can also use code generators if they’re better suited for the job. I can also build code generators much more easily with macros in a language that’s already an AST. ;)

There are also highly-optimized implementations, formally-verified subsets, existing libraries in a sane language, and IDEs. The overall deal is much better than yacc, etc. Such benefits are how the Julia folks built a compiler quickly for a powerful, complex language: it was sugar coating over a LISP (femtolisp). An industrial one might have worked even better. sklogic’s toolkit with DSLs was pretty interesting, too.

                                                                                                                                                            1. 1

                                                                                                                                                              They most likely are less flexible. Just as programming languages and functions are strictly less flexible than assembly.

                                                                                                                                                            2. 3

                                                                                                                                                              What always made me smile a little was the fact that Gregor Kiczales, one of the authors of The Art of the Metaobject Protocol (published in 1991, and the best book on OO I’ve ever read), is one of the main contributors to AspectJ.

                                                                                                                                                              1. 2

Oh damn. Didn’t know that. I might need a new example out of respect for his MOP work. Or just keep the irony coming. :)

                                                                                                                                                            1. 16

                                                                                                                                                              Or you could give up and not care.

                                                                                                                                                              $ ls -d ~/* ~/.* | wc -l
                                                                                                                                                              252
                                                                                                                                                              

                                                                                                                                                              And for the neatness masochists:

                                                                                                                                                              $ ls -d ~/*test*  | wc -l
                                                                                                                                                              37
                                                                                                                                                              
                                                                                                                                                              1. 6

                                                                                                                                                                I decided to share a solution because it is working so nicely on my system.

                                                                                                                                                                I’m down to a single dot-file and 26 dot-directories on my system, so it is certainly working.

                                                                                                                                                                1. 6

                                                                                                                                                                  I appreciate you sharing. It’s a good thing to be aware of, and certainly valuable for people who do like to be more organized. But personally I have no gripe with chaos, and prefer systems that are searchable over structured. And I prefer ease of access to the extent that I will symlink dirs I’m using into my home dir, making it even more cluttered. I think you can guess how often I garbage collect these symlinks. ;)

                                                                                                                                                                2. 2

                                                                                                                                                                  I agree, there are much worse things in life to worry about.

                                                                                                                                                                1. 6

                                                                                                                                                                  So, to recap:

                                                                                                                                                                  a) gtk+3 with clang was broken

                                                                                                                                                                  b) ..due to a missing -fvisibility flag that both gcc and clang support

                                                                                                                                                                  c) ..because the autoconf check was written to use a gcc extension that clang doesn’t support (nested functions)

                                                                                                                                                                  Sounds like yet another great case against autoconf to me. In the x86 monoculture of today, making a cross-platform autoconf script seems as error-prone as just writing a makefile directly to be cross-platform. Particularly since you still need to test your autoconf scripts on all your platforms.

                                                                                                                                                                  1. 3

                                                                                                                                                                    The autoconf check was probably unintentionally written in a way that caused the nested function.

                                                                                                                                                                    I’d put the blame on gcc adding a non-standard extension (nested functions in C) and enabling it by default.

                                                                                                                                                                    1. 3

                                                                                                                                                                      Yes, all bugs are unintentional. What’s your point? :) (The article actually confirms your first sentence, so there’s no need for the ‘probably’.)

                                                                                                                                                                      Tools doing non-standard things is precisely the problem autoconf was “designed” for. So your reasoning seems like rationalization.

                                                                                                                                                                      Feel free to blame one turd over another. Other shit can be shit; autoconf is still shit.

                                                                                                                                                                      In fact, OP graphically shows that autoconf was shit even in the 90s when a bazillion flavors of unix made it not utterly irrelevant. It tries to determine features a platform has using the potentially non-standard tools on that platform. Utter circular reasoning with no solid ground anywhere.

                                                                                                                                                                      Summary: autoconf was a crap solution to a poorly demarcated problem that no longer exists.

                                                                                                                                                                      1. 4

                                                                                                                                                                        The problem does exist if you’re not living in a Linux monoculture. If you support *BSD+Darwin+illumos, testing whether a little piece of code compiles to determine a config option is sometimes necessary. (Yeah, you can often just #ifdef for the OS… but that doesn’t scale. Damn, that reminds me of feature detection on the web :D)

                                                                                                                                                                        Thankfully, Meson exists.

                                                                                                                                                                        1. 4

                                                                                                                                                                          You don’t need to test for anything; you need to write your makefile targets assuming specific things are available for specific targets. BSD has kqueue and illumos has event ports, so you don’t need to check for either: you simply assume kqueue is available on BSD and event ports on illumos. Coincidentally, this also allows you to cross-compile code easily, as nobody, and I mean nobody, writes autoconf tests that work correctly when the target system is different from the build system (not to mention the host system).
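The per-target approach described here can be sketched as a Makefile fragment (GNU make conditionals for brevity; file and variable names are illustrative):

```make
# Sketch: no probing. The event backend is a build-time assumption per
# target OS, overridable by the user: make TARGET_OS=illumos
TARGET_OS ?= freebsd

ifeq ($(TARGET_OS),freebsd)
EVENT_SRC = ev_kqueue.c       # the BSDs: assume kqueue
endif
ifeq ($(TARGET_OS),illumos)
EVENT_SRC = ev_eventports.c   # illumos: assume event ports
endif
ifeq ($(TARGET_OS),linux)
EVENT_SRC = ev_epoll.c
endif

myprog: main.c $(EVENT_SRC)
	$(CC) -o $@ main.c $(EVENT_SRC)
```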

                                                                                                                                                                          If you must choose between one specific piece of technology or another that might or might not be there, simply pick the best one as a default. Do not test for it; simply use it and make the build fail if it’s not there. Let the user choose the alternate technology by setting a make variable or whatever; do not test and choose for them.

                                                                                                                                                                          You can even have a configure script for convenience and compatibility with higher-level build automation tools that expect GNU autoconf/automake source code, but the configure script must not do any probing; it must simply set some variables (the defaults I mentioned earlier), or whatever the user chooses. These then remain “baked in”, so the programmer working on the project doesn’t have to remember them each time he types make.

                                                                                                                                                                          But whatever you do, don’t probe. It’s wrong.
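A non-probing configure script of the kind described above might look like this (all names here, such as `EVENT_BACKEND` and `config.mk`, are illustrative):

```shell
# Sketch of a non-probing ./configure body: it records choices, it never
# tests. Defaults below; the user overrides with ./configure VAR=value.
EVENT_BACKEND=kqueue
PREFIX=/usr/local
for arg in "$@"; do
    case $arg in
        EVENT_BACKEND=*|PREFIX=*) eval "$arg" ;;  # e.g. EVENT_BACKEND=event-ports
    esac
done
cat > config.mk <<EOF
EVENT_BACKEND = $EVENT_BACKEND
PREFIX = $PREFIX
EOF
echo "wrote config.mk (EVENT_BACKEND=$EVENT_BACKEND)"
```

The makefile then just includes `config.mk`, and the choices stay baked in until the user reruns configure.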

                                                                                                                                                                    2. 3

                                                                                                                                                                      Also a larger lesson about failure modes. When something breaks, is this obvious to the user? Or does it report success and leave you to detect the damage? A lot can go wrong when you try to do “smart” error recovery.

                                                                                                                                                                      1. 1

                                                                                                                                                                        It’s the same lesson as the fact that automated tests should never just test “does this thing throw an exception when passed invalid input?”. They must always test “does this thing throw the specific expected exception when passed invalid input?”. Otherwise the test might pass for any number of wrong reasons.

                                                                                                                                                                        So too autoconf should be checking whether a compile test failed for reasons specific to the thing under test – not just whether there was a failure of any kind at all. Of course you can only do that in either a boil-the-complexity-ocean way (by parsing the compiler’s error output, which requires support for specific compilers (in specific versions…)) or a boil-the-literal-ocean way (for every test, run two compiler checks: one that tests the null hypothesis, then one that tests the actual hypothesis).
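The second, boil-the-literal-ocean approach can be sketched in shell (a sketch only; `check_compiles`, the `cc` fallback, and the kqueue probe are illustrative):

```shell
# Sketch of the "two checks" idea: first compile a null-hypothesis program;
# only if that succeeds is a feature-test failure meaningful.
check_compiles() {
    printf '%s\n' "$1" > t.c && ${CC:-cc} -c t.c -o t.o 2>/dev/null
}
if ! check_compiles 'int main(void) { return 0; }'; then
    echo "toolchain itself is broken; feature results would be meaningless"
elif check_compiles '#include <sys/event.h>
int main(void) { return kqueue() < 0; }'; then
    echo "have kqueue"
else
    echo "no kqueue"
fi
```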

                                                                                                                                                                      2. 1

                                                                                                                                                                        Mostly, except there is no evidence that the culprit was -fvisibility specifically. All they found was that the configure script detected different flags under GCC than under Clang, even though the compilers’ support did not differ.

                                                                                                                                                                      1. 7

                                                                                                                                                                        This is only somewhat relevant, but I just discovered that newer versions of git now support your XDG CONFIG dir, so you can move your global gitconfig to $XDG_CONFIG_HOME/git/config (usually ~/.config/git/config)
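A quick way to try this out (assuming git ≥ 1.7.12, which added the XDG lookup; a throwaway HOME is used here so nothing real gets touched):

```shell
# Sketch: git reads $XDG_CONFIG_HOME/git/config as part of its global
# configuration, so ~/.gitconfig can simply be moved there.
export HOME=$(mktemp -d)
export XDG_CONFIG_HOME="$HOME/.config"
mkdir -p "$XDG_CONFIG_HOME/git"
printf '[user]\n\tname = Example\n' > "$XDG_CONFIG_HOME/git/config"
git config --global user.name   # read from the XDG path, no ~/.gitconfig needed
```

Note that if a ~/.gitconfig also exists, it takes precedence over the XDG file.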

                                                                                                                                                                        1. 0

                                                                                                                                                                          Sounds awful; why would I make it harder to get to? I hate dotfiles, but I’d rather keep the file visible where it is, or in some easy-to-reach directory like ~/lib, instead of hiding it even more.

                                                                                                                                                                          1. 5

                                                                                                                                                                            But if everyone hides their stuff in the same pattern, then config is easy to find.

                                                                                                                                                                            1. 2

                                                                                                                                                                              Then you can change XDG_CONFIG_HOME to point to ~/lib

                                                                                                                                                                              1. 0

                                                                                                                                                                                Oh yeah, that’s what I need, even more state and configuration. No thanks.

                                                                                                                                                                                Note that I said ~/lib/file, not ~/lib/foo/config. At least without this XDG nonsense (.local? .config? .cache? .run? Fuck you, XDG!), even if my files have stupid dotnames, they are all in one single directory, $HOME…

                                                                                                                                                                                I’ll stick to this scheme, thanks.

                                                                                                                                                                                1. 1

                                                                                                                                                                                  You could set XDG_CONFIG_HOME=$HOME ;)

                                                                                                                                                                                  1. 1

                                                                                                                                                                                    Chill. I don’t like excess configuration either.

                                                                                                                                                                                    XDG is indeed overkill. However, I at least appreciate .cache a bit: it’s one directory I can safely remove to reclaim some bytes.

                                                                                                                                                                            1. 1

                                                                                                                                                                              This has to be a joke. Is this a joke? I can’t tell.

                                                                                                                                                                              1. 2

                                                                                                                                                                                Another solution might add a tooltip-style description of whom the comment is replying to when the mouse hovers over a link like “parent”.

                                                                                                                                                                                Solution popularized (invented?) by 4chan. Great artists steal, etc. Sounds good to me.

                                                                                                                                                                                1. 1

                                                                                                                                                                                  Didn’t know. Thanks for the tip.