1. 7

    FYI, some history: GnuTLS was created at a time when OpenSSL had a GPL-incompatible license. Eventually Red Hat (I think) did the hard work to fix OpenSSL’s licensing.

    1. 3

      https://www.openssl.org/source/license.html says that OpenSSL 3.0.0 and newer are published under the Apache License v2, which is GPLv3-compatible (see https://www.apache.org/licenses/GPL-compatibility.html). It’s still not GPLv2-compatible, so some projects like Linux that are “GPLv2 only” still can’t use OpenSSL. On the other hand, they also can’t use (recent versions of) GnuTLS, because that’s GPLv3 these days, which is also not GPLv2-compatible.

        1. 1

          Right, and that added GPLv3 compatibility. For GPLv2, both OpenSSL (no matter the version) and GnuTLS (recent versions, and you shouldn’t use old crypto code) still are not an option.

          I wouldn’t call OpenSSL licensing “fixed”.

    1. 11

      Random historical factoid: when I joined Twitter there were no usernames. Identity was through your phone number. First names were used unless there was ambiguity and then it would fall back to full names. After a month or two they switched to usernames.

      1. 2

        It strikes me that this is a variant of the general class of cache side-channel attacks made famous by Spectre and Meltdown. This isn’t a timing side-channel (though those are possible on the web too) but still a side-channel due to performance optimizations.

        Just like in CPUs, if caches were completely disabled the side-channels would go, but so would the performance we’ve come to know and love.

        1. 0

          Hey I happen to be wearing my Dark tshirt today. It’s really nice and soft.

          1. 2

            While I was at university (and dropping in and out and doing random early webdev) in the late 90s I got involved in the GIMP/Gtk+/GNOME community, mostly on IRC. I ended up starting to contribute to Gnome-VFS (precursor to GVfs).

            Eazel, the company that was building the Nautilus file manager contracted me to contribute to particular parts (http client, ftp client) and then hired me. They moved me from Western Australia to Silicon Valley and though they went out of business a few short months later I’m still here and the connections I made through that first Bay Area company I worked for have guided my career in many ways.

            1. 4

              TypeScript is such a damn pleasant language to use to write useful code.

              1. 3

                I wonder if this would work better with a real serial port.

                1. 4

                  Yes.

                  According to docs I found, the VT102 has a 128-character input buffer, and it sends an XOFF (flow control stop) when there are 32 characters in the buffer, meaning that the host can send 96 additional characters before the buffer overflows (or more, if the VT102 processes some of the buffer in the meantime, but I don’t know how fast it processes and won’t make any assumptions). At 9600 baud, 8N1 or 7E1, it takes 100ms for the host to transmit 96 characters. That should be ample time to respond to XOFF.
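
                  A back-of-the-envelope check of those numbers, using only the figures above (128-character buffer, XOFF at 32, 9600 baud, 10 bits per character on the wire for 8N1 or 7E1); a sketch, not anything from the original project:

                  fn main() {
                      let buffer_chars = 128u32;     // VT102 input buffer size
                      let xoff_at = 32u32;           // XOFF is sent once 32 chars are buffered
                      let headroom = buffer_chars - xoff_at; // 96 characters of slack

                      let baud = 9600u32;
                      let bits_per_char = 10u32;     // start + 8 data (or 7 data + parity) + stop
                      let chars_per_second = baud / bits_per_char; // 960

                      let ms_until_overflow = headroom as f64 * 1000.0 / chars_per_second as f64;
                      println!("{} ms to react to XOFF", ms_until_overflow); // prints 100 ms
                  }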

                1. 2

                  My laptop (running Linux) died the other day. My boss has been pressuring me to get a Mac for forever and since I needed a new machine right now (in the middle of a big project) and Apple could courier a Mac to me in two hours (I’d prefer to not go to a store what with COVID-19 and all)…I got a Mac.

                  So far so good, since it’s UNIX under the hood. My only complaint is that when I hook up an external display it gets real hot and real loud…way hotter and louder than it has any right to be for a mostly-idle system. Apparently this is a known issue with the MacBook Pro 16”…

                  1. 8

                    It’s depressing how macOS is worse at power management and external monitors than Linux these days.

                    1. 9

                      I know what you mean, but macOS still does mixed-DPI way better than Linux, which is important. Wayland is making significant improvements in this space. I hope that the ARM transition helps with MacBook Pro thermals (the real issue in this particular case).

                      1. 1

                        Are they going to drop Nvidia for their home grown GPUs?

                        1. 2

                          They haven’t been using NVIDIA for years now. I’m guessing that they will be dropping AMD graphics for their home-grown ones on most of the Mac line. We’ll have to wait and see whether this holds true for the Mac Pro.

                    2. 7

                      Try using the right-hand ports. Seriously.

                      1. 1

                        Not that I want to turn lobste.rs into a support forum, but for the record, I’ve got just the external display plugged in and nothing else (not even the power supply); it’s plugged in on the right-hand side. The system is 96-99% idle. With the external display plugged in, the CPU temperature bounces between 60 and 75 degrees C. From what I’ve been able to find, at idle it should be no more than about 45 degrees C. Doing anything even moderately intense (e.g. listening to music) gets the machine hot enough that the fans are blasting and the CPU is getting throttled.

                        Searching around online, this is apparently a universal problem: using an external display with the MBP 16” causes huge power draw and heat problems, regardless of the resolutions/refresh rates/whatever involved. It’s kinda to the point that I feel like I was lied to. I don’t feel like this Mac is truly capable of being used with an external display. It feels like false advertising.

                        I’m very strongly considering returning the machine. I might consider getting the 13” model (which doesn’t have the Radeon GPU that is apparently the source of the problem) if it turns out it doesn’t have the same thermal problems…

                        Anyway, rant over. Sorry.

                        1. 1

                          Oh, no worries. I just discovered the right hand side thing myself.

                      2. 6

                        MacBooks have bad thermal issues when plugging in a monitor and a charge cable both on the left-hand side. Consider putting the two cables on opposite sides of the machine.

                        https://apple.stackexchange.com/a/363933/349651

                        1. 2

                          Apparently there is a possible workaround if your monitor supports DisplayPort. Maybe worth a shot? Apparently that uses the iGPU, and runs much cooler – presumably a bad bug with the dedicated GPU and an external display.

                          1. 1

                            Apple seems to always let things get hotter than I would like.

                          1. 4

                            I wish that open offices constituted an OSHA violation. It feels like the office equivalent of asking construction workers to just do without scaffolding because it’s “cheaper” and “makes spontaneous collaboration on window installation easier.”

                            1. 4

                              Except that it’s a different scale. Construction workers have significant risk of death, especially if they’re working in violation of codes or without union representation.

                              I hate open offices, they hurt my productivity badly, but let’s keep things in perspective.

                              1. 3

                                Agreed that it’s a different scale and that this was a metaphor.

                                But would you disagree that tech workers need to be protected from their own harmful environments?

                                Open offices are disproportionately stressful and contribute to the spread of disease. In the age of COVID, would it be unreasonable to limit or ban them as a matter of public health? How much can we prioritize employer profits and pride over their employees’ mental and physical health?

                                1. 1

                                  I hate open offices, they hurt my productivity badly, but let’s keep things in perspective.

                                  I think it’s possible to keep things in perspective and also acknowledge that not just the most serious issues (resulting in a risk of death) should be of concern to OSHA. Office conditions that have the potential to result in long term health damage should, in my opinion, also be considered important.

                                2. 2

                                  Someone elsewhere made the point that open offices should constitute ADA violations, too.

                                1. 33

                                  There’s a huge cultural problem around dependencies and a total lack of discipline for what features get used in most Rust projects.

                                  sled compiles in 6 seconds flat on my laptop, despite being a fairly complex Rust embedded database. Most database compile times are measured in minutes, even if they are written in C, C++ or Go.

                                  I feel that slow compilation times for a library are totally disrespectful to any users, who probably just want to solve a relatively simple issue by bringing in your library as a dependency anyway. But in the Rust ecosystem it’s not uncommon at all for a simple dependency that could have probably been written in 50 lines of simple code to pull in 75 dependencies to get the job done that you need it for. Pretty much all of my friends working in the Rust blockchain space have it especially bad since they tend to have to pull in something like 700 dependencies and spend 3 minutes just for linking for one commonly used dependency.

                                  Things I avoid to make compilation fast:

                                  • proc macros - these are horrible for compile times
                                  • build.rs same as above, also causes friction with tooling
                                  • deriving traits that don’t get used anywhere (side note, forcing users to learn your non-std trait is just as bad as forcing them to learn your non-std macro. It introduces a huge amount of friction into the developer experience)
                                   • generics for things I only use one concrete version of internally. Conditional compilation is very easy to shoot yourself in the foot with, but sometimes it’s better than generics for testing-only functionality (see the sketch after this list).
                                  • dependencies for things I could write in a few dozen lines myself - the time saved for a project that I sometimes compile hundreds of times per day and have been building for over 4 years is a huge win. Everybody has bugs, and my database testing tends to make a lot of them pop out, but I can fix mine almost instantly, whereas it takes a lot of coordination to get other people to fix their stuff.
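
                                   To illustrate the conditional-compilation point above, here’s a minimal sketch (the names are invented for illustration) of swapping in a deterministic clock for tests without adding a generic parameter:

                                   #[cfg(not(test))]
                                   fn now_ms() -> u128 {
                                       use std::time::{SystemTime, UNIX_EPOCH};
                                       SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis()
                                   }

                                   #[cfg(test)]
                                   fn now_ms() -> u128 {
                                       // fixed value so tests are deterministic
                                       42
                                   }

                                   fn main() {
                                       println!("now = {} ms", now_ms());
                                   }

                                   The non-test build never even sees the test variant, so there is no extra generic to instantiate and nothing new for downstream users to learn.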

                                  Also, CI on every PR tends to finish in around 6 minutes despite torturing thousands of database instances with fault injection and a variety of other tests that most people only run once before a big release.

                                  Developer-facing latency is by far one of the most effective metrics to optimize for. It keeps the project feeling fun to hack on. I don’t feel dread before trying out my changes due to the impending waiting time. Keeping a project nice to hack on is what keeps engineers hacking on it, which means it’s also the most important metric for any other metrics like reliability and performance for any project that you hope to keep using over years. But most publicly published Rust seems to be written with an expiration date of a few weeks, and it shows.

                                  1. 11

                                    My take-away from the article is that open source allows different people to play different roles: the original dev got it working to their own satisfaction. Another user polished off some cruft. Everybody wins.

                                    I feel that slow compilation times for a library are totally disrespectful to any users …

                                     If someone writes software to solve a problem and shares it as open source, I don’t consider it disrespectful regardless of the code quality. Only if someone else is compelled to use it or is paying for it would the developer have any obligation, IMO.

                                    1. 10

                                       Same experience here. When I started writing durduff, I picked clap for parsing CLI arguments. After a while, I used cargo-deps to generate a graph of dependencies, and it turned out that the graph was dominated by the dependencies pulled in by clap. So I switched to getopts. It cut the build times by something like 90%.
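
                                       For reference, the getopts side of such a switch looks roughly like this (a minimal sketch assuming getopts is already listed as a dependency, not durduff’s actual code):

                                       use getopts::Options;
                                       use std::env;

                                       fn main() {
                                           let args: Vec<String> = env::args().collect();

                                           let mut opts = Options::new();
                                           opts.optflag("h", "help", "print this help");
                                           opts.optopt("o", "output", "write output to NAME", "NAME");

                                           let matches = match opts.parse(&args[1..]) {
                                               Ok(m) => m,
                                               Err(e) => {
                                                   eprintln!("{}", e);
                                                   return;
                                               }
                                           };

                                           if matches.opt_present("h") {
                                               print!("{}", opts.usage("Usage: mytool [options]"));
                                               return;
                                           }

                                           println!("output = {:?}", matches.opt_str("o"));
                                       }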

                                       Another example: ~~atty~~ term_size depends on libc, but when you look at its source code, it has a lot of code duplicated with libc. The constants which need to be passed to the ioctl, with conditional compilation, because they have different values on different platforms. It seems to be a common theme: wrapping libraries, while still duplicating work. I replaced ~~atty~~ term_size with a single call to libc (in the end I stopped doing even that). One less dependency to care about.
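
                                       (Roughly what that direct libc call looks like for the terminal-size case on Unix; TIOCGWINSZ is the kind of ioctl constant referred to above. A sketch assuming the libc crate, not the exact code used:)

                                       use libc::{ioctl, winsize, STDOUT_FILENO, TIOCGWINSZ};

                                       // Ask the tty driver for the window size; None if stdout is not a terminal.
                                       fn term_size() -> Option<(u16, u16)> {
                                           let mut ws = winsize { ws_row: 0, ws_col: 0, ws_xpixel: 0, ws_ypixel: 0 };
                                           if unsafe { ioctl(STDOUT_FILENO, TIOCGWINSZ, &mut ws) } == 0 {
                                               Some((ws.ws_col, ws.ws_row))
                                           } else {
                                               None
                                           }
                                       }

                                       fn main() {
                                           println!("terminal size: {:?}", term_size());
                                       }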

                                      That said, I still think that waiting a couple seconds for a project as small as durduff to compile is too much. It also slows down syntastic in vim: it’s really irritating to wait several seconds for text to appear every time I open a rust file in vim. It’s even worse with bigger projects like rustlib.

                                      As for avoiding generics: I use them a lot for testing things in isolation. Sort of like what people use interfaces for in Go. With the difference that I don’t pay the runtime cost for it. I’m not giving this one up.

                                      BTW Thank you for flamegraph-rs! Last weekend it helped me find a performance bottleneck in durduff and speed the whole thing up three-fold.

                                      EDIT: I got crates mixed up. It was term_size, not atty that duplicated code from libc.

                                      1. 11

                                        it turned out that the graph was dominated by the dependencies pulled in by clap.

                                        Did you try disabling some or all of Clap’s default features?

                                        With respect to the OP, it’s not clear whether they tried this or whether they tried disabling any of regex’s features. In the latter case, those features are specifically intended to reduce compilation times and binary size.

                                        Another example: atty depends on libc, but when you look at its source code, it has a lot of code duplicated with libc. The constants which need to be passed to the ioctl, with conditional compilation, because they have different values on different platforms. It seems to be a common theme: wrapping libraries, while still duplicating work

                                        Huh? “A lot”? Looking at the source code, it defines one single type: Stream. It then provides a platform independent API using that type to check whether there’s a tty or not.

                                        I replaced atty with a single call to libc. One less dependency to care about.

                                        I normally applaud removing dependencies, but it’s likely that atty is not a good one to remove. Unless you explicitly don’t care about Windows users. Because isatty in libc doesn’t work on Windows. The vast majority of the atty crate is specifically about handling Windows correctly, which is non-trivial. That’s exactly the kind of logic that should be wrapped up inside a dependency.

                                        Now, if you don’t care about Windows, then sure, you might have made a good trade off. It doesn’t really look like one to me, but I suppose it’s defensible.

                                        That said, I still think that waiting a couple seconds for a project as small as durduff to compile is too much. It also slows down syntastic in vim: it’s really irritating to wait several seconds for text to appear every time I open a rust file in vim.

                                        It takes about 0.5 seconds for cargo check to run on my i5-7600 after making a change in your project. Do you have syntastic configured to use cargo check?

                                        1. 4

                                          Did you try disabling some or all of Clap’s default features?

                                          I disabled some of them. It wasn’t enough. And with the more fireworky features disabled, I no longer saw the benefit of clap over getopts, when getopts has less dependencies.

                                          Huh? “A lot”? Looking at the source code, it defines one single type: Stream. It then provides a platform independent API using that type to check whether there’s a tty or not.

                                          Ok, I got crates mixed up. It was term_size (related) that did that, when it could just rely on what’s already in libc (for unix-specific code). Sorry for the confusion.

                                          I normally applaud removing dependencies, but it’s likely that atty is not a good one to remove. Unless you explicitly don’t care about Windows users. Because isatty in libc doesn’t work on Windows.

                                          Yes, I don’t care about Windows, because reading about how to properly handle output to the windows terminal and output that is piped somewhere else at the same time left me with the impression that it’s just too much of a pain.

                                          It takes about 0.5 seconds for cargo check to run on my i5-7600 after making a change in your project. Do you have syntastic configured to use cargo check?

                                          I’ll check when I get back home.

                                          1. 21

                                            I no longer saw the benefit of clap over getopts, when getopts has less dependencies.

                                            Well, with getopts you start out of the gate with a bug: it can only accept flags and arguments that are UTF-8 encoded. clap has OS string APIs, which permit all possible arguments that the underlying operating system supports.
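
                                             The std-level version of that difference is easy to demonstrate (a sketch; clap’s own OS-string handling is more involved than this): std::env::args() panics on any argument that isn’t valid UTF-8, while args_os() hands you OsString values that preserve whatever the OS passed in.

                                             use std::env;

                                             fn main() {
                                                 // env::args() would panic here if any argument is not valid UTF-8;
                                                 // env::args_os() never does.
                                                 for arg in env::args_os() {
                                                     match arg.into_string() {
                                                         Ok(s) => println!("utf-8 argument: {}", s),
                                                         Err(os) => println!("non-utf-8 argument: {:?}", os),
                                                     }
                                                 }
                                             }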

                                            You might not care about this. But I’ve had command line tools with similar bugs, and once they got popular enough, end users invariably ran into them.

                                            Now, I don’t know whether this bug alone justifies that extra weight of Clap. Although I do know that Clap has to go out of its way (with additional code) to handle this correctly, because dealing with OS strings is hard to do in a zero cost way.

                                            Yes, I don’t care about Windows, because reading about how to properly handle output to the windows terminal and output that is piped somewhere else at the same time left me with the impression that it’s just too much of a pain.

                                            I think a lot of users probably expect programs written in Rust to work well on Windows. This is largely because of the work done in std to provide good platform independent APIs, and also because of the work done in the ecosystem (including myself) to build crates that work well on Windows.

                                            My argument here isn’t necessarily “you should support Windows.” My argument here is, “it’s important to scrutinize all costs when dropping dependencies.” Particularly in a conversation that started with commentary such as “lack of discipline.” Discipline cuts both ways. It takes discipline to scrutinize all benefits and costs for any given technical decision.

                                            1. 15

                                              Just wanted to say thanks for putting in the effort to support Windows. ripgrep is one of my favorite tools, and I use it on Windows as well as Linux.

                                              1. 1

                                                I checked and I have a line like this in my .vimrc:

                                                let g:rust_cargo_check_tests = 1
                                                

                                                That’s because I was annoyed that I didn’t see any issues in test code only to be later greeted with a wall of errors when compiling. Now I made a small change and cargo check --tests took 5 seconds on my AMD Ryzen 7 2700X Eight-Core Processor.

                                                Well, with getopts you start out of the gate with a bug: it can only accept flags and arguments that are UTF-8 encoded. clap has OS string APIs, which permit all possible arguments that the underlying operating system supports.

                                                I’ll reconsider that choice.

                                        2. 4

                                          But in the Rust ecosystem it’s not uncommon at all for a simple dependency that could have probably been written in 50 lines of simple code to pull in 75 dependencies to get the job done that you need it for.

                                          I don’t think I have experienced this. Do you have an example of this on crates.io?

                                          1. 1

                                            Not quite to that degree, but I’ve seen it happen. Though I’ve also seen the ecosystem get actively better about this – or maybe I just now have preferred crates that don’t do this very much.

                                            rand does this to an extent by making ten million little sub-crates for different algorithms and traditionally including them all by default, and rand is everywhere, so I wrote my own version. num also is structured that way, though seems to leave less things on by default, and deals with a harder problem domain than rand.

                                            The main example of gratuitous transitive dependencies I recall in recent memory was a logging library – I thought it was pretty_env_logger but can’t seem to find it right now. It used winconsole for colored console output, which pulls in the 10k lines of cgmath, which pulls in rand and num both… so that it can have a single function that takes a single Vector2D.

                                            …sorry, this is something I find bizarrely fun. I should probably make more time for it again someday.

                                          2. 1

                                            Minimizing your dependencies has further advantages like making the overall system easier to understand or avoiding library update problems.

                                            Part of this is simply insufficient tooling.

                                             Rebuilding all your dependencies should be rare. In practice, it happens way too often, e.g. frequently on every CI run without better build systems. That is madness. You can avoid it by e.g. using Nix or Bazel.

                                             In terms of timing, I’d also love to understand why linking is quite slow - for me it’s often the slowest part.

                                            But all in all, bloated compile times in dependencies would not be a major decision factor for me in choosing a library. Bloated link times or bloated compile times of my own crates are, since they affect my iteration speed.

                                            That said, I think if you are optimizing the compile time of your crate, you are respecting your own time and that of your contributors. Time well spent!

                                            1. 1

                                              deriving traits that don’t get used anywhere

                                              I take it you mean custom traits, and not things like Default, Eq/Ord, etc?

                                              1. 9

                                                check this out:

                                                #[derive()]
                                                pub struct S { inner: String }
                                                

                                                (do this in your own text editor)

                                                1. 2dd (yank lines), 400p (paste 400 times)
                                                2. %s/struct\ S/\=printf("struct S%d", line('.')) name all of those S’s to S + line number
                                                3. time cargo build - 0.10s on my laptop
                                                4. %s/derive()/derive(Debug)
                                                5. time cargo build - 0.58s on my laptop
                                                6. %s/derive(Debug)/derive(Debug, PartialOrd, PartialEq)
                                                7. time cargo build - 2.46s

                                                So, yeah, that deriving actually slows things down a lot, especially in a larger codebase.

                                                1. 3

                                                  This is particularly annoying for Debug because either you eat the compile time or you have to go derive Debug on various types every time you want to actually debug something. Also if you don’t derive Debug on public types for a library then users of the library can’t do it themselves.

                                                  In languages like Julia and Zig that allow reflection at specialization time this tradeoff doesn’t exist. Eg in zig:

                                                  pub fn debug(thing: var) !void {
                                                      const T = @TypeOf(thing);
                                                      if (std.meta.trait.hasFn("debug")(T)) {
                                                          // use custom impl if it exists
                                                          thing.debug();
                                                      } else {
                                                          // otherwise reflect on type
                                                          switch (@typeInfo(T)) {
                                                              ...
                                                          }
                                                      }
                                                  }
                                                  

                                                  This function will work on any type but will only get compiled for types to which it is actually applied in live code so there’s no compile time overhead for having it available. But the reflection is compiled away at specialization time so there is no runtime overhead vs something like derive.

                                                  1. 2

                                                    Some numbers from a fairly large project:

                                                    $ cargo vendor
                                                    $ rg --files | grep -E '*.rs' | xargs wc -l | sort -n | tail -n 1
                                                     1234130 total
                                                    $ rg --files | grep -E '*.rs' | xargs grep -F '#[derive' | grep -o -E '\(|,' | wc -l
                                                    22612
                                                    

                                                    If we extrapolate from your example a minimum of 2ms extra compile time per derive, this is adding >45s to the compile time for a debug build. But:

                                                    $ cargo clean && time cargo build
                                                    Finished dev [unoptimized + debuginfo] target(s) in 20m 58s
                                                    
                                                    real	20m58.636s
                                                    user	107m34.211s
                                                    sys	10m57.734s
                                                    
                                                    $ cargo clean && time cargo build --release
                                                    Finished release [optimized + debuginfo] target(s) in 61m 25s
                                                    real	61m25.930s
                                                    user	406m27.001s
                                                    sys	11m30.052s
                                                    

                                                    So number of dependencies and amount of specialization are probably the low hanging fruit in this case.

                                                    1. 1

                                                      Doh, bash fail.

                                                      $ rg --files -0 | grep -zE '\.rs$' | wc -l --files0-from=- | tail -n 1
                                                      2123768 total
                                                      $ rg --files -0 | grep -zE '\.rs$' | xargs -0 cat | grep '\[derive' | grep -oE '\(|,' | wc -l
                                                      22597
                                                      

                                                      Same conclusion though.

                                                    2. 2

                                                      Experiment independently reproduced, very nice results. I never realized this was significantly expensive!

                                                      1. 1

                                                        Thank you for this reply. It’s absolutely beautiful. You made an assertion, and this backs it up in a concise, understandable, and trivially reproducible way.

                                                    3. 0

                                                       Without build.rs how are crate authors going to mine Bitcoin on your computer?

                                                      1. 2

                                                        With a make install target, just like the olden days.

                                                    1. 4

                                                       This is normal. It’s simpler and cheaper (though not as flexible) than using a TPM.

                                                      1. 4

                                                         Imagine an internet without Wikipedia and WordPress, Yahoo, Flickr and Slack. For all its many flaws, PHP has made a lot of wonderful things possible. Worse is better at its best.

                                                        1. 4

                                                          Given the plethora of overlapping programming languages/environments, I don’t think it’s reasonable to claim that these things wouldn’t exist without PHP.

                                                          1. 7

                                                            I think that’s not entirely fair. Back in 2001, I’m not sure a project like Wikipedia would have even been started without something like PHP being available, given the difficulty and inefficiency of deploying a web application using any other language/platform available at that time…

                                                            1. 6

                                                              Yep - shared PHP hosting was an economic wonder not replicated since.

                                                              I was looking for hosting around 2006, and I recall the cheapest price to deploy a PHP site (shared hosting) was roughly 200x cheaper (not a typo!) than the cheapest VPS I could find.

                                                              1. 1

                                                                Yep - shared PHP hosting was an economic wonder not replicated since.

                                                                 Many VPS providers have free tiers these days, though, which essentially gives you a space to deploy your thing for $0 – at least if you can pick your tools to optimize for that. But yeah, PHP was hard to beat for a long time, economically.

                                                                1. 2

                                                                  Many? Which other than amazon and GCP? Honest question.

                                                                   There were thousands of web hosting companies offering free hosting with PHP support. Even locally. It was what every 20-year-old internet geek would do: start a web hosting company. Granted, most of them were a security nightmare and had hours-long, or even days-long, downtimes. But it was still a wonderful world.

                                                                   I recently talked to an old friend, and he mentioned that a website that I had put together back in 2002 was still up. I went and checked, and there it was, almost two decades later, that website, along with two others that I hosted at the same hosting provider. Realistically speaking, I doubt any VPS instance I spin up nowadays will be up 10 years from now.

                                                                  1. 1

                                                                    Many? Which other than amazon and GCP? Honest question.

                                                                    … Azure? :) Good points though.

                                                              2. 4

                                                                I think that’s not entirely fair. Back in 2001, I’m not sure a project like Wikipedia would have even been started without something like PHP being available, given the difficulty and inefficiency of deploying a web application using any other language/platform available at that time…

                                                                To put this in perspective, Slashdot was written in Perl and predates Wikipedia. A lot of the early web sites were. Server-side Java was pretty mature by 2001. GNUstepWeb, a complete reimplementation of WebObjects 4.5 (depending on who you ask, WebObjects was either the first or second WebApp development framework and was first released in 1996) was pretty mature by 2001.

                                                                There were quite a lot of alternatives by 2001. CGI was standardised in 1993 and most web servers supported FastCGI by the late ’90s. Most scripting languages had CGI / FastCGI plugins and many compiled languages had libraries for writing [Fast]CGI programs.

                                                                PHP was available on a lot of shared-hosting platforms, but so were a few alternatives. If PHP hadn’t been around, one of the other alternatives would have taken its place as the dominant server-side scripting language.

                                                                1. 5

                                                                  You’re doing an apples to oranges comparison. PHP’s design, both easy and tightly integrated with web, allowed people with little to no programming experience to quickly get websites together. Another thing technical folks rarely bring up is that this appeals to people that want to focus on their project/product, not the language or stack.

                                                                  The alternatives you mention didn’t do that. On top of it, PHP got into all the cheap, web hosts. That combo made it dominate for this segment of users.

                                                                  1. 4

                                                                    PHP’s design, both easy and tightly integrated with web, allowed people with little to no programming experience to quickly get websites together

                                                                       PHP was not the only language that allowed interleaved text and markup. Even the oldest systems had templating infrastructure that let you embed scripts in HTML, though typically they’d then interact with more complex components. Most users, however, did not need to write those components; they just treated them as things that were built into the language.

                                                                    As to ‘little to no programming experience’, that’s quite a subjective thing. You needed to understand conditionals and loops to write non-trivial PHP. You needed to understand logical operators, string concatenation (what a string is!) and for non-trivial things you needed to understand structured data. We’re talking about a level of programming competence of someone who grew up in the ’90s with 8-bit BASIC, not the level of someone who is just doing HTML markup. Again, a lot of other systems required a similar level of competence. Quite a few had much simpler learning curves for people with no programming experience.

                                                                    On top of it, PHP got into all the cheap, web hosts

                                                                    I mentioned that, but you’re right. That is why PHP succeeded. If PHP hadn’t existed, do you think these hosts would not have offered something else? Most of them offered PHP + one or two other things, but PHP was the common platform. Most of the providers I remember from back then are out of business now. NearlyFreeSpeech is still around and, according to the Wayback machine, they offered C, C++, LISP, Perl, Python, PHP, Ruby, and TCL + MySQL hosting in 2006, the first time they offered server-side scripting support at all.

                                                                    I was on the executive committee and admin team for a student computer society around 2000 and, back then, the fact that we offered free web hosting with server-side scripting was a massive incentive for members (we charged £3/year initially and put it up to £5/year and it was still vastly cheaper just for the hosting than any alternatives that offered PHP or other languages). Email hosting was also a big thing: most people only had a university account and maybe one tied to their ISP (which went away at the end of the year when they moved house). Hotmail was still slowly taking off as an alternative.

                                                                    It wasn’t until around 2005 that alternatives for web hosting became cheap enough that students considered using them and even then we were a lot cheaper. PHP was convenient but mostly because there were big marketing pushes behind it and it was what everyone else was using, so it was easy to get help. People doing computer science degrees were often asking for help with PHP (we had a talker and a support mailing list for helping people with PHP), so it definitely wasn’t as easy as you are claiming (especially a few years later once people learned about SQL injection attacks and discovered that their ‘simple’ PHP gave random Internet people full access to their databases). If PHP hadn’t been around, we’d have done exactly the same thing for some other scripting language.

                                                                    1. 3

                                                                      Couldn’t agree more. My first big “project” was a car-pool sharing site in the late 90s. I was inspired by reading Greenspun’s “Database backed web sites” (great book, btw) and managed to get basic functionality going without much trouble using PHP and MySQL.

                                                                    2. 2

                                                                      To run servlets, you would need to spend an order of magnitude or two more money on servers. And most likely manage the server yourself. Shared PHP hosting allowed you to drop a php file on a designated folder using an ftp client and you were good to go. We have a zillion choices nowadays. But things were not as ubiquitous back then. How many people do you know that had rented a server for personal projects back in the 90s?

                                                                         Many of these shared hosting providers offered CGI up until 2000~2002, but that was a resource drain when compared to PHP. You wouldn’t realistically spawn 1000 processes on the server if 1000 requests came in within one second. PHP could handle this without blinking. And it included a super intuitive template system, a MySQL client and extensive functionality for string manipulation. This is what people wanted and they could get it to work in seconds.

                                                                         I think that the fact that it was a language helped too, because it was, and still is, a language that tries to be approachable, as opposed to, say, Perl or Java, which make many more assumptions about the programmer. A complete beginner could open php.net, read the first chapters and be ready to build something. Probably filled with bugs and security holes, but from the sheer number of ghetto PHP programmers, the web of dynamic content flourished. That said, I do believe that the platform is what primarily made PHP huge. Upload a file and visit the URL. There were even people writing websites in HTML, and then just sprinkling a couple of PHP snippets inside their HTML files.

                                                                         If I may go on a tangent, I think the crowd of current PHP programmers claiming that the language has improved is missing the point. You cannot make the language theoretically sound without turning it into something completely different. I even think the whole Java-like OOP that they do these days in PHP is a mistake. It was a quick and dirty language. The array() constructor is an aberration from a theoretical point of view, but for a complete beginner it provided a data structure that could be used as an array, a list, a queue, a map, and a couple of others probably. This was empowering. Of course, when you reach the point that you need to push these data structures to the limits of their design, PHP wouldn’t deliver.

                                                                      It certainly allowed many people to experiment and build something. Myself included. How things would be had it not seen the light of day, it’s just something left for us to speculate.

                                                                      1. 1

                                                                        Many of these shared hosting providers offered CGI in up till 2000~2002, but that was a resource drain when compared to PHP

                                                                        That’s not my experience (from running a shared-hosting server around 2001-2004). Back then, PHP either used CGI, or the Apache plugin. The Apache plugin did not offer user separation, so was not appropriate for shared hosting. You paid the PHP interpreter start-up cost for every PHP page fetch.

                                                                        PHP+Apache didn’t get reliable FastCGI (which let you spawn one copy of the PHP interpreter per user, so you got useful isolation between users) until later. I think we deployed it around 2003 / 2004 and it was quite flaky at the start. Today, it works very reliably and once the Zend accelerator stuff was merged it gave a big performance win (you weren’t parsing the entire PHP script every page fetch, it was parsed once, compiled to bytecode, and interpreted on each invocation).

                                                                           In terms of system load, we saw much higher overheads from PHP than stateful web-app development frameworks because every PHP request needed multiple round trips to the database (we were using PostgreSQL instead of MySQL, because we liked our data and wanted to keep it), whereas alternatives (my favourite from that era was Seaside) kept it in memory, where it could be swapped out when the site was idle but then much cheaper to access when the site was under load. Java was a bit of a beast back then, but Perl was lightweight and offered FastCGI at about the same time (slightly earlier? Not sure, I didn’t use Perl).

                                                                        I’d agree that PHP coincided with a growth in shared hosting and server-side scripting, but I disagree with the jump from there to that it caused this growth. There was a huge, thriving, ecosystem of people trying to solve the same problems. Once one player became slightly dominant in that market, it snowballed because people focused teaching resources on that language, optimised the interpreters, and gave discounts to people using it to try to attract a large customer base from competitors. That player happened to be PHP but it could easily have been one of hundreds of other frameworks that have since died out. And we’d probably now have long blog posts about how it sucks and doesn’t deserve to be successful.

                                                                        1. 1

                                                                             I don’t know how the Apache plugin worked, but it was what everyone was using to get away from CGI. CGI was quickly deprecated or discouraged by many shared hosting providers, as far as I remember.

                                                                             For example, many Perl programmers switched because the platform was just more convenient and trivial to deploy in a properly scalable way out of the box.

                                                                          1. 1

                                                                            The Apache plugin ran everything as the same user. That meant that you had to make all of your PHP files readable by the www user and anyone who could run PHP via Apache could then read them. Lots of people put things like database passwords in their PHP files. Things like MySQL let you authenticate per user (didn’t work with mod_php because everyone was the same user, so would get the same connection credentials) or with a username and password (which required storing the username and password somewhere where the user executing the PHP scripts could see them). Another PHP script running from a different user would still be executed as the www user and so could read your PHP files and pull passwords out of them.

                                                                            We hit a bunch of security issues like this and moved back to CGI for PHP. A bunch of less scrupulous shared hosting providers kept using mod_php and blaming compromises from their inherently insecure configuration on their customers. As I recall, the docs for mod_php back then explicitly told you not to use it for shared hosting where the users did not all trust each other.

                                                                            I think Apache 2.0 came with a security model, but by the time 2.x was stable everyone I knew had moved to lighttpd and was using FastCGI for PHP and for everything else. With FastCGI, you’d need a process per (scripting language, user) pair, but that would be swapped out for idle users and so didn’t impose much penalty.

                                                                            1. 1

                                                                                 I agree that it was a mess security-wise and all the holes that occurred in shared hosting were kind of expected. If you were anything other than a tiny company or a personal blog, you would run your own server for this reason.

                                                                              But the last sentence in the first paragraph is not true. You would have a Unix account and place your php files inside your home folder. But other users could naturally not access them.

                                                                                 But I digress. Your timeline is a bit off; Apache 2 was huge before lighttpd gained prevalence. Either way, I think we can all agree that PHP aced ease of deployment for the hobbyist to an extent that no other stack even came close to at the time.

                                                                              1. 2

                                                                                But the last sentence in the first paragraph is not true. You would have a Unix account and place your php files inside your home folder. But other users could naturally not access them.

                                                                                They had to be readable by the user that Apache ran as, because all PHP scripts ran as that user. Apache would run every PHP script as that user and so any script, written by any user, could read the text of any other script.

                                                                                Your timeline is a bit off, Apache 2 was huge before lighttpd gain prevalence

                                                                                You might be right. We didn’t move to Apache 2 until 2.2.something (must have been some time after 2005). The 2.0.x series was a mess and was the thing that pushed a lot of people I knew to explore alternatives. We used Lighttpd 1.4.0, but the Lighttpd web site doesn’t seem to have version history that far back. It only goes back to 1.4.15, which was 2007. Lighttpd was an odd flash in the pan, it displaced Apache everywhere I knew about very quickly and was then replaced by Nginx within a few years. It looks as if it’s still actively developed, but I’ve no idea who’s using it.

                                                                                1. 1

                                                                                     Since we are in the middle of a trip down memory lane: there are also these two web servers that played their part, although eventually nginx would win the race, so to speak.

                                                                                  They introduced new ways of thinking of a webserver into the world, and it is cool to see these projects are still alive today.

                                                                    3. 2

                                                                      Wikipedia initially ran on UseModWiki, a Perl CGI application (like most of the wiki engines in use at the time).

                                                                      1. 2

                                                                        Wikipedia was written in Perl originally and was rewritten in PHP by a student over the summer mostly because the Perl version used a flat file database. As near as I can determine, there was no particular reason to prefer PHP over Perl at the time, other than that’s what the author decided to use.

                                                                           No one really liked the PHP version from the outset btw, but they decided to go with it anyway because the flat-file performance problems were pressing. This eventually turned into MediaWiki.

                                                                        How would things have gone down if they had decided to stick with Perl instead? Who knows…

                                                                        1. 1

                                                                          I was coding web sites in Perl in about 1997. It wasn’t perhaps fun in many ways, and in fact I fantasized about porting the code to PHP. I won’t completely dismiss the idea that PHP was the enabling factor for many of those early web sites – darwinism clearly points to some underlying truths in tech. But I also believe that there were plenty of “something like PHP” besides PHP itself.

                                                                      2. 6

                                                                        Imagine an internet without (…) Slack.

                                                                           It would be a way better place than now, as Slack mostly destroyed the landscape of instant messaging one more time, but this time it threw it into a web browser.

                                                                           Just like someone thought the hypertext document viewer was a great place to use as a canvas for “applications”, if you refresh the document contents fast enough that no one notices.

                                                                      1. 4

                                                                        At least we have a single connector now…

                                                                        1. 18

                                                                          Arguably having one connector that is not actually interoperable is worse than having multiple connectors whose lack of interoperability is apparent at a glance.

                                                                          1. 5

                                                                             Yep, not knowing if the cable you want will work is like having a USB-C Heisenberg edition. It’s arguably worse because of the uncertainty.

                                                                            1. 4

                                                                               It works for me with everything. A variety of phones, laptops, Nintendo Switches, chargers, batteries and cables. Maybe sometimes I’m not getting optimal charging speed, but it’s always better than the situation was before.

                                                                              I don’t have a MacBook though. I hear they have more issues than anything else.

                                                                              1. 3

                                                                                Good for you. But the article shows your experience isn’t shared by everyone. Not knowing if a USB-C cable and charger will charge your device unless you try it is mildly infuriating.

                                                                          1. 10

                                                                            What’s more, Lenovo will also upstream device drivers directly to the Linux kernel, to help maintain stability and compatibility throughout the life of the workstation.

                                                                            Glad to hear that they’re focused on upstreaming their code. IMO if a laptop relies on non-mainline code then it doesn’t really have good Linux support.

                                                                            1. 8

                                                                               Dell was really eating their lunch. Stuff like firmware upgrades works very, very well on the Dell XPS 15. One of my colleagues regularly updates his BIOS from GNU/Linux with a single click.

                                                                              1. 4

                                                                                My colleagues and I do this with our ThinkPads as well. Lenovo provides a lot of firmware to the Linux Vendor Firmware Service: https://fwupd.org/lvfs/vendors/

                                                                                1. 3

                                                                                  This works fine on my thinkpad too.

                                                                              1. 5

                                                                                This is one of the reasons that Chromium uses a GC for much of its C++. If you can schedule cleanup work, you can significantly improve interactive performance.
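
                                                                                (For context: the GC in question is Blink’s Oilpan.) The benefit is roughly that tearing down an object graph becomes deferrable work instead of something that happens synchronously on a latency-sensitive path. Here’s a toy sketch of that scheduling idea in plain C++; it only illustrates the pattern and is not Oilpan’s actual API:

                                                                                ```cpp
                                                                                #include <cstddef>
                                                                                #include <functional>
                                                                                #include <memory>
                                                                                #include <vector>

                                                                                // Toy illustration of "schedule the cleanup work": releasing an object does
                                                                                // not destroy it immediately; destruction is queued and paid for later,
                                                                                // during idle time, instead of inside a latency-sensitive path.
                                                                                class DeferredReclaimer {
                                                                                 public:
                                                                                  // Take ownership but do NOT destroy yet; just queue the destructor call.
                                                                                  template <typename T>
                                                                                  void Retire(std::unique_ptr<T> obj) {
                                                                                    pending_.emplace_back([p = obj.release()] { delete p; });
                                                                                  }

                                                                                  // Called from an idle-time task (e.g. between frames). Destroys at most
                                                                                  // `budget` retired objects so cleanup never blocks interaction for long.
                                                                                  void RunIdleWork(std::size_t budget) {
                                                                                    while (budget > 0 && !pending_.empty()) {
                                                                                      pending_.back()();  // run one deferred destructor
                                                                                      pending_.pop_back();
                                                                                      --budget;
                                                                                    }
                                                                                  }

                                                                                 private:
                                                                                  std::vector<std::function<void()>> pending_;
                                                                                };
                                                                                ```

                                                                                The idea is that the queue gets drained from idle-time tasks, so the destructor work happens when the user is least likely to notice.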

                                                                                1. 0

                                                                                  What was the author smoking when he wrote this?

                                                                                  1. 2

                                                                                    I didn’t spend a lot of time looking but most of the authors are at Google, nVidia, or one of the US government national labs. All three groups are in a position to rewrite or refactor as much of their code as they wish.

                                                                                    • National Labs (6), e.g. Argonne, Livermore, Oak Ridge, Sandia
                                                                                      It wouldn’t surprise me if their simulation software is refactored or rewritten for each supercomputer.
                                                                                    • Google (7)
                                                                                      “Most software at Google gets rewritten every few years.” Software Engineering at Google
                                                                                    • NVidia (2)
                                                                                      GPU libraries and drivers are probably rewritten or refactored for each new generation of hardware.
                                                                                    • Uber (1)
                                                                                    • Unknown (1)
                                                                                    1. 2

                                                                                      Probably splendid isolation from anyone else at Google.

                                                                                      1. 1

                                                                                        Which aspects are umm pipe dreams to you?

                                                                                      1. 39

                                                                                        Bernstein’s response is something. Here’s a story.

                                                                                        The fastest assembly implementations (amd64-xmm5 and amd64-xmm6) of Salsa20 available from its homepage still have a vulnerability such that they will loop and leak plaintext after 256 GiB of keystream.

                                                                                        This was reported to the Go project last year because our assembly was derived from it. We fixed it in Go, and published a patch for the upstream.

                                                                                        He declared it WONTFIX because there is an empty file called “warning-256gb” in a benchmarks tarball hosted elsewhere. He tweeted we should have seen it. The file was added 4 years after the Go port was made.
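
                                                                                         For anyone wondering where the 256 GiB figure comes from: as I understand the bug, the assembly doesn’t carry into the upper half of the 64-bit block counter, so once the low 32 bits wrap the keystream repeats. A back-of-the-envelope sketch (illustrative arithmetic only, not the actual code):

                                                                                         ```cpp
                                                                                         #include <cstdint>
                                                                                         #include <cstdio>

                                                                                         int main() {
                                                                                           // Salsa20 emits 64-byte blocks indexed by a block counter. If the counter
                                                                                           // effectively wraps after 2^32 blocks, the keystream repeats from block 0.
                                                                                           const std::uint64_t kBlockBytes = 64;
                                                                                           const std::uint64_t kBlocksBeforeWrap = 1ULL << 32;
                                                                                           const std::uint64_t kBytes = kBlockBytes * kBlocksBeforeWrap;
                                                                                           std::printf("%llu bytes (= %llu GiB) of keystream before the wrap\n",
                                                                                                       static_cast<unsigned long long>(kBytes),
                                                                                                       static_cast<unsigned long long>(kBytes >> 30));  // 256 GiB
                                                                                           // Reused keystream is fatal for a stream cipher: c1 ^ c2 == p1 ^ p2,
                                                                                           // which is why the wrap leaks plaintext.
                                                                                           return 0;
                                                                                         }
                                                                                         ```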

                                                                                        1. 15

                                                                                          Filippo! Thanks for your two recent posts on OpenSSH keys from U2F tokens. It’s been nice to see you up to yet more interesting crypto lately, in addition to all the public go crypto work.

                                                                                           You probably know as much (it was all discussed here a few months ago), but qmail itself is a similar long-arc story and a lost opportunity. Even today it has one of the better security designs in a mail server, and back then it inspired a series of really great patterns and tools, such as those that ship with runit. But DJB was never willing to take on a traditional open source maintainer role, nor to let anyone else do that with the upstream source. So it was never allowed to ship as distro-specific binary packages, it never got updated to do SMTP auth, it required outside patches to work with Linux because of a war on errno.h, etc. (Even so, Artifex.org used it for roughly fifteen years before finally moving to OpenSMTPD… and I never had to scramble to patch a CVE for it, unlike the latter.)

                                                                                           So I feel conflicted about it all. On the one hand, DJB’s done more for open cryptography than just about anyone, he’s done fairly reliable software development, and he hasn’t gone off into some sort of St. IGNUcius weird place like Richard Stallman, either. But on the other, does it really take that much generosity of spirit to admit fault and accept a patch? If Linus can learn to be less of a jerk on email, then maybe a cryptographer can learn to accept bug reports for the helpful things that they are.

                                                                                          1. 6

                                                                                            djb’s personality is the worst thing about djb’s software.

                                                                                          1. 1

                                                                                            Also interesting: JEP 369: Migrate to GitHub. Nice that one of the goals is “ensure that OpenJDK Community can always move to a different source-code hosting provider”.

                                                                                            1. 5

                                                                                               Oh, that’s even more interesting. Too bad that heavy use of the GitHub API can quickly result in vendor lock-in where you cannot migrate easily. I don’t have any complaints about the API itself, but it’s sad that even though we have a distributed version control system, it cannot be used without a collection of proprietary services.

                                                                                              1. 4

                                                                                                 Lock-in is a given no matter what, though. The moment you go further than plain git hosting, you take on dependencies which will cause you trouble.

                                                                                                Even with a self-hosted solution you will eventually get into trouble keeping up to date as OSes go out of support and/or security patches dry up.

                                                                                                 As long as you have a valid migration plan up your sleeve, GitHub is as good or as bad as any other third-party solution, and arguably so much more convenient and time-saving than a first-party solution that the trade-offs are still worth it.

                                                                                                 You’d rather have a PR review system that works well, is available now and is known to many users than spend months of development to end up with an inferior solution that lacks user engagement and ties up resources needed elsewhere - a solution you might have to throw away anyway a few years down the road because the platform you chose went out of support or no longer runs on supported OSes.

                                                                                                 Yes, you don’t have control over GitHub’s roadmap. But compare the self-hosted landscape of 2007, when GitHub launched, to what is best practice and available now, 13 years later. Then compare the effort you would have gone through to keep up with those dependencies to what it would have taken to keep up with GitHub.

                                                                                                 GitHub is the more stable platform. And they have a lot of incentive to keep it that way.

                                                                                                 Yes, I would love it if free and open platforms were as feature-ful, as accepted by users, and as easy to maintain, but they aren’t, so at least for now the positives for the project, and thus for its users and developers, do outweigh the drawbacks. And with every passing year, the trust put in GitHub by its users only increases.

                                                                                                1. 2

                                                                                                  Thank you for your insight Glenn!

                                                                                                2. 4

                                                                                                  If only it were legal to reimplement an API…

                                                                                                  1. 3

                                                                                                    @ianloic I chuckled loudly when I read this. 😂

                                                                                              1. 4

                                                                                                 Gtk+ has great support for bindings, so it’s good in any language. Python and Rust, for example, are very pleasant to use.

                                                                                                 But I’m also fond of HTML+JS+CSS running on a localhost web server. Jupyter is a great example of this. Can you imagine trying to build a UI that rich with any other tools?

                                                                                                1. 8

                                                                                                  Yes please. Track compile time and memory usage. Revert offending commits. Don’t let them bloat LLVM.

                                                                                                  I do remember how fast clang was.

                                                                                                  1. 8

                                                                                                    By “bloat LLVM” do you mean “add optimization passes to make the generated code faster”?

                                                                                                    1. 7

                                                                                                      You might have overlooked the table with different optimization levels and their effect on compilation time.

                                                                                                       I’ve been following clang for a while and I assure you: the generated code hasn’t gotten that much faster in return for the exponential growth in CPU time and RAM usage during compilation, even though several hardware generations should have made compilation faster.

                                                                                                      1. 4

                                                                                                        The blog post explicitly calls out that some of the optimisations causing these slowdowns are of marginal benefit.