Threads for imode

  1. 1

    My problem with the SemWeb is all of the above + the lack of executable networks, mostly as a bench test of the formats, query methods and methodology involved.

    What’s stopping us from encoding and executing small programs in RDF? What about intelligent agents, encoded and executed as an RDF subgraph, crawling the larger “internet” with their own intentions or ones driven by their dispatching users? (There are a few examples of RDF-based VMs, but that’s another conversation).

    From General Magic and Telescript to RDF and its pile of infrastructure, I don’t think anybody really fixated on what it means to put a bunch of computers together and have them share and mine knowledge. You had that in silos, like Google and Facebook, but never generally.

    1. 1

The way I am approaching it is to make a name system using reproducible builds; the data you exchange can be executed in the named (i.e. referenced) context. The name system is decentralized, so the peer needs to pick the most likely to be true definition of a name … which means that everything is contextual and shared understanding is emergent. The complicated part is building a currency system on that shared understanding so that the most likely to be true definition is also the most valuable one, but I think this is emergent from avoiding a central currency in the first place. In the end, I think the ‘web’ part of ‘semantic web’ gets people confused… it’s just layers of code and data.

    1. 14

      In which we discover that the overhead of C++ I/O streams is larger than a call to printf.

      No surprise here. Apples-to-apples would be the same calls, this post isn’t worth discussing.

      1. 3

        Agreed. C++ iostream is a badly designed, inefficient API. I sometimes use it because it’s often nicely idiomatic, but when IO is on the critical path it’s too slow to use and I fall back to the C stream APIs or even system calls.

        Plus, a microbenchmark that measures just startup time plus one write? Clickbait.

(I’ve really got to wean myself off of C++ one of these years. I want to be able to have nice things. I just never want to pay my own performance penalty of having to get up to speed in a new language, and to figure out how to bridge to my existing code…)

        1. 2

          No surprise here.

          I’m always a little surprised. This post is really saying that loading code takes time, so the less you have, the faster your program launches. But the surprise is why C++ coding primitives end up with so much more code. In theory, printf is terrible for code size, because it doesn’t know until run time which capabilities it needs, so it needs to include code for every conceivable format conversion. In theory, C++ I/O streams are type aware and can only include things specific for a single use case.

In practice, it consistently breaks the other way. A full -static build of both results in 820 KB for printf and 2.2 MB for C++ I/O streams. Both of these seem shocking in terms of the inability to remove (presumably) unreachable code, although the C++ case should give the compiler more explicit hints about what’s unreachable.
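For concreteness, here’s a sketch of the kind of pair being compared (my own hypothetical reconstruction, not the article’s exact sources or flags):

```
// hello.cpp - hypothetical minimal test program; build it both ways and
// compare the resulting binary sizes:
//   c++ -O2 -static -DUSE_IOSTREAM hello.cpp -o hello_iostream
//   c++ -O2 -static                hello.cpp -o hello_printf
//   size hello_printf hello_iostream
#include <cstdio>
#ifdef USE_IOSTREAM
#include <iostream>
#endif

int main() {
#ifdef USE_IOSTREAM
    std::cout << "Hello, world!\n";   // pulls in the iostream machinery
#else
    std::printf("Hello, world!\n");   // the plain stdio path
#endif
}
```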

          1. 3

            As I understand it, this isn’t about code loading, this is about the actual runtime cost of C++’s iostreams.

See this comparison of the actual compiled assembly. Notice all of the extra code.

            This is clickbait, plain and simple. There is nothing worth discussing.

            1. 1

              this is about the actual runtime cost of C++’s iostreams.

I don’t think it is. The article mentions a 1.4ms time for dynamic loading vs 0.7ms for static linkage, for instance. That’s not about the instructions executed to invoke iostreams. Godbolt appears to be showing the result of the compilation unit, meaning I don’t think it displays any difference for -static. It seems very unlikely to me that the 700 usec is spent executing additional instructions added as a result of dynamic linkage (that’s a lot!). But loading a chain of shared objects, relocating them and resolving imports? That seems much more substantial than pushing a string literal on the stack and calling a function.

              What I got from the godbolt link is the compiler is smart enough to replace printf with puts, which eliminates the code I was referring to - both versions are effectively type specialized.

              1. 1

                I still think it’s not even worth a discussion; dynamic loading affects more than just C++. The more “important” difference is the cost of executing those extra call and comparison instructions related to the use of iostreams.

                ..also, was this article edited? Dynamic linkage was 1.4ms, now it’s 0.8ms? What is the point of updating these metrics and still staying with the same strawman?

                1. 3

                  dynamic loading affects more than just C++

                  Agree completely. My biggest complaint with this article is that libc is dynamically linked in all cases, so it’s measuring the dynamic load time of C and C++ against the dynamic load time of C without C++. A better comparison is the one @c-cube did below of creating both as static, so the linker is free to find what is used and discard the rest, then measure the cost of the code generated for the test in isolation.

                  Some might argue that libc should be retained because it’s the de facto standard syscall layer on Linux, but note this argument implicitly favors one language by ensuring all of its runtime is required to be loaded, and any other language has to load something additional.

                  The more “important” difference is the cost of executing those extra call and comparison instructions related to the use of iostreams.

                  If we assume that there’s zero cost of loading a large static binary, the cost of the extra calls (over C) is 300 usec; the cost of dynamic loading is 600 usec. In reality the difference will be larger, because loading a large static binary has a cost. Whether it’s important or not is a little subjective (based on how an individual prioritizes short lived processes), but it’s definitely a larger cost than extra instructions in the context of a program that can execute in 500 usec.

                  ..also, was this article edited? Dynamic linkage was 1.4ms, now it’s 0.8ms?

                  It looks like something was added to it. I’m fairly sure dynamic linkage was always 1.4 and static was 0.8, and I misquoted these numbers above to suggest a 700 usec difference which should have been 600 usec. (My bad.)

                  The new parts at the end probably do a better job of quantifying the overhead of the extra C++ call instructions at 10k cycles out of 60k (16%), avoiding conflating dynamic library loading with the language.

          2. 1

            I wouldn’t say it’s not worth discussing, but I’d have liked to see more actual discussion of why in the post.

            This post could have been an interesting deep dive into what exactly I/O streams do that makes them slower, and why they were designed that way!

            That’s coming from my perspective as a C/C++ idiot, and I’m sure many people know why. But while I have a basic idea, I’d have been interested in a post with more details (maybe some assembly code).

            1. 4

              There have been a lot of such articles written. This is a particularly evil example, because the C version actually gets compiled to a puts call, because the compiler knows the semantics of printf and, when called with a constant format string, is able to convert it into something that bypasses the formatting. In contrast, the C++ version gets the buffer management inlined and still goes via the generic formatting layer.
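To make that concrete, here’s a hypothetical snippet (not the article’s code); mainstream compilers do this rewrite when optimising, because the constant format string contains no conversions:

```
#include <cstdio>

int main() {
    // What the source says:
    printf("Hello, world!\n");
    // What GCC/Clang typically emit instead:
    //   puts("Hello, world!");
    // so the benchmark never actually exercises printf's formatting code.
}
```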

              It’s a bit more interesting if you add some real formatting. If you try to print numbers in a loop, the performance goes:

              • fmt::print is fastest because it ends up being specialised on the format string and so doesn’t need any dynamic dispatch.
              • printf is in the middle. It’s having to parse the format string and write the arguments with some exciting variadic magic, but it’s not too bad.
              • std::cout<< is the slowest because it’s doing an insane number of things.

              A moderate chunk of the slowness of std::cout comes from the fact that it also synchronises with C. In C, stdout is a FILE*, not a raw file descriptor. When you call printf, this is a quick tail call to fprintf(stdout, ...). This locks the stdout FILE* and uses its internal buffer to build the output (writing it if it gets full) and then unlocks the FILE*. In C++, you end up needing to lock and unlock the corresponding FILE* on each operator<< (which is stupid, from the C++ side, because it doesn’t prevent interleaving in the output between two subsequent calls like this). Often the C++ ostream has its own locking and buffering and so ends up needing to lock the C FILE, flush its buffer, lock the C++ ostream, collect things in the C++ ostream’s buffer, write that to the underlying file descriptor, and then unlock the C++ ostream and the C FILE and return. This is very slow.

In exchange for all of this locking, you still get the problem that std::cout << “hello ” << someInt << std::endl can end up with some other output interleaved between the string, integer, and newline parts. The libfmt fmt::print call doesn’t have this problem, is type safe, and is faster than printf. Since libstdc++ still doesn’t have a C++20 std::format implementation, fmt::print is the easiest way to get cross-platform formatted output in C++ and also the fastest.
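If anyone wants to reproduce the ordering, here is a rough sketch of that loop comparison (it assumes {fmt} is installed and visible as fmt/core.h; the absolute numbers will obviously vary by machine and standard library):

```
#include <chrono>
#include <cstdio>
#include <iostream>
#include <fmt/core.h>

// Time one formatting path over the same workload.
template <typename F>
static long long run_ms(F f) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 1'000'000; ++i) f(i);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
}

int main() {
    // std::ios::sync_with_stdio(false);  // drops some of the C FILE*
    // synchronisation described above, at the cost of interleaving with stdio.

    auto fmt_ms    = run_ms([](int i) { fmt::print("{}\n", i); });
    auto printf_ms = run_ms([](int i) { std::printf("%d\n", i); });
    auto cout_ms   = run_ms([](int i) { std::cout << i << '\n'; });

    std::fprintf(stderr, "fmt: %lldms  printf: %lldms  cout: %lldms\n",
                 fmt_ms, printf_ms, cout_ms);
}
```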

          1. 1

            Working on a simulator for my hardware/software model.

            1. 35

              🦞 Here’s to another decade! 🦞

              1. 2

Social media was an incentive to start killing Web 2.0, but it wasn’t the only thing that killed it. Interactive, high-quality media delivered by Flash and similar methods was a contributor. When the only thing you have to observe is someone else’s content, it has to be either entertaining or socially relevant. Social media nailed the latter; it didn’t nail the former (though Faceboo-cough sorry, Meta is certainly trying).

                Newgrounds and similar sites had an explosion that lasted well beyond Web 2.0’s “expiry” date because there wasn’t a concerted effort to make something that was just socially rewarding: anybody can create things, and the simplest thing you could create was an animation, which was attractive to anybody that opened a paint program for fun.

                The browser’s default state is a document viewer. That’s what it does. It does it really well. It’s also far from being what anybody wants out of “the web” as a place to “be” and create on. It’s no surprise why Flash took off.

                If a hypothetical “Web 3.0” ever exists, it’s not going to take the form of a document viewer that we need to continue bolting interactive capabilities on to. It’s going to be a collaborative social free-form programming platform with incredibly easy entry and decentralized hosting over a shared compute pool.

                …Not to give anybody business ideas… cough

                1. 1

                  Of course Amazon needed a Copilot competitor…

                  1. 3

                    I haven’t tried their example with Copilot, but as written their blog post is a great advert for not using their product. The isPrime implementation that it generates is spectacularly bad. When I was 10, I wrote a much better one in QBASIC, so this is badness on the level of handing your keyboard to a small child and asking them to act as an autocomplete function.

It tries to determine if a number is prime by attempting to divide it by every single number between 2 and the number itself and checking that none of the remainders are 0. If that’s the approach that you want to take (rather than building a Sieve of Eratosthenes, which I learned about in school aged 8, or something more clever) then there are two trivial things that you can do to improve performance:

                    • Don’t test every number. Test only odd numbers. If the number is not divisible by 2, it is not divisible by any multiple of 2. This observation is the thing that leads to a Sieve of Eratosthenes, but you rapidly hit diminishing returns. Skipping even numbers halves your search space. Subsequently skipping multiples of 3 is less of a win, then skipping multiples of 5 is less, and so on.
• Stop at sqrt(n) (rounded up). If A % (sqrt(A) + M) == 0 then A / (sqrt(A) + M) < sqrt(A), so you would already have found that smaller factor earlier in the search. This dramatically reduces your search space.

                    Just applying a couple of small tweaks to the code will give something vastly faster than the original. Again, these are things that I did in the QBASIC implementation that I wrote when I was 10, using only the mathematical knowledge that I’d gained from school at that age. This isn’t degree-level number theory, it’s just a few obvious hacks to aggressively prune the search space.
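A sketch of trial division with just those two tweaks applied (hypothetical code, not the CodeWhisperer output, which isn’t reproduced here):

```
#include <cstdint>

// Trial division with the two tweaks above: test 2 once, then only odd
// divisors, and stop at sqrt(n). Any factor larger than sqrt(n) pairs with
// a cofactor smaller than sqrt(n) that would already have been found.
bool is_prime(std::uint64_t n) {
    if (n < 2)      return false;
    if (n < 4)      return true;    // 2 and 3
    if (n % 2 == 0) return false;
    for (std::uint64_t d = 3; d * d <= n; d += 2) {
        if (n % d == 0) return false;
    }
    return true;
}
```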

                    I guess if you’re a company that makes money from selling compute time, you’ve got a good incentive to encourage people to write inefficient code, so maybe they view that as a feature, not a bug.

The outer loop is much worse because, if you actually want to compute all prime numbers from 0-100, then you really want a Sieve of Eratosthenes. It reduces the algorithmic complexity of the problem, whereas my previous hacks just reduce the constant factors.
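And a sketch of the sieve for the “all prime numbers from 0-100” case described above (again hypothetical, not the generated code):

```
#include <cstdio>
#include <vector>

// Sieve of Eratosthenes: instead of re-testing every candidate by division,
// cross off the multiples of each prime as it is found.
std::vector<int> primes_up_to(int limit) {
    std::vector<bool> composite(limit + 1, false);
    std::vector<int> primes;
    for (int p = 2; p <= limit; ++p) {
        if (composite[p]) continue;
        primes.push_back(p);
        for (int m = p * p; m <= limit; m += p) composite[m] = true;
    }
    return primes;
}

int main() {
    for (int p : primes_up_to(100)) std::printf("%d\n", p);
}
```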

                    This is a problem that can be solved efficiently with pre-teen mathematics, yet CodeWhisperer does it very badly. I dread to think how it would fare if you’re trying to solve a difficult problem.

                  1. 12

                    Incredibly useful piece of software for educational and personal use. Here’s to another 10 years.

                    1. 5

                      …and professional usage for communicating with peers on the subject of things like warning flags and optimizers…

                      A lovely tool!

                      1. 4

                        We’ve deployed a version of his code with the CHERI compilers which has been a fantastic tool for showing people how C constructs can be lowered with memory safety. I can’t decide if 10 years feels like longer or less long than I’d have guessed. It still feels like a new and exciting tool in some ways, but also like a tool that’s so useful I find it hard to remember how we worked when we didn’t have it.

                    1. 2

                      Really cool, now do it without hardware-supported atomics! ;)

                      1. 14

                        I’m very curious how these companies address the fact that there are countries where smartphones are not universally owned (because of cost, or lack of physical security for personal belongings).

                        1. 8

                          At least Microsoft has multiple paths for 2FA - an app, or a text sent to a number. It’s hard to imagine them going all in on “just” FIDO.

                          Now, as to whether companies should support these people - from a purely money-making perspective, if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

                          A bigger issue is if public services are tied to something like this, but in that case, subsidizing smartphone use is an option.

                          1. 24

                            if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

I had a longer post typed out, and I don’t think you meant this at all, but at a certain point we need to stop thinking of people as simply customers and start recognizing that we’re taking over functions typically subsidized or heavily regulated by the government, like phones or mail. It was not that long ago that you could share a phone line (telcos were heavily regulated) with family members or friends when looking for a job or waiting to be contacted about something. Or pay bills using the heavily subsidized USPS. Or grab a paper to go through the classifieds to find a job.

                            Now you need LinkedIn/Indeed, an email address, Internet, your own smartphone, etc. to do anything from paying bills to getting a job. So sure if you’re making a throwaway clickbait game you probably don’t need to care about this.

But even on this very website: do we want someone who is not doing so well financially to be deprived of keeping up with news in their industry, or someone too young to have a cellphone to be kept from participating? I don’t think it is a god-given right, but the more people are denied access to things you or I have access to, the greater the divide becomes. Think of someone who has a laptop but no Internet, and only the ability to borrow a neighbor’s wifi. Similarly, a family of four might not have a cell phone for every family member.

                            I could go on but like discrimination or dealing with people of various disabilities it is something that’s really easy to forget.

                            1. 15

                              I should have been clearer. The statement was a rhetorical statement of opinion, not an endorsement.

                              Viewing users as customers excludes a huge number of people, not just those too poor to have a computer/smartphone, but also people with disabilities who are simply too few to economically cater to. That’s why governments need to step in with laws and regulations to ensure equal access.

                              1. 11

                                I think governments often think about this kind of accessibility requirement exactly the wrong way around. Ten or so years ago, I looked at the costs that were being passed onto businesses and community groups to make building wheelchair accessible. It was significantly less than the cost of buying everyone with limited mobility a motorised wheelchair capable of climbing stairs, even including the fact that those were barely out of prototype and had a cost that reflected the need to recoup the R&D investment. If the money spent on wheelchair ramps had been invested in a mix of R&D and purchasing of external prosthetics, we would have spent the same amount and the folks currently in wheelchairs would be fighting crime in their robot exoskeletons. Well, maybe not the last bit.

                                Similarly, the wholesale cost of a device capable of acting as a U2F device is <$5. The wholesale cost of a smartphone capable of running banking apps is around $20-30 in bulk. The cost for a government to provide one to everyone in a country is likely to be less than the cost of making sure that government services are accessible by people without such a device, let alone the cost to all businesses wanting to operate in the country.

                                TL;DR: Raising people above the poverty line is often cheaper than ensuring that things are usable by people below it.

                                1. 12

                                  Wheelchair ramps help others than those in wheelchairs - people pushing prams/strollers, movers, emergency responders, people using Zimmer frames… as the population ages (in developed countries) they will only become more relevant.

                                  That said, I fully support the development of powered exoskeletons to all who need or want them.

                                  1. 8

The biggest and most expensive problem around wheelchairs is not ramps, it’s turning space and door sizes. A wheelchair is wider than a standing human (especially the battery-driven ones you are referring to) and needs more space to turn around. Older buildings often have pathways and doors that are too narrow.

                                    Second, all wheelchairs and exoskeletons here would need to be custom, making them inappropriate for short term disability or smaller issues like walking problems that only need crutches. All that while changing the building (or building it right in the first place) is as close to a one-size-fits-all solution as it gets.

                                    1. 5

                                      I would love it if the government would buy me a robo-stroller, but until then, I would settle for consistent curb cuts on the sidewalks near my house. At this point, I know where the curb cuts are and are not, but it’s a pain to have to know which streets I can or can’t go down easily.

                                    2. 7

                                      That’s a good point, though I think there are other, non-monetary concerns that may need to be taken into account as well. Taking smartphones for example, even if given out free by the government, some people might not be real keen on being effectively forced to own a device that reports their every move to who-knows-how-many advertisers, data brokers, etc. Sure, ideally we’d solve that problem with some appropriate regulations too, but that’s of course its own whole giant can of worms…

                                      1. 2

                                        The US government will already buy a low cost cellphone for you. One showed up at my house due to some mistake in shipping address. I tried to send it back, but couldn’t figure out how. It was an ancient Android phone that couldn’t do modern TLS, so it was basically only usable for calls and texting.

                                        1. 2

Jokes aside - it is basically a requirement in a certain country I am from; if you get infected by Covid you get processed by the system and outdoor cameras monitor you so you don’t go outside, but to be completely sure you’re staying at home during recovery it is mandatory to install a government-issued application on your cellphone/tablet that tracks your movement. Officials also check up on you several times per day at random hours, with video calls in said app, to verify your location.

If you fail to respond in time or geolocation shows you left your apartment, you’ll automatically get a hefty fine.

Now, you may say, it is possible to just tell them “I don’t own a smartphone” - then you’ll get a cheap but working government-issued Android tablet, or at least you’re supposed to; as with lots of other things, “the severity of the laws is compensated by their optionality”, so quite often the devices don’t get delivered at all.

By law you cannot decline the device - you’ll get fined, or they promise to bring you to the hospital as a mandatory measure.

                                      2. 7

Thank you very much for this comment. I live in a country where “it is expected” to have a smartphone. The government is making everything into apps which are only available on the Apple App Store or Google Play. Since I am on social welfare I cannot afford a new smartphone every 3-5 years, and old ones are not supported either by the app stores or by the apps themselves.

                                        I have a feeling of being pushed out by society due to my lack of money. Thus I can relate to people in similar positions (larger families with low incomes etc.).

                                        I would really like more people to consider that not everybody has access to new smartphones or even a computer at home.

                                        I believe the Internet should be for everyone not just people who are doing well.

                                    3. 6

                                      If you don’t own a smartphone, why would you own a computer? Computers are optional supplements to phones. Phones are the essential technology. Yes, there are weirdos like us who may choose to own a computer but not a smartphone for ideological reasons, but that’s a deliberate choice, not an economic one.

                                      1. 7

                                        In the U.S., there are public libraries where one can use a computer. In China, cheap internet cafés are common. If computer-providing places like these are available to non-smartphone-users, that could justify services building support for computer users.

                                        1. 1

In my experience growing up in a low income part of the US, most people there now only have smartphones. Most folks there only use laptops in office or school settings, which remains a difficulty for those going to college or getting office jobs. It was the same when I was growing up there, except there were no smartphones, so folks had flip phones. Parents often try to save up to buy their children nice smartphones.

                                          I can’t say this is true across the US, but for where I grew up at least it is.

                                          1. 1

                                            That’s a good point, although it’s my understanding that in China you need some kind of government ID to log into the computers. Seems like the government ID could be made to work as a FIDO key.

                                            Part of the reason a lot of people don’t have a computer nowadays is that if you really, really need to use one to do something, you can go to the library to do it. I wonder though if the library will need to start offering smartphone loans next.

                                          2. 5

                                            How are phones the “essential technology”? A flip phone is 100% acceptable these days if you just have a computer. There is nothing about a smartphone that’s required to exist, let alone survive.

A computer, on the other hand (which a smart phone is a poor approximation of), is borderline required to access crucial services outside of phone calls and direct visits. “Essential technology” is not a smartphone.

                                            1. 2

                                              There’s very little I can only do on a computer (outside work) that I can’t do on a phone. IRC and image editing, basically. Also editing blog posts because I do that in the shell.

                                              I am comfortable travelling to foreign lands with only a phone, and relying on it for maps, calls, hotel reservations, reading books, listening to music…

                                              1. 1

Flip phones were all phased out years ago. I have friends who deliberately use flip phones. It is very difficult to do unless you are ideologically committed to it.

                                              2. 3

I’m curious about your region/job/living situation, and what about it is making phones “the essential technology”? I barely need a phone to begin with, not to mention a smartphone. It’s really only good as car navigation and an alarm clock to me.

                                                1. 1

People need other people to live. Most other people communicate via phone.

                                                  1. 1

                                                    It’s hardly “via phone” if it’s Signal/Telegram/FB/WhatsApp or some other flavor of the week instant messenger. You can communicate with them on your PC just as well.

                                                    1. 4

                                                      I mean I guess so? I’m describing how low income people in the US actually live, not judging whether it makes sense. Maybe they should all buy used Chromebooks and leech Wi-Fi from coffee shops. But they don’t. They have cheap smartphones and prepaid cards.

                                                      1. 2

You cannot connect to WhatsApp via the web interface without a smartphone running the WhatsApp app, and Signal (which does not have this limitation) requires a smartphone as the primary key, with the desktop app only acting as a subkey. I think Telegram also requires a smartphone app for initial provisioning.

                                                        I think an Android Emulator might be enough, if you can manually relay the SMS code from a flip phone, maybe.

                                                  2. 2

Your reasoning is logical if you’re presented with a budget and asked what to buy. But purchasing does not happen in a vacuum. You may inherit a laptop, borrow a laptop, no longer afford a month-to-month cell phone bill, etc. Laptops also have a much longer life cycle than phones.

                                                    1. 4

                                                      I’m not arguing that this is good, bad, or whatever. It’s just a fact that in the USA today if you are a low income person, you have a smartphone and not a personal computer.

                                                1. 1

                                                  Didn’t Gates once say that / was the stupidest idea after the other stupid idea of using \ for folder paths?

                                                  1. 1

                                                    Stupid in what way?

                                                      1. 3

                                                        You are generally unkind and this kind of behavior is not welcome, appreciated, invited or tolerated on this site.

                                                        Take the sass elsewhere.

                                                  1. 3

                                                    Taking a week off for the first time in six months, digging into OCaml.

                                                    1. 31

                                                      I used to care a lot more about “adoption” and “usefulness” (which is a funny thing to say, I know), but there are so many languages out there and so many ways to do things that at this point, I’m much more interested in seeing what people build for themselves and for their specific niche/needs.

                                                      I really like this article, and I hope Hare accomplishes what Drew hopes it accomplishes.

                                                      1. 8

                                                        There’s something to be said about a constrained, finite design space. The stars are too vast to build up to.

                                                      1. 13

                                                        I don’t feel this is on-topic for this site. Taking the discussion here is effectively leveraging Lobste.rs-as-Twitter, which ideally, it shouldn’t be.

                                                        1. 0

                                                          Unrelated, the article’s dark mode switcher is an irritating misleading eyesore, much like the color scheme.

                                                          1. 3

I don’t see how this article can reach its conclusions. The intent is a general conclusion about the state of the industry for the variables in question, when what the data supports is a conclusion about Google’s state, possibly narrowed to a particular part of Google (org, team, etc.) that they could have focused on with some depth. Google is not every company, and Google is also made up of multiple parts.

                                                            The only thing this has done is signal that the particular reviewers and particular team/org/whatever structural component of Google has some biases, which is Google’s problem (and everyone else’s who looks at this and says “wow, that seems like my team”). I’m not saying that this research is bad, unfounded or wrong, but that you need a larger cross-section of the biases from multiple companies in order to form a proper conclusion. We need more data, more studies, and the methodology to deal with all of that.

                                                            1. 9

                                                              Blatant self-promotion is generally discouraged here for new accounts.

                                                              1. 7

                                                                That’s somewhat hilarious, yet depressing.

In 1999, on a Pentium MMX 100 with 32MB of RAM, you could easily start up Netscape (or IE, or Mozilla), go to one of many sites aggregating online games, pick almost anything, wait a while, and get a very nice, colorful, engaging, memorable and smooth gameplay experience on almost every hit. It worked flawlessly on every platform with support for Flash, no matter which browser, OS, environment or whatever else you had.

Right now, in 2022, using oh-so-modern stuff like “html5 canvas”, “audio context” and whatever else you pull out to make JS pretend to be serious business… you get this. And if you add anything more complex, your $3000 Macbook Pro will overheat sooner or later, and the gameplay won’t come anywhere near the smoothness of a Flash game running on a silly PMMX with Windows 98.

What went wrong? Where did we make that mistake? How do we get back?

                                                                1. 13

                                                                  I’m confused by this comment.

I’m on a Thinkpad T430, a computer from June 2012, nearly 10-year-old hardware with integrated graphics, and this runs just fine and is about as smooth as you could get. The backing game logic just uses basic JavaScript, and uses AudioContext to generate music and sound effects on the fly.

                                                                  This doesn’t even raise my load average above background. If you think this is choppy, try altering PLAYER_MOVE_STEP to a lower value, along with similar variables for the ships. There’s nothing wrong with this game, or this technology, from a performance standpoint, only the perception of choppiness due to how far ships and the player move in a frame (10px on a scaled canvas).

                                                                  Commenters older than myself can probably comment on the Flash situation in 1999, but as I recall, Flash on older hardware was and is (if you have the misfortune of using it) a mess. This piece of programming isn’t “serious business”, and I’m wondering what qualifies as “serious business” outside of the browser-native game engines that are common (Phaser comes to mind).

                                                                  All in all, I think you’ve set up a strawman to beat. There’s nothing wrong here.

                                                                  1. 5

I am old enough to remember the Flash situation in 1999, and to have tried, at the time, to make a game in it not unlike this one.

Flash was a buggy mess with bad performance and an even worse security model. HTML5 was (and is!) an improvement, but it is also more or less the same.

Poorly programmed stuff brings, and always has brought, every system to its knees … Frantic drive clicking has been replaced with (on balance, quieter) fan noise.

                                                                    Of course, you can find people who miss that comfortable banging of a spinning disk drive trying to keep up with some flash load or another (which, by the way, could take 10-20 minutes to pull off the network).

And then, of course, back in 1999, you would find the people who missed keying in their own games in BASIC! Same as it ever was.

                                                                  2. 5

I don’t think a Pentium MMX would run Flash very well. Also, Flash was programmed in ActionScript, which is basically JavaScript with a different stdlib. I think you might be seeing the past through somewhat rosy glasses…

                                                                    1. 3

                                                                      Couple of thoughts. Not sure if any of them are true, but these reflect the impression I am getting.

                                                                      Technical reasons

How we make software has dramatically changed. Everything is a framework now, and there are massive amounts of “boilerplate” needed for many of these frameworks.

One has to glue things together; everything has gone “bazaar” style. There is not much tightly integrated software, despite all these frameworks.

There are way too many competing options. With Adobe and Flash it was the default in its category (much like Photoshop nowadays). You chose it because you wanted to create an interactive app in the browser. Both the developer and the customers knew what the goal was. With all these options they each find their niches, which is another thing that changed. Everything needs to have its niche, and people think about stuff like USPs maybe a bit too much, while Flash was more of an all-rounder.

There are more developers, throwing out everything they do because they learned “release early, release often”, even if they just spent a day on it. Maybe it lands them a job. So interesting projects tend to be buried. It’s just A LOT harder to find them. But if you go to websites that still exist where you’d previously find Flash games, you still find new HTML 5 projects that are also more interesting. They just might not be findable on GitHub, lobste.rs, HN, etc.

With lots of people developing lots of things going in many directions, just throwing stuff out there and never learning to care about even rudimentary performance, just bending things so they work, one often comes across code that checks all the boxes for complicated, hard to read, unmaintainable and slow. I think it’s easier to avoid if you have that one package, where you don’t glue things together.

And I say that as someone who actually thinks it’s a good idea to glue things together, but one should do so with some focus and an understanding of what one is doing, instead of copying random example, tutorial, or Stack Overflow code and bending it when it’s not even “the right way” to approach a problem.

                                                                      Non-technical reasons

It’s still there, just way better hidden, just like with many things. Everything yells for attention, so it’s also harder to find stuff. A great example is these “curated” “awesome” lists, which often link to dead, unfinished or simply bad projects, because people will just add things they programmed or came across without curation, because who wants to show an empty list?

So one needs to take some time to find them behind walls of unfinished, bad and simply loud projects. I think a large part of it is really that, compared to 1999, the internet today went from a semi-nerd place to mainstream. People that would have given you a weird look back then for spending hours in front of the computer talking to total strangers are nowadays always connected, get impatient if you don’t respond to an email within half an hour, have their Facebook, Instagram, LinkedIn, etc. accounts, and flood the web with what’s less interesting or only interesting to them. Which probably is why echo-chambers are almost a requirement. Imagine if social networks just threw everything at you.

Of course, for group efforts that means it might be less likely for these undiscovered interesting projects to accumulate contributors, especially if there is no or not much money behind them.

                                                                      1. 1

                                                                        Flash included its own version of JS (“ActionScript”), so there’s more than bad language choice affecting this.

                                                                      1. 33

                                                                        I find it so interesting thinking about the way that programming languages can reflect natural language with dialects.

                                                                        I feel like you could also think about writing with a particular ‘accent’ when you are writing code that isn’t idiomatic to the language you are using. For example, if you came from Go to Ruby you might start writing a bunch of for loops, which is valid but not typically how Ruby is written. You are speaking Ruby, with a Go accent.

                                                                        1. 14

                                                                          The world has seen so much bad Fortran code that the name of the language is now a synonym for bad coding. Many of us have never seen real Fortran code, but we know what coders mean when they say, “You can write Fortran in any language.”

                                                                          How Not to Write Fortran in Any Language

                                                                          1. 8

                                                                            This has been my exact experience with Ruby. I started using it within Amazon, where they utilize live pipeline templates (LPTs) that are packages built out with layers and layers of monkey-patched Ruby, and these in turn spit out generated build artifacts, e.g. Cloudformation templates.

                                                                            Now my current role makes use of a Rails monolith and even after some time, the differences are still jarring and I’m still trying to rid myself of the muscle memory from my AWS experience, speaking that LPT dialect of Ruby, as you put it.

                                                                            1. 4

                                                                              Having maintained LPTs, Octane templates, etc., you’re exactly correct. Horrifying kludge, but we’re steadily replacing it with.. TypeScript, Java/Kotlin and Python. ;)

                                                                            2. 6

                                                                              programming languages can reflect natural language

                                                                              I think a lot about how Perl was designed by a linguist, and it shows.

                                                                              1. 4

                                                                                In a good way, or in a bad way?

                                                                                1. 8

                                                                                  Yes.

(In all seriousness, I suspect a bad way when it comes to actually implementing it; does a mostly compatible independent implementation of Perl 5 even exist? But it is fascinating, less from a PLT angle and more from the “you can phrase it like that?” angle you see in NLP…)

                                                                              2. 4

                                                                                A lot of programming languages have constructs that “infect” the code base. Once they’re in use, you have to keep using them. Some I can think of:

                                                                                • async in Rust
                                                                                • Rc vs owned in Rust
                                                                                • null vs Optional in Java
                                                                                • FP vs OOP style in Scala
                                                                                • Akka in Scala
                                                                                • naming schemes (tends to be a problem in older languages like Python or C++)
                                                                                1. 5

I’m confused about the mention of Rc here? Rcs are owned. What I think is an infecting problem is that you can’t be generic over Arc vs Rc.

                                                                              1. 7

                                                                                As part of the incident response process, we quickly discovered that the client was hanging inside a network request to one of the Firefox internal services.

                                                                                Maybe someone can explain to me why I use a browser on my desktop to visit a third party, yet some of my traffic goes through Mozilla, and worse, it’s blocking. Is this true of Chrome/Google and Edge/Microsoft as well? Why does the web, a distributed system, have such a single point of failure?

                                                                                1. 18

First of all, your traffic to a website isn’t going through Firefox internal services. Firefox needs to talk to a variety of services to function, as is explained in the blog post. Secondly, and this is also explained in the blog post, network connections are complicated. If a thread keeps reading (an infinite loop) because it got the Content-Length header wrong, then you’ll have a bad time.

                                                                                  1. 10

                                                                                    Why does Firefox need to talk to a variety of services to function? It is a web browser. The only thing it should be talking to is the web site I’m visiting.

                                                                                      1. 10

                                                                                        While I appreciate the article, I’m questioning three things.

                                                                                        1. Why are these features required for Firefox to function? None of these are critical to the point of being a web browser.

                                                                                        2. Why are these enabled by default? Again, critical functionality is functionality that you need to remain functional. A web browser doesn’t need a majority of this to remain functional.

                                                                                        3. Why were these added in the first place? As a very long time Firefox user, a majority of these weren’t present when the browser launched or hit its popularity spike. It functioned perfectly fine and was a refreshing change from IE.

                                                                                        I don’t have reasonable answers for any of these, and I suspect they don’t exist.

                                                                                        1. 33

I hear that last sentence as: “I don’t work on this project, and yet I believe I understand it just as well as people who do; well enough to lecture them on it.”

                                                                                          I would say that, at a minimum, the security features are required for a browser to function. You can’t securely trust X.509 certificates without a means of knowing whether a cert has been revoked. And checking for updates is important for mitigating security problems soon after they’re discovered and fixed.

                                                                                          1. 7

                                                                                            I wouldn’t perceive a list of questions as a lecture, just that I don’t have answers as to why “Mozilla Content” and “Diagnostics” (more accurately, active telemetry), are required, enabled by default, and were added in the first place.

                                                                                            I can make some arguments for diagnostics (crash reports are helpful, for instance, provided they’re actually acted on). I can also agree that update checks are good. That does not qualify as a “majority of features”, and these features don’t exactly speak to the message of “Firefox needs to talk to a variety of services to function” if they can be disabled and the browser still functions fine.

                                                                                            If only the people who work on this project can ask questions about it, or disagree as to what’s required for basic functionality, then the project isn’t meant for external use. Firefox is clearly meant for external use. Shutting down conversation under the guise of “you don’t work on it, therefore you can’t comment on it” doesn’t do anybody any good, developers or users.

                                                                                            1. 21

                                                                                              Your conversation is getting shut down because your questions are thinly-veiled statements: these features shouldn’t be required to function, these features shouldn’t be enabled by default, these features shouldn’t have been added in the first place. The fact that you’re framing these statements as questions (“should they be required/enabled by default/have been added?”) doesn’t really change things, since you’re literally following up with “I have no idea, so I suspect not”.

                                                                                              1. 4

                                                                                                I’m following up with “I don’t know, I suspect not, but I’m open to being proven wrong”.

                                                                                                Asking questions and then providing your current opinions should not be framed as stating anything, but I can understand why it would be perceived that way.

                                                                                                1. 21

                                                                                                  We’ve had so many discussions about that. And in the end it all boils down to

                                                                                                  • yes mozilla needs auto update, that’s correct for 99% of the users, your distro may change that
                                                                                                  • yes mozilla may want to get user statistics, it can help a ton with decisions for hardware support and features
                                                                                                  • yes fetching a blacklist of known malicious websites and revoked certificates as well as the current trusted root certs is very reasonable and also valid for 99% of the userbase
                                                                                                  • yes you’re already trusting mozilla, whether it’s an auto update, a manual one or a first installation
                                                                                                  • yes you can ask again and again why that’s the case and tell us that this shouldn’t be necessary, instead of simply switching browsers / disabling these options when asked by firefox / disabling it via usersettings file / disabling it in about:config and moving on
                                                                                                  • yes some features (mozilla experiments) aren’t really required (and I dislike them), but you can also disable them
                                                                                                  • no it’s not on-topic to ask why they had stats enabled anyway, after receiving links to the why+how, this could have also happened with auto updates if not for the flag preventing http3 switching - it’s an http3 handling bug after all
                                                                                                  1. 1

                                                                                                    There are so many people that desperately want moz://a to be something it is never going to be because:

                                                                                                    • They market themselves as being it
                                                                                                    • There doesn’t seem to be any other contender

                                                                                                    At this point though it’s just delusion to argue, they’ve killed servo and doubled down on things like pocket.. Either you customize it carefully and study every update or you just stop using it and move to nyxt (or something else that is trying to represent your interests).

                                                                                                    1. 1

                                                                                                      I don’t need to study every update. Some stuff comes up here or on HN early enough and otherwise I simply disable the default search + experiments, that’s it.

                                                                                              2. 10

                                                                                                It’s a list of questions-followed-by-definitive-statements, like “ None of these are critical to the point of being a web browser.” To dig into just one of those: I’ve already pointed out that, if you think cert revocation is not a critical part of a web browser, you don’t know enough about security to be critiquing one.

                                                                                              3. 2

                                                                                                My browser being dependent on decisions made at Mozilla and software running on Mozilla-controlled servers in order to work is itself a security problem.

                                                                                                1. 10

                                                                                                  There will always be security problems. If you think Mozilla is a bigger security problem than the malware, phishing, compromised certs, etc. that many of these socket connections are there to help with, then we live in different realities.

                                                                                                  Can’t you just check out your own copy of Firefox or Chromium and turn off all the parts you object to?

                                                                                              4. 14

                                                                                                That’s your opinion and you’re entitled to an opinion and I’m not on the internet to convince you. My opinion is different and I’ll spend a brief amount of time to explain why:

I believe many, if not all, of those features ARE critical for web browsers. Fundamentally, your browser is not a document reader for hypertext. Firefox, like any other browser, is an application platform for multi-media apps like Netflix, YouTube, Google Maps and whatnot, whether you like it or not. For those to work well, you’ll need codecs, DRM, security updates and so on and so on. And I’m not even talking about PKI (which @snej pointed out beautifully. Thanks.).

                                                                                                1. 5

                                                                                                  If I may add: Firefox gained popularity by being “better than IE”. That’s great when there’s only one browser you need to compete with. Firefox is probably still better than IE, but, you know, the ecosystem has evolved :-)

                                                                                              5. 2

                                                                                                There’s a lot more going on there than I’d have guessed, and many of those services are very desirable to me, so thanks for providing them. Nevertheless, it seems like the browser should maybe timeout on them and try to come back later. Of course you probably thought all that through already.

                                                                                                1. 5

                                                                                                  It does in normal circumstances. That’s not the cause of the bug here; the cause was an infinite loop caused by a regular old bug.