1. 17

    Honestly, I don’t get it. Why does it matter what the text looks like as long as it’s satisfactory?

    1. 29

      Different people have different thresholds for “satisfactory”, I guess?

      1. 6

I don’t really buy this; it’s not satisfaction but habit. Sure, you realize there’s a difference when you change, but it’s not like your brain has changed. It’s just the inverse effect of upgrading and admiring what’s better – but after a while you get used to it. Just like you’re not inhibited by that initial admiration, you won’t be by the initial annoyance.

In the end, it’s not pixels you’re looking at; like art tells us, whatever we are looking at is in our head. And we’ve long passed the point where this kind of consumer scare tactic is necessary.

        1. 2

I don’t really buy this; it’s not satisfaction but habit. Sure, you realize there’s a difference when you change, but it’s not like your brain has changed.

          What is “habit”, if not your brain changing to optimize itself for a particular use case?

          1. 3

Fair enough; my point is that this change isn’t permanent, and all it takes for someone to forget what resolution their screen is is a week or two (except if it actually inhibits your work, of course).

          2. 1

            But what is satisfaction if not informed by habit?

          3. 1

            Something inexplicably obvious about it just doesn’t occur to me, it seems.

            1. 1

              …. which is fine! My wife can’t see the difference, either.

          4. 16

            After using retina and 4k displays for several years, when forced to use a 1080p, 96dpi monitor I find I no longer consider any text on it “satisfactory”. To me, it all looks painfully bad now that I’m accustomed to a sharper, higher quality experience. The eye strain after 8 hours of staring at fuzzy, low res fonts takes a real toll.

            But others would be happy with a super low-res vt100, I’m sure. Everybody’s satisfactory is different.

            1. 6

Doesn’t the vt100 use a bitmap font? That’s the actual, true solution for getting sharp fonts on a low-res display: just use bitmaps at the correct size.

              1. 4

                The original VT100 is quite low-res and fuzzy. Later VT terminals used higher-resolution screens which looked better.

                1. 4

                  There’s this fascinating story about how DEC used the fuzz to good effect in their bitmap font, as a primitive form of anti-aliasing. ‘Dot stretching’, phosphor response curves… well worth a quick read!

                  1. 2

                    This is wild. Thanks for the link!

                2. 2

                  Bitmap fonts will avoid loss of sharpness due to antialiasing, but they’re not going to make an extremely low resolution screen any less low res, so I don’t know that I’d call 5 pixels arranged in a vaguely “e”-shape exactly “sharp”.

                  1. 1

There are bitmap fonts which are much higher-res than 5 pixels per “e”. Check out stuff like atarist for alternatives.

                    1. 2

                      We’re talking about the vt100. You can have high resolution bitmap fonts, but you can’t fix a low resolution screen with a high res bitmap font.

                3. 6

                  This reads to me like advice to avoid 4K as long as possible. If there’s no significant quantitative difference in efficiency/eyestrain between 4K and 1080p, and I’m currently happy with 1080p, switching to 4K will only make it more unpleasant for me to use perfectly useful 1080p monitors, pushing me to needlessly purchase more expensive monitors to replace those that I already have, and increasing consumerism.

                  1. 2

                    You’re certainly free to stick with what you’re accustomed to. I have no compunctions about spending a lot of money to get the absolute most comfortable experience possible out of something I’m probably going to spend a year or more of my life, cumulatively, staring at. It’s one of the cheapest possible investments in the pleasantness of my career on a dollars-per-hour-used basis.

                    1. 3

                      Explained that way, I understand where you’re coming from. Even if there’s no objective benefit to upgrading your monitor, and you already feel “perfectly comfortable”, making work slightly more pleasant is desirable.

                      Now, you still need to make the decision as to whether the benefit gained from the monitor upgrade is worth the money you’re spending on it, but that’s much more personal. Thanks for sharing your perspective!

                    2. 2

The eyestrain is already there; you are just accustomed to it.

                      1. 1

                        Citation needed.

                  2. 2

I concur. To my eyes, text on a 4K display at 1.5x scaling looks better than at 2x scaling. I think the psychovisual system is complex and subjective enough to warrant “if you like it then it’s good”.

                    1. 1

Some people fetishize fonts, font rendering, font shapes, dithering, smoothing, and more such visual trickery. The author of this piece has published a programming font, so I assume he puts more weight on font-related things than the average font consumer. Other people have other fetishes; my own is to cram as much onto the screen as I possibly can while still being able to distinguish what is written from the fly poop which partially conceals some characters on the screen. That means I always have to guffaw a bit when I see people lamenting the bad state of high-dpi support in Linux, since the first thing I end up doing is turning all that stuff off so I can get 16 terminals on a 15” display. To each his own, I guess…

                    1. 11

                      In short, it’s important to understand the problem before trying to solve it.

                      1. 16

                        Well, that’s half of it. The other half is knowing that there’s a single x86 instruction that can replace a chunk of your code and give you part of the answer in the blink of a CPU’s eye :)
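
(The comment doesn’t say which instruction it means; purely as an illustration of the idea, here is the classic population-count case in C++, where a whole loop collapses into what is typically a single instruction.)

```cpp
#include <bit>      // std::popcount (C++20)
#include <cstdint>

// Hand-rolled bit count: one loop iteration per set bit.
int count_bits_loop(std::uint64_t x) {
    int n = 0;
    while (x) {
        x &= x - 1;  // clear the lowest set bit
        ++n;
    }
    return n;
}

// The same job via std::popcount, which compilers typically lower to a
// single POPCNT instruction on x86-64 targets that support it.
int count_bits_hw(std::uint64_t x) {
    return std::popcount(x);
}
```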

                        1. 3

I would argue that the instruction probably takes a relatively long time from the perspective of the CPU, but yeah, it’s also important to understand your tools!

                        2. 3

And understanding whether there is a problem in the first place. If you haven’t profiled, then optimisations are probably a waste of time.

                        1. 3

                          Embrace, extend, extinguish?

                          1. 1

                            Scalar is a .NET Core application

                          1. 3

This would work well for me. But I have friends for whom the ‘standard’ teaching style works well and for whom this style would work less well. That’s a common problem these kinds of articles have: they don’t address the fact that the alternative teaching style being promoted is also suboptimal for some students. You can’t expect to convince anybody to change their lesson plan if they feel they’re just going to disadvantage a different subgroup. Do we have any empirical data on this that would e.g. at least show this would improve the average?

                            1. 2

                              See this comment for links to empirical research: https://lobste.rs/s/dkxt6e/stop_teaching_code_solicit_predictions#c_eqqyen

                              1. 1

                                Right. There is no one approach that will work for everyone. Instead of prescribing an exact way to teach, I’ve found it simpler to remember that a large component of teaching is addressing misconceptions.

                              1. 12

Please don’t implement core system commands in Python. There are a few reasons:

• Python has no spec (but at least there are multiple implementations).
                                • Python is a large dependency.
                                • Python is slow.
                                1. 9

                                  Python has no spec

                                  https://docs.python.org/3/reference/index.html

That is the specification of the Python language. It provides a machine-readable grammar for parsing Python, along with prose commentary specifying the behavior of the language. Coupled with the reference for the standard library (which specifies the built-in types and modules and their behavior), this is sufficient to build your own compatible implementation of Python.

                                  Do you feel something is missing?

                                  1. 1

Oh nice, I guess I’m wrong on that point.

                                  2. 8

More important than being slow in general, Python is also extremely slow to start up. The shell isn’t exactly fast either, with almost every line being a fork, but it starts running the first lines instantly, so implementing commands in it works pretty well, assuming the commands don’t do a lot of computation in the shell code itself.
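
(A crude way to check the startup claim for yourself, assuming python3 and /bin/true are available on the machine; this only times whole process launches, nothing more.)

```cpp
#include <chrono>
#include <cstdio>
#include <cstdlib>

// Average wall-clock time to launch a command `runs` times via the shell.
static double avg_launch_ms(const char* cmd, int runs) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i)
        std::system(cmd);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / runs;
}

int main() {
    std::printf("/bin/true:       %6.1f ms per launch\n", avg_launch_ms("/bin/true", 20));
    std::printf("python3 -c pass: %6.1f ms per launch\n", avg_launch_ms("python3 -c pass", 20));
}
```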

                                    1. 3

                                      Python has no spec

                                      Neither does Rust 😁 (I know it’s in the works)

                                    1. 18

I continue being amazed both by how fragile the security of our systems is and by the ingenuity of the security researchers. It seems it’s impossible for anyone to completely understand all the implications of every design decision. Even ECC correction is not enough in this case: it exposes yet another side-channel in the latency of reads, giving the attacker the information it needs to know whether there has been a flip or not.

What could be done in order to mitigate side-channels systematically? Is it to go back to simpler, even if slower, systems? I don’t think even that would help, right? Is security really a completely unattainable goal for computing systems? I know that the general idea is that perfect security doesn’t exist and the level of security depends on tradeoffs, but hardware side-channels are very scary and I don’t think it is that much about trade-offs anyway (although I am far from knowledgeable in this).

                                      I used to have this trust in hardware, don’t know really why, but more and more I’m scared of the amount of ways to get secret information there are (even if impractical).

I think we humans got into levels of complexity we were completely unprepared for, and we will pay for it dearly very soon.

                                      1. 11

                                        I continue being amazed both by how fragile the security of our systems is and the ingenuity of the security researchers. It seems it’s impossible for anyone to completely understand all the implications of every design decision.

                                        Sort of. Applying covert-channel analysis to Intel CPU’s in the mid-1990’s showed pervasive vulnerability. If you do it at system level, you’d see even more of these problems. I’d seen folks on HN griping about QA being a low priority when they worked at RAM companies. The problems were mostly ignored due to market and management’s economic priorities: make things faster, smaller, and with less power at max profit. That leads to less QA and more integration instead of separation. Both apathetic users and companies supplying their demand got here willingly.

                                        The attacks have been really clever. There were always clever defenses that prevented many of them, too. Companies just don’t use them. There’s a whole niche of them dedicated to making RAM untrusted. They define SoC itself as security boundary, try to maintain confidentiality/integrity of pages, and typically take a performance hit from the crypto used to do that. Another strategy was using different DIMM’s for different applications with separation kernels flushing the registers and caches on a switch. The RAM controller would get targeted next if that got popular. Others suggested building high-quality RAM that would cost more due to a mix of better quality and patent royalties RAM cartel would sue for. It has to be high volume, though, if nobody wants to lose massive money up-front. I was looking at sacrificing RAM size to use SRAM since some hardware people talked like it had less risks. I’d defer to experts on that stuff, though.

“What could be done in order to mitigate side-channels systematically?”

Those of us worried about it stuck with physical separation. I used to recommend small-form PC’s or high-end embedded (eg PCI cards) tied together with a KVM switch. Keep untrusted stuff away from trusted stuff. Probably safest with a guard for what sharing needs to happen. Most people won’t know about those or be able to afford them. However, it does reduce the problem to two things we have to secure at users’ end: a KVM switch and a guard. Many guards have existed, a few of them high-security. I think Tenix was making a security-enhanced KVM. It’s a doable project for open source, a small company, and/or academia. It will require at least two specialists: one in high-security with low-level knowledge; one doing EMSEC, esp analog and RF.

                                        1. 11

                                          Is security really a completely unattainable goal for computing systems?

                                          Well, yes. Not because they are computer systems, but because they are physical systems.

                                          Let’s take fort-building techniques and materials as an analogy. Suppose you want to protect a crown. There was a pre-fort era: anybody could walk up and take the crown, if they knew where it was. Think dialup access to a prod system; no password. Early forts were a single, short, unconnected wall (designed to halt the progress of foes coming at you from a single point) and they were trivial to defeat: think of a front end with a password and a backend database with no password, also connected to the internet. Let’s fast forward…

Modern forts have moats and observation towers and doors that are armored, and that armor is engineered to be stronger than the walls–which provides a sort of guarantee that they ain’t gonna breach that door–it’s cheaper for them to go through the wall. Modern forts have whole departments dedicated to simply determining ahead of time how powerful the foe’s strongest weapon is and making sure the armor is at least strong enough to stop that weapon.

                                          You see where I’m going. A fort is never “done”. You must continue to “fortify”, forever, because your foe is always developing more powerful weapons. Not to mention, they innovate: burrowing under your walls, impersonating your staff, etc.

That said, there are some forts that have never been breached, right? Some crowns that have never been stolen? This is achieved by keeping up with the Joneses, forever. It’s difficult and it always will be, but it can be done.

What about physics? Given infinite time, any ciphertext can be brute-forced, BUT according to physics, the foe can not have infinite time. Or, given infinite energy, any armor can be pierced, BUT, according to physics, the foe can not have infinite energy. Well, this isn’t my area, but… does physics say that the foe can not be better at physics? Better keep up…

                                          The horror we’re facing now with all these side channel attacks is analogous to the horror that the king in that one-wall fort must have felt. “Oh crap, we’re playing on a massive plane, rather than a single line between them and me. I’m basically fort-less right now.”

                                          (EDIT: moved my last paragraph up one and removed the parens that were wrapping it.)

                                          1. 3

What could be done in order to mitigate side-channels systematically?

                                            Systematic physical separation of everything.

                                            Provision a new Raspberry Pi for each browser tab :D

                                            (more practically, never put mutually untrusted processes on the same core, on the same DRAM chip, etc. maybe?)

                                            1. 4

There’s not that much impractical about it; I do it on a daily basis - though a Pine64 clusterboard turned out a bit cheaper (~300usd / for 7 tabs) than the Pis. Ramdisk-boot chromium (or qemu, or android, or …) as a kiosk in a “repeat-try connect to desktop; reboot” kind of loop. Have the DE allow one connection every time you want to spawn your “tab”. A bit more adventurous is collecting and inspecting the crashes for signs of n-days…

                                              1. 3

                                                Provision a new Raspberry Pi for each browser tab :D

                                                Ah yes, the good old “Pi in the Sky” Raspberry Pi Cloud

                                                1. 3

                                                  Power usage side channels will still leak data from one Raspberry Pi to another. The only larger point I could tie that to is that perfect defense is impossible, but sebboh already said that quite eloquently, so I’ll leave it at that.

                                                  1. 6

                                                    Most of the more esoteric side channels are not readily available to other systems however. Even physically colocated systems aren’t hooked into the same power monitor to watch each other.

There will be a never-ending series of CPU/RAM performance side channels because the means of measurement is embedded in the attack device.

                                                    1. 3

                                                      Separate battery systems (power), everything stored at least 30cm apart (magnets) in a lead-lined (radiation) soundproof (coil whine) box. Then you’ll want to worry about protecting the lines to the keyboard and monitor…

                                                      1. 1

Is it possible to protect monitor cables / monitors from remote scanning? From what I’ve gathered there is hardware that can get a really clear picture of what’s on screen from quite a distance. A Faraday cage around the whole unit, and/or where you are sitting, or what?

                                                        1. 2

                                                          From my fairly basic knowledge of the physics, yes. Any shifting current in a wire will make that wire act a little like an antenna and emit radio waves, which is how these attacks work. It’s usually undesirable to have the signal you’re trying to send wander off into the ether, so cables are designed to minimize this, but it will always happen a little. Common coax cables already incorporate braided wire mesh or foil around the signal-carrying bits, for example.

                                                          But, it can never eliminate it completely. So, it’ll always be another arms race between better shielding and more sensitive detectors.

                                                          1. 1

Ah, so they work against the cable and not the display itself, right? Does this mean that, say, a tablet or a laptop is less susceptible to this kind of attack than a desktop computer?

Also, to really be foolproof, would it be useful to build Faraday cages into the walls? I’ve heard that if the metal rods stabilizing the concrete in buildings get in contact with water, that grounds them, creating a Faraday cage, and this explains why cell phones can get really bad reception in big old concrete houses. Wouldn’t it be a sensible measure for large companies to do exactly this, but on purpose? For cell reception they could have repeaters inside where needed. Wifi is supposed to stay indoors anyway, and yeah, Chinese spies with TEMPEST equipment shouldn’t get their hands on any radiation either.

                                                            1. 2

They’re called emanation attacks. The defense standards are called TEMPEST. Although they claim to protect us, civilians aren’t allowed to buy TEMPEST-certified hardware, since they’d have a harder time spying on us. You can find out more about that stuff here (pdf), this history, this supplier for examples, and Elovici et al’s Bridging the Airgap here for recent attacks.

The cat-and-mouse game is only beginning now that teams like Elovici’s are in the news with tools to develop attacks cheaper and more capable than ever. It’s why Clive Robinson on Schneier’s blog invented the concept of “energy gapping.” All types of matter/energy that two devices share are potentially a side channel. So, you have to mitigate every one just in case. Can’t just buy a product for that. ;)

                                                              1. 2

Yeah, I heard about TEMPEST; there was this fun program that let you broadcast FM or AM via your CRT that I played with forever ago, Tempest for Eliza or something.

                                                                messed up that they make laws against things like that.

My thinking is to protect the whole house at once, or why not a cubicle, depending on how much you are willing to spend on metal, of course.

                                                                1. 1

                                                                  This?

                                                                  Far as whole house, they do rooms and buildings in government operations. A lot of the rooms don’t have toilets because the pipes or water might conduct the waves. Air conditioning is another risk. Gotta keep cellphones away from stuff because their signal can bounce off the inside of a passively-secured device, broadcasting its secrets. All sorts of issues. Safes/containers and SCIF-style rooms are my favorite solutions since scope of problem is reduced.

                                                                  1. 1

                                                                    Yeah that’s the one.

                                                      2. 2

                                                        I always recommended EMSEC safes with power filters and inter-computer connections being EMSEC-filtered optical. So, yeah, it’s a possibility. That said, some of these systems might not have the ability for firmware, kernel code, or user code to measure those things. If none are this way, new hardware could be designed that way with little to no modifications of some existing hardware. Then, a compromise might just be limited to whats in the system and whatever the code can glean from interactions with hardware API’s. On the latter, we use ancient mitigations of denying accurate timers, constant-time operations, and masking with noise.

                                                        I think there’s potential for making some of those attacks useless with inexpensive modifications to existing systems. Meanwhile, I’m concerned about them but can’t tell you the odds of exploitation. We do need open designs for EMSEC safes or just containers (not safes), though.

                                                    2. 3

                                                      I used to have this trust in hardware, don’t know really why, but more and more I’m scared of the amount of ways to get secret information there are (even if impractical).

                                                      As long as there’s physical access to a machine, that access will be an attack vector. As long as there’s access to information, that information is susceptible to being intercepted. It comes down to acknowledging and securing against practical attack vectors. Someone can always cut my brakes or smash my windows and take my belongings from my car, but that doesn’t mean I operate in fear every time I park (of course this is a toy analogy: it’s much easier and far less risky to steal someone’s digital information, EDIT: and on second thought, you would immediately know when your belongings have been tampered with).

                                                      From the paper:

                                                      We now exploit the deterministic behavior of the buddy allocator to coerce the kernel into providing us with physically consecutive memory

                                                      Does the Linux kernel currently have any mitigations like randomization within its allocators? I believe this is orthogonal to ASLR.

                                                      1. 2

Hardware is cheap; use that as your security boundary between trust domains. On-device process separation and virtualization still make a lot of sense for other reasons (compatibility, performance, resilience), but they are about as alive as the parrot in a Monty Python sketch when it comes to security. Rowhammer should have been the absolute last straw in that respect - there were plenty of indicators well before then. What sucks is that the user interfaces and interaction between hardware-separated tasks (part of the more general ‘opsec’ umbrella) are cumbersome at the very best. Maybe that is easier to fix than multiple decades of opaque hardware…

                                                        1. 4

Consumer-grade hardware may be cheap; Power and hardware with ECC RAM support, not so much. With dedicated hardware you are burning a lot more power per useful computation performed.

                                                          For this particular attack, AMD’s Secure Encrypted Virtualization (SEV) is an actual solution and is mentioned as such in the paper. Intel’s Multi-Key Total Memory Encryption (MKTME) should be too when it comes out. Unfortunately software support is not really what I would call complete yet.

                                                      1. 4

                                                        Thanks. I hate it. :P

                                                        EDIT: tongue emoticon for good measure

                                                        1. 3

                                                          It is a very silly idea that was never meant to be useful.

                                                          Even so, I’m considering training it on ten years of text I’ve typed and then actually learning all 26 layouts! Oh no!

                                                          1. 3

                                                            All 26 layouts? Shouldn’t it be all 26! layouts instead? >_>

                                                        1. 1

I enjoyed the article. But I still don’t like DSLs. Perhaps it’s because I’ve always had to hack into them and embed myself in someone else’s “world”. While that in itself can be an interesting experience, it also causes a lot of cognitive overhead, to exist in and swap between multiple “worlds”.

                                                          1. 1

                                                            So this is essentially a lower-level, finer-grained way one would otherwise mmap two processes to the same shared file in memory?
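
(For comparison, a minimal sketch of the mmap-a-shared-file approach the question refers to: POSIX shm_open/mmap with error handling omitted, the object name made up, and -lrt needed on older glibc.)

```cpp
#include <fcntl.h>     // shm_open, O_* flags
#include <sys/mman.h>  // mmap, munmap
#include <unistd.h>    // ftruncate, close
#include <cstring>

int main() {
    // Both processes open the same named object and map it MAP_SHARED;
    // writes by one become visible to the other.
    const char* name = "/demo_shared_region";  // hypothetical name
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    char* p = static_cast<char*>(mmap(nullptr, 4096,
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0));
    std::strcpy(p, "hello from process A");
    munmap(p, 4096);
    close(fd);
}
```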

                                                            1. 4

                                                              It warms the heart to know that some people push back against adding syscalls just to be convenient for one set of programs. Progress needs to have reasons and be reasoned about.

                                                              Are minimal syscall OSes akin to RISC?

                                                              1. 5

                                                                Are minimal syscall OSes akin to RISC?

                                                                Microkernels have minimal functionality and thus also very few system calls. In fact, some microkernels only have a single system call for inter-process communication.

I’m not sure whether it’s useful to reduce the number of system calls in a big monolithic kernel. I think it might lead to a complex system call interface, with system calls that perform multiple (possibly unrelated) functions. This is already a reality, for example with the ioctl system call in Linux, which is used for lots of very different tasks.
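
(A small example of that overloading: two unrelated jobs going through the same ioctl entry point, querying the terminal size and a network interface’s MTU. Linux-specific, error handling kept minimal.)

```cpp
#include <sys/ioctl.h>   // ioctl, TIOCGWINSZ, SIOCGIFMTU
#include <sys/socket.h>  // socket, AF_INET
#include <net/if.h>      // struct ifreq, IFNAMSIZ
#include <unistd.h>      // close, STDOUT_FILENO
#include <cstdio>
#include <cstring>

int main() {
    // Job 1: ask the terminal driver for the window size.
    winsize ws{};
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0)
        std::printf("terminal: %d rows x %d cols\n", (int)ws.ws_row, (int)ws.ws_col);

    // Job 2: ask the network stack for an interface's MTU.
    ifreq ifr{};
    std::strncpy(ifr.ifr_name, "lo", IFNAMSIZ - 1);  // loopback interface
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s >= 0) {
        if (ioctl(s, SIOCGIFMTU, &ifr) == 0)
            std::printf("lo mtu: %d\n", ifr.ifr_mtu);
        close(s);
    }
}
```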

                                                                1. 3

                                                                  RISC no longer has anything to do with a reduced instruction count, but instead reduced instruction complexity.

                                                                  1. 1

                                                                    Somewhat relevant to the orthogonality of instruction count/complexity: https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf

                                                                1. 5

                                                                  Can someone please explain to me why it’s not sufficient to have the OS schedule all logical cores on the same physical core as a unit to a process?

                                                                  You have 2 logical cores per physical core? Great, each process gets logical cores in units of 2. If it’s multi-threaded, it gets to take advantage of them, without having to fear about a security leak to another process because of SMT-specific vulnerabilities.

The Unix security boundary has long been “the process”. If you need a process to defend against itself, you’re doing it wrong. So as long as the OS doesn’t let logical cores on the same physical core belong to different processes, there shouldn’t be an issue. If you’re particularly paranoid, then when a process triggers any of the existing setuid-boundary checks you could also disable the extra logical cores for that process – heck, an OS reward for minimizing attack surface in your application’s process architecture, that’s a win.

                                                                  But this seems too obvious to have been missed, yet isn’t happening, so I must be missing something significant. What?

                                                                  In particular, I don’t see why gamers or scientists doing modeling or any other such group needs to give up on a lot of parallelism to protect against things being allowed to leak into their security domain.

                                                                  I’m not a fan of SMT but in this case, a lot of the fear seems overblown, if only our OSes would update to reflect the reality of the lack of independent schedulability of SMT cores.
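
(A rough user-space sketch of that policy: pin a process to the two logical CPUs that are SMT siblings of one physical core. The pair {0, 4} is only an assumption; the real mapping is in /sys/devices/system/cpu/cpu0/topology/thread_siblings_list. Build with g++, which defines _GNU_SOURCE.)

```cpp
#include <sched.h>   // cpu_set_t, CPU_SET, sched_setaffinity (glibc)
#include <cstdio>

int main() {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);  // assumed SMT sibling pair; verify against
    CPU_SET(4, &set);  // .../cpu0/topology/thread_siblings_list
    // pid 0 = the calling process; all its threads now share one physical core.
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        std::perror("sched_setaffinity");
        return 1;
    }
    std::puts("pinned to one physical core's SMT siblings");
    // ... run the actual workload here ...
}
```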

                                                                  1. 3

This sounds very possible, and probably already implemented in some form via cgroups. However, that kind of design analysis and implementation would take some time and leave machines vulnerable in the meantime. As of right now, the fastest way to fix this issue, which is now detailed via proof-of-concepts that malicious actors can study (if they haven’t already used the exploit), is to simply disable hyperthreading. Someone please correct me if I’m incorrect here.

                                                                  1. 17

                                                                    According to the article: Very.

                                                                    But in the end, Intel will be fine. They reaped the reward of their dangerous and faulty design for a decade, while their competitor (which didn’t play as fast and loose) suffered substantially.

                                                                    One can only wonder how the microprocessor space would look today, if Intel and AMD had played by the same rule book.

                                                                    1. 17

One could also make the argument that every processor vendor has been playing fast and loose for decades by allowing memory accesses to unexpected locations, resulting in countless exploited systems. Other suggested hardware designs mitigate that in hardware [1]. However, neither AMD nor Intel has taken it upon themselves to integrate such work, due to the resulting performance overhead. My point is that it’s all relative, and we should not be so quick to point fingers and decry industry-wide accepted practices like data forwarding. Instead it’s more productive to reflect on the current art in security and have a nuanced discussion on how we can do better.

For example, would the community prefer locking down the entire processor, or is it better to have secure co-processors and leave it up to the programmer to decide what data is dangerous to leak and what data we don’t care about? This could also be seen as an opportunity for innovations in the hardware design space, where cloud, workstation, mobile, and IoT processors all see an increase in heterogeneity to support their respective threat vectors.

                                                                      1. 12

                                                                        One can only wonder how the microprocessor space would look today, if Intel and AMD had played by the same rule book.

                                                                        They sort of did, though. They both offered complex ISA’s that sacrificed security and reliability for performance optimizations. The companies that offered the former either went bankrupt or had to ditch hardware-level security. The Itanium, which had security enhancements, was a recent example of a massive loss for trying to do something different. The market has been very clear that they don’t give a shit about security: just backward compatibility and performance that goes up every 18 months or so. Any company that did anything else suffered greatly.

Now, people are worried about CPU security. Yet I’m still not sure they’ll sacrifice backward compatibility and/or lots of performance upfront if a new CPU advertised that. Instead, Intel and AMD are selling people on both of those being good, while apologizing about the effects of patches later on, after purchase. Works better with market psychology. Intel is apparently doing worse, but AMD’s processors aren’t secure either.

                                                                        1. 5

Memory segmentation also comes to mind: modern desktop OS’s shirked it because of its complexity, to the point where it’s now legacy in x86. They sure did use some of its features to make code faster, though (segment registers to aid with thread-local data).

                                                                          1. 4

That’s a great example. High-assurance security, like the GEMSOS kernel, made good use of segments. NaCl used them if I remember right. Most recent was the version of Code Pointer Integrity that didn’t get broken. Unlike the non-segmented version. ;)

I wonder if any 2018-2019 work is using segments. Their being on the way out might be why folks are mostly doing SGX instead. I figured Intel would screw that up with lots of flaws, though. Segments are simpler.

                                                                      1. 15

In a way this is why I use Go. I like the fact that not every feature that could be implemented is implemented. I think there are better languages if you want that. Don’t use a language just because it is by Google, either.

Also, I think that it is actually more the core team than Google. If Go were really the company’s language, I think it would be much different from what we have now. It would probably look more like Java or Dart.

One needs to see the context. Go is by people with a philosophy in the realm of “less is more” and “keep it simple”, so community-wise it’s closer to Plan9, suckless, cat-v, OpenBSD, etc. That is, people taking pride in not creating something for everyone.

                                                                        However unlike the above the language was hyped a lot, especially because it is by Google and especially because it was picked up fairly early, even by people that don’t seem to align that much with the philosophy behind Go.

I think generics are just the most prominent example of “why can’t I have this?”. Compare it with the communities mentioned above: various suckless software, Plan9, OpenBSD. If somehow all the Linux people were thrown onto OpenBSD, a lot of them would probably scream at Theo about how they want this and that, and there would probably be some major thing they “cannot have”.

While I don’t disagree with “Go is owned by Google”, I think on the design side (and generics are a part of that) it’s owned by a core team with mostly aligned ideas. While I also think that Google certainly has a bigger say, even on the design side, than the rest of the world, I think the same constellation of authors, independently of Google, would have led to a similar language, with probably way fewer users and available libraries, and I also don’t think Docker and other projects would have picked it up, at least not that early.

Of course there are other things, such as easy concurrency, that could have played a role in adoption, but Go would probably have had a lot of downsides. It probably would have had far fewer performance improvements and slower garbage collection, because I don’t think there would be many people working so much in that area.

So to sum it up: while Google probably has a lot of say, I don’t think that is the reason for not having generics. Maybe it is even that Go doesn’t have generics (yet) despite Google. After all, they are a company where a large part of the developers have generics in their day-to-day programming language.

                                                                        EDIT: Given their needs I could imagine that Google for example was the (initial) cause for type aliases. I could be wrong of course.

                                                                        1. 8

                                                                          it was picked up fairly early, even by people that don’t seem to align that much with the philosophy behind Go.

                                                                          Personally, I think this had a lot to do with historical context. There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time. I think there were a lot of people suffering from “interpreter fatigue” (I’ve read several times that Python developers flocked to Go early on, for example). So I think that, for quite a few people, Go is just the least undesirable option, which helps explain why everyone has something they want it to do differently.

                                                                          Speaking for myself, I dislike several of the design decisions that went into Go, but I use it regularly because for the things it’s good at, it’s really, really good.

                                                                          1. 5

                                                                            There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time.

                                                                            Have you looked at “another language”, and if so, what are your thoughts?

                                                                            1. 4

                                                                              Not a whole lot. My superficial impression has been that it is pretty complicated and would require a pretty substantial effort to reach proficiency. That isn’t necessarily a bad thing, but it kept me from learning it in my spare time. I could be totally wrong, of course.

                                                                            2. 4

                                                                              There weren’t (and still aren’t, really) a lot of good options if you want a static binary (for ease of deployment / distribution) and garbage collection (for ease of development) at the same time

                                                                              D has both.

                                                                              1. 2

                                                                                I completely agree with your statement regarding the benefits and that this is certainly a reason to switch to Go.

That comment wasn’t meant to say that there is no reason to pick up Go, but more that, despite the benefits you mentioned, if there weren’t a big company like Google backing it, it might have gone unnoticed, or at least other companies would have waited longer to adopt it, meaning I find it unlikely it would be where it is today.

What I mean is that a certain hype, and a big company behind it, is a factor in this being “a good option” for many more people, especially when arguing for a relatively young language “not even having classes and generics” and a fairly primitive/simple garbage collector in the beginning.

                                                                                Said communities tend to value these benefits much higher than the average and align very well in terms of what people emphasized. But of course it’s not like one could be sure what would have happened and I am also drifting off a bit.

                                                                            1. 2

                                                                              Should this have the “satire” tag?

                                                                              1. 2

                                                                                Isn’t satire supposed to be fictional?

                                                                                1. 3

                                                                                  Afaict these are “what if” anecdotes, where the author considers the result of submitting those seminal works in the modern academic climate.

                                                                                  1. 1

                                                                                    Haha, I’m gullible it seems. So given that I +1 that suggestion

                                                                                2. 1

                                                                                  I’ve suggested that tag now.

                                                                                1. 4

                                                                                  I’m so excited for this. WSL is great, but I’ve gone through four or five different third-party console applications and none of them have been quite as good as my favorite terminal emulators from Linux-land; I just want a bare-bones, good-looking window, and most Windows consoles only give me one or the other (who wants two toolbars full of icons on a console?!). Here’s hoping Microsoft delivers!

                                                                                  1. 6

                                                                                    Ditto. cmder comes closest, but a good shell can’t hide the underlying suck that is the Windows CONSOLE in all its MSDOS compatible glory.

                                                                                    1. 2

                                                                                      I’ve enjoyed using mintty with WSL. It’s been able to handle most of my cases well (fancy colors/italics/relatively low latency).

                                                                                      1. 1

                                                                                        Yes but can it handle cutting and pasting huge multi-page blocks of text without falling on its face?

                                                                                        1. 3

I regularly cut multi-page log outputs (stdout) and paste large chunks of text into vim. I’ve not experienced any problems.

                                                                                          1. 2

That sounds like an XY problem. If you need something like that, you should probably be using a file as input, or redirecting the clipboard to standard input, or writing a script. I’m not excusing slow terminals, but there is just not a good use case that I can think of where pasting a big chunk into a terminal is the best way to do it.

                                                                                            1. 1

I agree totally; this is a particularly suboptimal workflow which I have no choice but to use. We’re working to get away from it, but for now we’re stuck.

                                                                                              1. 2

                                                                                                Why not a windows equivalent of pbcopy on MacOS? You can pipe anything to it and it goes straight into your C&P buffer.

                                                                                      2. 3

Alacritty worked great for me, and it is the same terminal emulator as everywhere else.

                                                                                        1. 5

                                                                                          I had no idea alacritty worked on Windows! That’s what I use on Linux and I love it. Thank you thank you myfreeweb :)

                                                                                      1. 2

                                                                                        This post is obviously written from the perspective of someone who cares about safety and security. Safety is a very important ‘top’ to the system but there are others which can be more important depending on what the user values. The software can be as safe as you want, but if it doesn’t solve the problem I need the software to solve, then it’s useless to me. If safety concerns are preventing me from writing software that is useful to people, then it’s not valuable. In other words, sometimes ‘dangerous code’ isn’t what we need saving from.

                                                                                        Personally, I feel what we need saving from is people building software who have zero consideration for the user. So the better I can directly express mental models in software, the better the software is IMO. Modern C++ is actually really good at allowing me to say what I mean.

                                                                                        1. 3

This is based on the assumptions that safety is only useful as an end in itself, and that safety decreases a language’s usefulness. The counterpoint is that safety features eliminate entire classes of bugs, which reduces the amount of time spent on debugging and helps you ship stable and reliable programs to users.

                                                                                          Rust also adds fearless concurrency. Thread-safety features decrease the amount of effort required to parallelize the program correctly. For example, parallel iterators are simple to use, and can guarantee their usage won’t cause memory corruption anywhere (including dependencies, 3rd party libraries!).

                                                                                          So thanks to safety features you can solve users’ problems quickly and correctly.

                                                                                          1. 1

I feel that one day both C and C++ will be relegated to academic “teaching languages” that students will dread, used only to explain the history and motivations behind the more complex (implementation-wise) but better languages that overtake them.

                                                                                            1. 1

I am not sure why that would ever happen. As teaching languages both are pretty much useless, given the surface simplicity and hidden complexity of C, or the sheer size of C++. We are currently not teaching BCPL or ABC or any other predecessors of currently popular languages, because while interesting from a historical perspective they don’t teach you all that much.

                                                                                              1. 3

                                                                                                Late response, but I totally agree with you. I was thinking of it more in terms of the way assembly is typically taught to CS students. It’s good to know that your code will be run as these machine instructions eventually, but it’s not strictly necessary for developing useful applications.

                                                                                        1. 1

For most computing environments, performance is the problem of two decades ago. Last decade’s problem was already different, and this decade’s problems are at least 20 years advanced beyond performance being the main driver of technology decisions. We have new problems, and performance is not the place to waste time.

                                                                                          I prefer to think of “performance” as “computational and energy efficiency”. I wonder if my thinking is a fallacy or if I’m simply too far off from the target audience? I see he comments on this in the next paragraph and perhaps that’s where I’m at.

The biggest cost associated with static type systems would be the slowing of mental momentum. Seeing “something happen” does a lot to push the mind to continue trekking towards a solution, compared to getting dense compiler errors.

                                                                                          1. 1

Sometimes I think the whole Rust vs C++ debate is a non-starter because both camps are coming from different perspectives. At some level, I think a lot of programmers who side with C++ would rather risk memory/lifetime bugs than performance bugs. At some level, I think a lot of programmers who side with Rust would rather risk performance bugs than memory/lifetime bugs. Unless the conversation starts with “X is the priority. Y is secondary”, it ends up going in circles.

                                                                                            1. 11

                                                                                              For reference:

Copy elision is [until C++14: the only allowed form of optimization] [since C++14: one of the two allowed forms of optimization, alongside allocation elision and extension] that can change the observable side-effects. Because some compilers do not perform copy elision in every situation where it is allowed (e.g., in debug mode), programs that rely on the side-effects of copy/move constructors and destructors are not portable.

                                                                                              https://en.cppreference.com/w/cpp/language/copy_elision
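
(A small demonstration of the non-portability being described: whether “copy” ever prints below depends on whether the compiler performs the named-return-value optimization, which stays optional even in C++17. With GCC, compare the output with and without -fno-elide-constructors.)

```cpp
#include <cstdio>

struct Noisy {
    Noisy()             { std::puts("construct"); }
    Noisy(const Noisy&) { std::puts("copy"); }     // observable side effect
    ~Noisy()            { std::puts("destroy"); }
};

// Named return value: the copy out of `n` may be elided (NRVO) or not,
// so the number of "copy"/"destroy" lines printed is compiler-dependent;
// exactly the non-portable side effects the quote warns about.
Noisy make() {
    Noisy n;
    return n;
}

int main() {
    Noisy x = make();
    (void)x;
}
```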

                                                                                              1. 2

                                                                                                In other words, constructors and destructors are not necessarily good ideas, and should be utilized sparingly.

                                                                                                It’s a shame there are so many footguns in CPP.

                                                                                                1. 2

                                                                                                  Well, they’re OK as long as they’re pure, i.e. don’t have observable effects, which is good practice anyway.

                                                                                              1. 14

10/10 post, I love it when people do real deep dives into these topics. This post is about Ruby, but it affects any program that uses malloc.

                                                                                                As for why the author’s patch is faster, I have a couple of theories, but I’d love to see a CPU flamegraph of the benchmarks.

                                                                                                1. 3

                                                                                                  Some related recent work: Quantitative Overhead Analysis for Python

                                                                                                  1. 1

As a follow-up on this article, what I would be interested in is a matrix of comparisons between jemalloc, the malloc_trim patch, and the MALLOC_ARENA_MAX env tweak. Also, an explanation with numbers of why jemalloc handled things better.

                                                                                                    1. 1

                                                                                                      Well, jemalloc was designed to avoid fragmentation and facilitate cache hits. By default it uses a 32 KiB thread-local cache (tcache) for small allocations. And the default number of shared arenas is a lot smaller, 4 per physical core, rather than glibc’s 8 per hyperthread.
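
(For anyone building that comparison matrix, the two glibc-side knobs mentioned above look roughly like this; malloc_trim() and M_ARENA_MAX are glibc extensions declared in <malloc.h>, and MALLOC_ARENA_MAX can equivalently be set as an environment variable before the process starts.)

```cpp
#include <malloc.h>  // malloc_trim, mallopt, M_ARENA_MAX (glibc)

int main() {
    // Equivalent of running with MALLOC_ARENA_MAX=2: cap the number of
    // malloc arenas. Call this early, before the allocator spins up arenas.
    mallopt(M_ARENA_MAX, 2);

    // ... allocate and free lots of memory here ...

    // The "trim" approach: ask glibc to give free heap pages back to the
    // kernel; the argument is how much slack to keep at the top of the heap.
    malloc_trim(0);
}
```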