Threads for owen

  1. 2

    I’m a little worried about someone talking about aligning code when they chose to full-justify their blog.

    I am less worried about how editors interpret tabs for indentation, since I believe that people who hate how an editor does that will change the behavior or change editors.

    1. 6

      Meh, in a single-column layout like that the lines are wide enough that full justification is fine IMHO; Butterick basically says “it’s an individual aesthetic choice” on that page you linked to.

      I do worry about tabs/indents, because of reading or editing other people’s code. If they use spaces I’m stuck with their indentation preference, and some people think 2-space or 8-space indents are OK (they’re wrong.) If they use tabs I see the indentation I prefer, but any spacing within the line, like marginal comments, gets badly messed up. And lines that have extra indentation to match something on the previous line, like an open-paren, come out wrong.

      This elastic system seems like a nice idea for within-line spacing. Maybe it can be combined with a syntax-driven rule that generates the leading indentation automatically when the file is opened; then the editor can just ignore leading whitespace. (Except in indent-sensitive languages like Nim and Python, obvs.)

      1. 3

        The problem is that one often has to work with others, and they have different preferences.

        By separating the semantics of the code (indented blocks, columns of code/comments) from the presentation of the code (amount of whitespace added), you provide the option for each reader to see a presentation that is most pleasing to parse, without forcing everyone to align their editor settings.

        1. 2

          The most recent versions of clang-format finally support my preferred style: tabs for indentation, spaces for alignment. Each line starts with one tab per indent level; any whitespace beyond that (or anywhere other than the start of a line) is spaces. This means the comment in the motivating example in this article looks correct for any tab width, even when tab stops aren’t uniform, as long as they’re consistent (for example, in C++ it would be great if my editor could use a single 2-space tab stop inside a namespace but 4-space tab stops everywhere else, including the second tab in a line inside a namespace). This doesn’t need the editor to do anything special and works fine even with cat.
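
          For reference, a minimal .clang-format sketch of that style (assuming clang-format 11 or newer, which added the AlignWithSpaces value; the widths are just examples):

              # Tabs carry the leading indentation; everything else is spaces.
              UseTab: AlignWithSpaces
              IndentWidth: 4
              TabWidth: 4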

        2. 2

          I’m a little worried about someone talking about aligning code when they chose to full-justify their blog.

          Any links I can read about the justification concern?

            1. 4

              To piggyback off this – I consider Butterick’s Practical Typography a must read. I’ve learned a ton and improved my written communication thanks to his guidelines.

            2. 3

              HTML/CSS has few, if any, affordances for hyphenation, so setting text to be fully justified runs the risk of huge gaps in some lines if there’s a long word that can’t be hyphenated correctly.

              It’s not a dealbreaker for most but it can look a bit unprofessional if you’re unlucky.

              1. 5

                CSS has the hyphens property, which gives coarse control over hyphenation; set it to auto and the browser will hyphenate for you.

                1. 3

                  It’s also simple to make it gracefully degrade to left-aligned text in browsers that don’t support hyphens, via @supports (hyphens: auto). In the modern browser world, I see no valid reason not to enable justification for browsers that support it, at least not for websites with significant amounts of text.
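
                  For example, a minimal sketch (the article selector is just a placeholder for whatever your site uses): justification and hyphenation only kick in where the browser supports them, and everything else falls back to left-aligned text.

                      /* Left-aligned by default. */
                      article { text-align: left; }

                      /* Justify and hyphenate only where automatic hyphenation is supported. */
                      @supports (hyphens: auto) {
                        article {
                          text-align: justify;
                          hyphens: auto; /* needs a lang attribute on the page to pick a dictionary */
                        }
                      }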

          1. 35

            As I was reading this wonderful writeup, I had a nagging feeling that most certainly ‘someone on the internet’ would bring some half-assed moralizing to bear on the author. And sure enough, it’s the first comment on Lobsters.

            I think it’s a beautiful and inspiring project, making one think about the passage of time and how natural processes are constantly acting on human works.

            @mariusor, I recommend you go troll some bonsai artists, they too are nothing but assholes who carve hearts in trees.

            1. 8

              We do have an ethical obligation to consider how our presence distorts nature. Many folks bend trees for many purposes. I reuse fallen wood. But we should at least consider the effects we have on nature, if for no other reason than that we treat nature like we treat ourselves.

              I could analogize bonsai to foot-binding, for example. And I say that as somebody who considered practicing bonsai.

              1. 11

                Foot binding is a social act in which women are deliberately crippled in exchange for access to certain social arrangements in which they don’t need to be able to walk well. The whole practice collapsed once the social arrangement went away. It’s very different than just getting a cool gauge piercing or whatever.

                1. 6

                  Thank you Corbin for addressing the substance of my admittedly hot-headed comment. It did give me food for thought.

                  I am definitely in agreement with you on the need to consider the impact of our actions on the environment. I have a bunch of 80-year old apple trees in my yard which were definitely derailed, by human hands, from their natural growth trajectory. This was done in the interest of horticulture, and I still benefit from the actions of the now-deceased original gardener. All in all I think the outcome is positive, and perhaps will even benefit others in the future if my particular heritage variety of apple gets preserved and replicated in other gardens. In terms of environmental impact, I’d say it’s better for each backyard to have a “disfigured” but fruitful apple tree than to not have one, and rely on industrial agriculture for nutrition.

                  Regarding the analogy with foot-binding, which I think does hold to a large extent (i.e. it involves frustrating the built-in development pattern of another, without the other’s consent) – the key difference is of course the species of the object of the operation.

                  1. 7

                    Scale matters too, I think.

                    I’m a gardener who grows vegetables, and I grow almost everything from seed - it’s fun and cheap. That means many successive rounds of culling: I germinate seeds, discard the weakest and move the strongest to nursery pots, set out the strongest starts to acclimatize to the weather, plant the healthiest, and eventually thin the garden to only the strongest possible plants. I may start the planting season with two or three dozen seeds and end up with two plants in the ground. Then throughout the year, I harvest and save seeds for next year, often repeating the same selecting/culling process.

                    Am I distorting nature? Absolutely, hundreds of times a year - thousands, perhaps, if I consider how many plants I put in the ground. But is my distortion significant? I don’t think so; I don’t think that, even under Kant’s categorical imperative, every back-yard gardener in the universe selecting for their own best plants is a problem. It fed the world, after all!

                    1. 3

                      My friend who is a botanist told me about research he did into how naïve selection produces worse results. Assume you have a plot with many variants of wheat, and at the end of the season, you select the best in the bunch for next year. If you’re not careful, the ones you select are the biggest hoarders of nutrients. If you had a plot with all that genotype, it would do poorly, because they’re all just expertly hoarding nutrients away from each other. The ones you want are the ones that are best at growing themselves while still sharing nutrients with their fellow plants. It’s an interesting theory and he’s done some experimental work to show that it applies in the real world too.

                      1. 2

                        The ones you want are the ones that are best at growing themselves while still sharing nutrients with their fellow plants.

                        So maybe you’d also want to select some of the ones next to the biggest plant to grow in their own trials as well.

                2. 3

                  I think it’s a beautiful and inspiring project, making one think about the passage of time and how natural processes are constantly acting on human works.

                  I mean… on the one hand, yes, but then on the other hand… what, we ran out of ways to make one think about the passage of time and how natural processes are constantly acting on human works without carving into things, so it was kind of inevitable? What’s wrong with just planting a tree in a parking lot and snapping photos of that? It captures the same thing, minus the tree damage and leaving an extra human mark on a previously undisturbed place in the middle of the forest.

                  1. 14

                    As I alluded to in my comment above, we carve up and twist apple trees so that they actually give us apples. If you just let them go wild you won’t get any apples. Where do you get your apples from? Are you going to lecture a gardener who does things like grafting, culling, etc., to every tree she owns?

                    The same applies here: the artist applied his knowledge of tree biology and his knowledge of typography to get a font made by a tree. I think that’s pretty damn cool. I am very impressed! You can download a TTF! How cool is that?

                    Also, it’s not ‘in the middle of a forest’, but on his parents’ property, and the beech trees were planted by his parents. It’s his family’s garden and he’s using it to create art. I don’t get the condemnation, I think people are really misapplying their moral instincts here.

                    1. 4

                      Are you going to lecture a gardener who does things like grafting, culling, etc., to every tree she owns?

                      No, only the gardeners who do things like grafting, culling etc. just to write a meditative blog post about the meaning of time, without otherwise producing a single apple :-). I stand corrected on the forest matter, but I still think carving up trees just for the cool factor isn’t nice. I also like, and eat, beef, and I am morally conflicted about it. But I’m not at all morally conflicted about carving up a living cow just for the cool factor, as in, I also think it’s not nice. Whether I eat fruit (or beef) has no bearing on whether stabbing trees (or cows) for fun is okay.

                      As for where I get my apples & co.: yes, I’m aware that we carve up and twist apple trees to give us apples. That being said, if we want to be pedantic about it, back when I was a kid, I had apples, a bunch of different types of plums, sour cherries, pears and quince from my grandparents’ garden, so yeah, I know where they come from. They pretty much let the trees go wild. “You won’t get any apples” is very much a stretch. They will happily make apples – probably not enough to run a fruit selling business off of them, but certainly enough for a family of six to have apples – and, as I very painfully recall, you don’t even need to pick them if you’re lazy, they fall down on their own. The pear tree is still up, in fact, and at least in the last 35 years it’s never been touched in any way short of picking the pears on the lowest two or three branches. It still makes enough pears for me to make pear brandy out of them every summer.

                      1. 6

                        I concede your point about the various approaches as to what is necessary and unnecessary tree “care” :)

                        No, only the gardeners who do things like grafting, culling etc. just to write a meditative blog post about the meaning of time, without otherwise producing a single apple :-).

                        But my argument is that there was an apple produced, by all means. You can enjoy it here: https://bjoernkarmann.dk/occlusion_grotesque/OcclusionGrotesque.zip

                  2. 3

                    Eh. I hear what you’re saying, but you can’t ignore the fact that “carving letters into trees” has an extremely strong cultural connection to “idiot disrespectful teenagers”.

                    I can overlook that and appreciate the art. I do think it’s a neat result. But then I read this:

                    The project challenges how we humans are terraforming and controlling nature to their own desires, which has become problematic to an almost un-reversible state. Here the roles have been flipped, as nature is given agency to lead the process, and the designer is invited to let go of control and have nature take over.

                    Nature is given agency, here? Pull the other one.

                    1. 3

                      You see a beautiful and wonderful writeup, I see an asshole with an inflated sense of self. I think it’s fair that we each hold to our own opinions and be at peace with that. Disrespecting me because I voiced it is not something I like though.

                      1. 15

                        I apologize for venting my frustration at you in particular.

                        This is a public forum though, and just as you voiced your opinion in public, so did I. Our opinions differ, but repeatedly labeling others as “assholes” (you did it in your original post and in the one above) sets up a heated tone for the entire conversation. I took the flame bait, you might say.

                        Regarding ‘inflated sense of self’ – my experience with artists in general (I’ve lived with artists) is that it’s somewhat of a common psychological theme with them, and we’re better off judging the art, not the artist.

                    1. 4

                      HP has a tradition of making terrible software. I remember how they once implemented the printer “control panel” partially as HTML files in Program Files (yes, there is a space in the URL, which sometimes broke the application), partially as an ActiveX component, and partially as resources loaded from an internet domain (which does not exist anymore, of course).

                      On another project I met an API for Oracle templating (generating PDF files from templates + parameters) that was built as a web service, but this web service was just a wrapper around a command-line application (maybe they just emulated it in later versions?), and instead of meaningful service/method parameters, there was an array of CLI arguments on the input side of the web service (and on the output side, you had to poll to check whether the document had been generated yet, IIRC).

                      Not sure why, but big technological corporations produce, from time to time, software of really poor quality. You would fail a college class with a term paper like that. Or if you applied for a job at an average firm with that kind of example code, they would laugh at you. But in “big tech” such software sometimes finds its way through the corporate processes.

                      1. 4

                        Most often, it’s because incentives don’t exist to do a better job. And that’s not always bad.

                        When I worked manufacturing IT, coworkers smarter than I framed every feature in terms of “how many refrigerators will this help me sell?” Every bit of software that I built and deployed was a liability unless it helped us build and ship appliances faster, cheaper, or more reliably. So lots of things that weren’t core to the business got built up to the point that adding any more wouldn’t be worth the investment.

                        I would imagine the story here is very similar. HP put just enough effort into this to achieve some goal. Maybe it was all about winning a large procurement contract with unusual stipulations. Maybe it was part of a licensing struggle with Microsoft. But either way, HP probably got out of it just what they needed, and any more effort put into making FreeDOS run won’t help sell more laptops.

                      1. 23

                        If you liked this as much as I did, please take the time to see previous submissions by the same author - this is a really remarkable collection of visualizations.

                        1. 4

                          I recognised the style from the GPS article. Even the bits of that where I already understood the concepts were a fantastic read. I’m looking forward to spending the time to read through this one properly.

                        1. 6

                          Is every possible sequence of digits found somewhere in π?

                          1. 9

                            That property is called being disjunctive, and, while it’s widely believed to hold, it’s unknown whether π is in fact a disjunctive number.

                            1. 3

                              We don’t know for sure: π is believed to be a normal number. In a normal number, all single digits appear equally often in the limit, as do all pairs, all triplets, etc. If this holds, any particular sequence of n digits in base b appears with limiting frequency 1/b^n.

                              An example of a normal number formed by construction is Champernowne’s constant. In base 10, it is 0.12345678910111213(...) - a concatenation of all natural numbers. As you can see, any natural number you can name is guaranteed to appear inside this constant.
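
                              As a toy illustration of that claim (a quick sketch, not from the thread; it only searches a finite prefix, so pick k large enough for the number you’re after):

                                  import Data.List (isInfixOf)

                                  -- The digits of Champernowne's constant in base 10: "1234567891011121314..."
                                  champernowne :: String
                                  champernowne = concatMap show ([1 ..] :: [Integer])

                                  -- Does n's decimal representation occur within the first k digits?
                                  appearsWithin :: Int -> Integer -> Bool
                                  appearsWithin k n = show n `isInfixOf` take k champernowne

                                  main :: IO ()
                                  main = print (appearsWithin 100000 2718)  -- True: "2718" shows up within the first ~10k digits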

                              Normal numbers are interesting because it’s been proven that almost all real numbers are normal; however, outside of numbers explicitly constructed to be normal, very few reals have been proven to be normal.

                              e: u/sinic is actually more correct than I; all normals are disjunctive but not all disjunctives are normal.

                              1. 1

                                Thanks to this thread I understand today’s SMBC! https://www.smbc-comics.com/comic/normal

                              2. 1

                                That’s an open mathematical question. No one has proven it, but I think the consensus guess is yes.

                              1. 16

                                MacBook Pro M1 Max for work, regular M1 for personal. I kid you not I can go for a weekend with these laptops and use them extensively and I will have battery left come Monday. I have been missing my OpenBSD machine, but alas. The hardware is super solid and the battery life and performance have been amazing.

                                1. 4

                                  Mostly the same here, 16” with M1 Pro for work and 14” with M1 Pro for personal use. The hardware is incredible honestly: they don’t get hot, they don’t make noise. Just this Monday I noticed a faint sound of wind coming from my laptop and then remembered I’d had a Chromium compile using 100% of all cores for the past couple of hours. Any previous laptop I’ve owned (mostly Dells) or any of the Intel Macs in the office would’ve been painfully hot and loud as a jet for the whole duration – while compiling Chromium less than half as quickly.

                                  The Intel monopoly in the laptop space desperately needs to come to an end, and PC manufacturers need to drastically step up their game. I’m not a huge fan of macOS, but I can’t defend getting vastly inferior PC hardware (in terms of performance, battery life, build quality and the screen/speakers/webcam/etc) at the same price just to be able to run an OS I prefer.

                                  I hope Asahi gets really good, because dual booting macOS and Linux on these things would be amazing.

                                  1. 3

                                    vastly inferior PC hardware (in terms of performance, battery life, build quality and the screen/speakers/webcam/etc) at the same price

                                    Are you buying new hardware, or used?

                                    I paid AUD325 for my refurbished ThinkPad W520 w/ 16GiB RAM back in 2018, and transplanted the (fast) SSD from my old X220 into it.

                                    Yeah, the M1 that my employer issued me is superior in most respects to the W520. But it’s literally an order of magnitude more expensive new.

                                    1. 3

                                      I’m buying hardware new for the most part.

                                      I don’t tend to go for cheap hardware; I know PC manufacturers are way more competitive in the lower market segments than where the MacBook Pro operates. Apple also has insane pricing on things like storage space, so if you need a couple terabytes it very quickly becomes a much worse value proposition, and the lack of upgradability absolutely sucks. But if you’re looking for something around the MacBook Pro price range, and don’t need more than around 1TB of storage and 16GB of RAM, it’s really, really hard to find a better laptop than the MacBook Pro in my experience; at least when factoring in qualities like the screen and speakers and trackpad.

                                      1. 3

                                        Agreed. That M1 that I’ve started using is indisputably the next generation of laptop; nothing I’ve used that is Intel based comes close. It also has better sound than the Bluetooth speaker currently adorning my desk.

                                        But a while ago I switched to refurb and I haven’t looked back.

                                        Leaving aside my wife who runs new XPSs (also on Ubuntu) I bought the rest of our family fleet - 1 x W520 for me and 3 x X250s for the kids (so we can share docking stations, etc.) - for around AUD1,200 in total.

                                        1. 1

                                          Indisputably? According to who?

                                          1. 2

                                            Have you read the articles about the M1 vs Intel CPUs in laptops (here’s just one)? These M1 CPUs are incredibly powerful, about as powerful as the Intel CPUs if not more so, but they use half the energy in all comparisons. On top of Apple moving over to ARM, there’s also Windows 11 built for ARM, and Chromebooks too. I’m happy that ARM is rising in popularity, so maybe we won’t have to go with only Intel or AMD in the future.

                                            1. 2

                                              That’s from a year ago, and a disingenuous time frame as well, being pre-Tiger Lake. Here’s one from last month, from PCMag, that shows Apple winning at efficiency but losing out to AMD and Intel in several benchmarks. The differences aren’t great, but “indisputable” is the wrong word, as here it is, being disputed.

                                              1. 4

                                                I think my use of that word literally made your head explode.

                                                ;)

                                                1. 2

                                                  I only see power being disputed there, and not by a huge margin either. Efficiency is still going to be a huge part of “the next generation of laptops.” That’s why you don’t have phones running x86 processors. Also, with Microsoft developing Windows for ARM, Apple with its M1, and Google’s Chromebooks, I still firmly believe that ARM is indisputably the future for laptops, and hopefully desktops while we’re at it.

                                                  1. 1

                                                    If not RISC-V coming around the corner before then. It will be a while before the industry adjusts to a non-x86 architecture. Microsoft has had ARM support for ages and it was garbage. I used Ubuntu on an ARM Chromebook in 2012 and still, to this day, a lot of binaries aren’t available. The Steam Deck had to be x86 or there’d be no games.

                                              2. 1

                                                Okay, fair cop - that was a rhetorical flourish, but only slightly, and I didn’t mean to preclude the idea that other laptops are similarly good.

                                                To explain a bit further: the M1 represents a step change from previous laptops I’ve used in terms of battery life, convenience, and general usability in some ways. For example: no fan, no touchbar, usable keyboard, magsafe power adaptor, actual ports in addition to USB-C, etc.

                                                There may certainly be Intel laptops at a similar level, but in which case, they also would be the “next generation” of laptops compared with the XPSs and X and W series I’m used to running.

                                                Also, I’m still going to DIY my next laptop, because I’m quite sick of almost everything about “consumer” (how I hate that term) laptops and operating systems. But the M1 will make that a harder trade-off.

                                              3. 1

                                                520

                                                W540 :/

                                            2. 2

                                              I mean, I looked at the prices of new X-series ThinkPads and all except IIRC the X13 were more expensive than a base MacBook Air, and usually worse specced, not to mention things hard to put on a spec sheet like mouthfeel/build quality.

                                              You might not be buying used, but someone else has to buy new in the first place for used to actually happen.

                                              1. 2

                                                Yup, and just as with cars, I will continue to benefit from the second hand market while not really understanding why people buy new in the first place.

                                              2. 1

                                                I had strongly considered getting a used W520 years back. The ability to just buy replacement and extended capacity batteries, the DVD drive bay which can be repurposed, no numeric keypad, sufficient RAM… that was all great. I think the prices back in 2015 / 2016 were still a bit high for a used system, so I didn’t get one back then. The other thing giving me pause would have been the weight.

                                                1. 1

                                                  Actually I’d recommend the 521, as the 520 has an awful trackpad. I still have a 521 trackpad in my workshop to fit to the 520 at some point.

                                                  1. 1

                                                    Argh, I meant 540/541 here.

                                                2. 1

                                                  520

                                                  Argh! Following this up like some of my other posts … I meant a W540, not a W520.

                                              3. 4

                                                I kid you not I can go for a weekend with these laptops and use them extensively and I will have battery left come Monday.

                                                Yeah, this part is impressive. I can get 8 hours of continuous use out of the Air at max brightness, and with light usage at about half brightness it lasts a week, maybe two. It’s a fully fledged laptop you can treat like an iPad, battery-life-wise.

                                                1. 4

                                                  My boss, a long-time Apple hater, ended up getting an M1 MBP which he took on a two-week vacation. He told me that he realized a week in that he forgot to pack its charger, then realized “woah, it’s been a week… and I only now thought about charging it?!” Turns out it had just over 50% charge. He turned the screen brightness down and brought it home with over 10% to go.

                                                  He now begrudgingly respects Apple.

                                                2. 2

                                                  I really wanted an ARM laptop for the battery life after hearing about how well the M1 CPUs performed. But I couldn’t justify spending so much money on an Apple product; I just don’t like them that much personally. I bought a Galaxy Book Go instead for under $300. Pretty dang cheap for a laptop, and obviously the specs show for it. But I like it so far. I’m probably going to wait until Ubuntu 22.04 is officially released and then install it.

                                                  For work I use a regular Galaxy Book with Ubuntu on it. It works very well for what I do, no problem running my dev environment.

                                                  1. 1

                                                    I can compile things on it without fans making noise, at a speed competitive with a big Ryzen box. M1 is so good that I can forgive Apple the dark days of TouchBar and janky keyboards.

                                                    1. 1

                                                      I had just built a Ryzen 3600 hackintosh when the M1 Air came out. I had spent a lot on cooling to try and get it to run silently. It was still annoyingly audible.

                                                      I bought the M1 Air and it was the same speed for everything I tried it on - and was a laptop. With no fan.

                                                      I sold the Ryzen tower straight away.

                                                  1. 5

                                                    I heard there was a project to port Qubes to seL4 as hypervisor. This was years ago, and I haven’t heard about it again.

                                                    I am hopeful it will pick up, at which point the Qubes architecture will possibly be sound. Right now, unfortunately, they use Xen, a hypervisor that runs with full privileges and is far too large to be trusted.

                                                    edit:

                                                    Found the effort, makatea.

                                                    The requirements document suggests, near the end, that this effort is funded. I am very hopeful for this project.

                                                    I quote:

                                                    This effort is co-funded by a grant from NLnet Foundation. Neutrality’s time is co-funded by the Innosuisse – Swiss Innovation Agency and the European Union, under the Eurostars2 programme as Project E!115764.

                                                    1. 5

                                                      There was work to port Qubes to Genode and the NOVA hypervisor, but not seL4. Last I heard the seL4 virtualization is pretty buggy (yes, seL4 does crash).

                                                      1. 4

                                                        and is far too large to be trusted

                                                        Trusted to protect against who/what?

                                                        If I was looking for an OS that would keep my computer relatively safe against 0days in my web browser, Qubes would be an attractive option: my threat model would essentially be “let me keep my bank activity over there, my work activity over here, and any risky browsing contained far away”. Could someone build an exploit chain from Firefox through the OS into Xen? Sure, but I’m not betting on it: that’s a ton of effort that would be best spent exploiting cloud computing users instead of little ’ol me.

                                                        If I was looking for an OS that would keep me safe against an advanced threat actor with state-level funding? I probably would decide to do less interesting things with my life and stop using a computer to do those things.

                                                        Sweeping claims about what can/cannot be trusted are disappointing when they aren’t tied to a threat model. Qubes, however, disappoints me as well because they don’t publish any explicit threat model that I could easily find via searching.

                                                      1. 10

                                                        I’ve (mercifully) never needed to deal with CVEs, but my understanding is that maintainers often dislike them because the process isn’t run by vendors/developers/maintainers but by “anyone who plugs details in the MITRE form”. After looking at the process a bit, it looks like it would be easy for me to submit a CVE for any product I wanted, give a link to a self-referential security page (“Foo has security issue bar, see CVE-XXX”), and have the same thing happen.

                                                        Strange system.

                                                        1. 8

                                                          Part of the problem here is that the idea of CVEs is not aligned with what many people think CVEs are.

                                                          Ultimately the core idea of the CVE system is just to have a common identifier when multiple people talk about the same vulnerability. In that sense having bogus CVE entries doesn’t do much harm, as it’s “just another number” and we won’t run out of numbers.

                                                            But then some security people started treating CVEs as an achievement, aka “I did this research and I found 5 CVEs!” etc. - and suddenly you have the expectation of the CVE system being a gatekeeper of what vulnerability is “real” and what not. (And having seen a lot of these discussions, they are IMHO a waste of energy; you’ll have corner cases of the form: might be a vuln, might only help if you have another vuln to chain it with, and people will never agree on whether to call it a vuln.)

                                                          1. 2

                                                            But then some security people started treating CVEs as an achievement, aka “I did this research and I found 5 CVEs!” etc. - and suddenly you have the expectation of the CVE system being a gatekeeper of what vulnerability is “real” and what not.

                                                            That is a maddening behavior that I’ve also observed. It’s also a hard thing to fix, considering that the CVE database was conceived in the face of vendors who refused to admit that they shipped security issues. We needed a common way to reference them even if the vendor disagreed.

                                                            I’m not sure how I think we should fix it, yet.

                                                          2. 3

                                                              The fact that you can is a strong check and balance, making sure that a maintainer cannot stonewall an actual vulnerability and pretend it doesn’t exist. The process is not without flaws, but it’s the devil we know, and being an honor system it works surprisingly well. Speaking as a member of a security team in a well known project, the CVE part of the security process is among the least complicated aspects.

                                                            1. 2

                                                              It’s even weirder. I needed a CVE once for a library I maintain, and I couldn’t get one! Apparently ranges of numbers are allocated to certain organizations (big corporations and Linux distros), and you’re supposed to ask “your” organization to give you a number. I was independent, and nobody wanted to talk to me.

                                                            1. 6

                                                              Put it in the types:

                                                              The Haskell library dimensional goes a long way to solving this problem once and for all. https://hackage.haskell.org/package/dimensional

                                                              If this sort of thing were in widespread use in standard libraries, it would be wonderful.
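
                                                                A small, hedged sketch of what that looks like with dimensional (types and unit names recalled from the library’s Prelude, so treat the details as approximate): mixing up metres and seconds simply fails to compile.

                                                                    {-# LANGUAGE NoImplicitPrelude #-}
                                                                    import Numeric.Units.Dimensional.Prelude   -- the dimensional package's drop-in Prelude

                                                                    distance :: Length Double
                                                                    distance = 100 *~ metre

                                                                    duration :: Time Double
                                                                    duration = 9.58 *~ second

                                                                    speed :: Velocity Double
                                                                    speed = distance / duration                 -- fine: a length divided by a time is a velocity

                                                                    -- nonsense = distance + duration          -- rejected by the type checker: can't add m to s

                                                                    main :: IO ()
                                                                    main = print (speed /~ (metre / second))    -- extract the plain Double, in m/s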

                                                              1. 4

                                                                I like using type systems to keep programmers on the right path, but we have to be careful not to over-do it either. The job of a programmer ought to be to efficiently transform data; with very invasive types, the job often changes to pleasing the type checker instead. A programmer should create a solution by thinking about what the data looks like, what its characteristics are, and how a computer can efficiently transform it. If a library provides extremely rigid types, the programmers start to think in terms of “what do the types allow me to do?”; if the library tries to address this rigidity by using more advanced features of the type system to make usage more flexible, the programmer’s job is now to deal with increased accidental complexity.

                                                                Looking at the doc for dimensional, I find that the cost-benefit is not one that I would make. The complexity is too high and that’s going to be way more of a problem in the short and long run.

                                                                1. 3

                                                                  The job of a programmer ought to be to efficiently transform data; with very invasive types, the job often changes to pleasing the type checker instead.

                                                                    I’ve always seen “pleasing/fighting the type checker” as a signal that my problem domain is necessarily complex. If I need to implement a stoichiometry calculator, for example, I’d much rather use dimensional or a similar library that allows me to say “we start with grams of propane, we end with grams of water” and let the type checker say that my intermediate equation actually makes sense. The alternative I could see is… what, extensive unit tests and review? Those rarely worked for me in chemistry class ;)

                                                                  1. 1

                                                                    Another upside of types is that they nudge you toward rendering them correctly as user-facing (or debugger-facing) strings.

                                                                  2. 1

                                                                    At work we have a custom units library built that wraps the units library. We also provide a bunch of instances for the numhask hierarchy. This pair has felt like a sweet spot. We mostly work with geometric quantities (PlaneAngle and Length), and have had great success lifting this up to vector spaces (e.g., Point V2 Length represents a point, V2 Length is a vector, (+.) lets us combine points and vectors, etc).

                                                                1. 46

                                                                  If there’s anything that should be required to use free software, it should be biomedical devices. To have a part of your body only modifiable and controllable by a corporation that keeps the details as proprietary secrets is like something out of a cyberpunk dystopia. And yet, this article is a great example of it happening!

                                                                  1. 5

                                                                    I guess the counter-argument is that (as a society) we don’t want random people tinkering with medical implants, for the same reasons we don’t want people making snake-oil medicine or amateur surgeons.

                                                                    I’d like to think there’s an acceptable middle-ground, but I’m not sure what that would even look like.

                                                                    1. 18

                                                                      I guess the counter-argument is that (as a society) we don’t want random people tinkering with medical implants, for the same reasons we don’t want people making snake-oil medicine or amateur surgeons.

                                                                      There are actually pretty robust communities working towards automated insulin delivery and (IIRC) CPAP machines. And on the medication front you have RADVAC. There’s a lot of interesting exploration going on here.

                                                                      I’d like to think there’s an acceptable middle-ground

                                                                      One option I heard that appealed to me: the FDA should require code escrow for all medical devices, and release code for any devices that are still in service after the manufacturer ends support.

                                                                      1. 7

                                                                        Code escrow is a good first step.

                                                                        But you need access to all the development tools like JTAG debuggers as well.

                                                                          Also, this all breaks down if the device firmware is locked down and/or requires signed code.

                                                                        1. 3

                                                                          Required signed code is fine - even good - but the key for signing code that runs inside your body should be your own private key.

                                                                      2. 17

                                                                        There’s nothing illegal or immoral about eating snake oil or trying to do surgery on yourself.

                                                                        Segfaulting my own cyberware is pretty scary, but somebody else segfaulting it and having no recourse is scarier. The acceptable middle ground is sufficient documentation that a professional is capable of fixing it, and an amateur is capable of hurting themselves, but hopefully wise enough to choose a professional. This is how medicine works on stock hardware already.

                                                                        1. 8

                                                                            Requiring something to be open source doesn’t imply that random people add random code.

                                                                            People also tend to run custom versions of big open source projects. However, I think you should have full access to the software installed in your body and, if you want to, should be able to have it changed, just like body modification at large.

                                                                            Will it be exploited by greedy companies? Certainly. But the same is true already for body modification and likely medicine at large. There’s always gonna be someone acting as unethically as they’re legally allowed to in order to maximize profits.

                                                                        2. 1

                                                                          There don’t seem to be many companies creating this kind of technology. Adding additional burdens on them doesn’t seem like a promising way to promote these systems.

                                                                          1. 21

                                                                            Medicine has never really been a place where market dynamics did either well or good.

                                                                            1. 11

                                                                              Yeah, this might not be a problem which the free market is capable of solving on its own. Good thing there are ways to fix that too.

                                                                          1. 4

                                                                              Network devices can fake long ping responses but they can’t fake short ones. If an IPv4 address block has WHOIS information stating they’re located in the US but servers in Dubai, Mumbai and Istanbul have ping times of less than 30ms, it is unlikely those addresses are being used in the US.

                                                                            I’m assuming you tried to control for IP anycast here? What was your methodology to do that?

                                                                            1. 4

                                                                              Yep - anycast is really easy to detect with our probe network! If we get ping times from multiple different locations that suggest faster-than-light travel then the IP must be anycast (ie. exist in more than a single location).
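
                                                                                The back-of-the-envelope bound is easy to sketch (a hypothetical example, not the actual probe logic; the coordinates and thresholds are made up, and real fibre paths are slower than straight-line light speed, which only strengthens the argument):

                                                                                    -- Flag an observed RTT as physically implausible if it beats the
                                                                                    -- round-trip time of light in fibre (~200 km per millisecond, one way).
                                                                                    greatCircleKm :: (Double, Double) -> (Double, Double) -> Double
                                                                                    greatCircleKm (lat1, lon1) (lat2, lon2) =
                                                                                      let rad d   = d * pi / 180
                                                                                          central = acos (sin (rad lat1) * sin (rad lat2)
                                                                                                          + cos (rad lat1) * cos (rad lat2) * cos (rad (lon2 - lon1)))
                                                                                      in 6371 * central                  -- Earth radius in km

                                                                                    minRttMs :: Double -> Double
                                                                                    minRttMs km = 2 * km / 200           -- out and back, at fibre speed

                                                                                    plausible :: Double -> Double -> Bool
                                                                                    plausible km observedMs = observedMs >= minRttMs km

                                                                                    main :: IO ()
                                                                                    main = do
                                                                                      let newYork = (40.71, -74.01)      -- where the WHOIS record claims the block lives
                                                                                          dubai   = (25.20, 55.27)       -- where a probe actually is
                                                                                          km      = greatCircleKm newYork dubai
                                                                                      print (minRttMs km)                -- ~110 ms: the best-case NYC<->Dubai round trip
                                                                                      print (plausible km 28)            -- False: a 28 ms ping from Dubai rules out a US host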

                                                                            1. 2

                                                                              Neat article that helped give some formal structure to what I’ve been doing. When I write user facing code, I’ve always tried to use my type system to catch risks like this: a username isn’t just a username, it’s a TaintedUsername and you then have to explicitly go through some guarded path to get untainted data. This always fit nicely with the ports and adapters model - your core logic only accept clean things, your interfaces only handle tainted things.

                                                                                Of course, there were huge limitations here. If the library I grafted in would give you root when the user passed in {#root-shell-please!}, I had to know to look for that string in my cleaning function, to aggressively whitelist only certain inputs, or to configure the library with less magic. So the unknown unknowns were always scary.

                                                                              Now I’m going to read the linked thesis and discover this problem was solved decades ago ;)

                                                                              1. 2

                                                                                Parse, don’t validate is even closer to what you’re describing. Capabilities are about securing access to “the outside world” (filesystem, network, …) while you’re talking about data in the application. These things are complementary and similar in some ways but not exactly the same thing…

                                                                                1. 2

                                                                                  Isn’t user generated content always tainted? What does cleaning it mean? If you put a username into your DB you protected against SQL injection, but that string is still tainted and unsafe to stick in an HTML attribute, for example.

                                                                                  1. 2

                                                                                    If I’m correctly guessing what you’re saying, here’s some advice: don’t think of a username with special characters in it as “tainted”. It’s not tainted. It’s a regular string, and its possible values are all possible strings (possibly with a length limit or whatnot).

                                                                                    However, don’t confuse strings with SQL code snippets or HTML snippets. “'” (a single quote) is a valid string, and a valid HTML snippet, but not a valid SQL snippet. “&” is a valid string, and a valid SQL identifier snippet, but not a valid HTML snippet.

                                                                                    If you write:

                                                                                    "<p>Hello, <b>" + username + "</b>!</p>"
                                                                                    

                                                                                    that’s akin to a type error. The type of HTML should be different than the type of strings. A “better” way to write this would be

                                                                                        HtmlP(HtmlText("Hello, "), HtmlBold(HtmlText(username)), HtmlText("!"))
                                                                                    

                                                                                    Notice the implied type signature of HtmlText: it takes a string and turns it into HTML. Now this is verbose enough that you can see why it’s common to represent HTML as text. Which is an OK thing to do, as long as you realize it, and realize that when you jam a username into HTML, you need to do the type conversion from string to HTML by escaping special characters in the username.
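
                                                                                    To make that concrete, here’s a small sketch of the idea in Haskell (the Html / htmlText names are made up for illustration, not from any particular library); it flattens the HtmlP-style tree above into a plain wrapper, but the point is the same: the only way to get an arbitrary string into Html is through the escaping function, so the type records that escaping has already happened.

                                                                                        newtype Html = Html String   -- in a real library the constructor would not be exported

                                                                                        instance Semigroup Html where
                                                                                          Html a <> Html b = Html (a ++ b)

                                                                                        -- The string-to-HTML conversion: escape anything with special meaning.
                                                                                        htmlText :: String -> Html
                                                                                        htmlText = Html . concatMap escape
                                                                                          where
                                                                                            escape '<'  = "&lt;"
                                                                                            escape '>'  = "&gt;"
                                                                                            escape '&'  = "&amp;"
                                                                                            escape '"'  = "&quot;"
                                                                                            escape '\'' = "&#39;"
                                                                                            escape c    = [c]

                                                                                        htmlBold :: Html -> Html
                                                                                        htmlBold (Html s) = Html ("<b>" ++ s ++ "</b>")

                                                                                        greeting :: String -> Html
                                                                                        greeting username = Html "<p>Hello, " <> htmlBold (htmlText username) <> Html "!</p>"

                                                                                        main :: IO ()
                                                                                        main = let Html out = greeting "Mallory <script>" in putStrLn out
                                                                                        -- prints: <p>Hello, <b>Mallory &lt;script&gt;</b>!</p>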

                                                                                  1. 7

                                                                                      Interesting - I’ve not had much experience with ASN.1, but I’ve heard consistent gripes about the TLV encodings it defines being really bloated and making parsers difficult to write. IIRC, Mozilla and OpenSSL have had security-critical bugs arise from misparsing TLS certificates, rooted in ASN.1 parsing problems.

                                                                                    1. 7

                                                                                        This is an article I’ve been trying to write for a while, and failing. The reason is, I can’t come up with the clear-cut “use x not y” scenario here. Monoliths are easier from an infrastructure perspective, but in my experience, working on large monoliths at a certain scale becomes extremely difficult. You need a lot of practical experience with the monolith before you can start contributing meaningful improvements and overall feature velocity drops.

                                                                                        I suspect that’s why microservices took off, in part due to CTOs losing interest in the amount of time it took to ship a new feature. Suddenly you could leave the cruft behind and embrace the specific language and framework best suited to the problem at hand, not “the tool you were forced to use because that’s how the monolith works”. However, it does add a ton of operational cost in terms of management. Tracing, a nice-to-have in a monolith, becomes a real requirement, because you need to see where the request is failing and why. Your metrics and logging platform becomes a genuinely core production system, but at any time one of your services can crush it.

                                                                                      I think if I were starting a company today I would likely start with a monolith, only because it would keep my application complexity down. Then as I grew I would break off services into microservices, but I don’t know if this is “best practice” or simply the pattern I’ve seen work ok.

                                                                                      1. 10

                                                                                        You need a lot of practical experience with the monolith before you can start contributing meaningful improvements and overall feature velocity drops.

                                                                                        I keep seeing people make that argument, but I never really understand it. I can’t imagine what architectural benefit is gained by having the network boundary between the components of your system? How is an HTTP request better than a function call? In what world do you get better IDE, debugging etc. support for making an HTTP request compared to making a simple function call? How is it helpful that whenever you make an HTTP request instead of a function call there’s the possibility that that there might be network delays or version differences between the two components talking?

                                                                                        And before anyone replies with “but with HTTP requests you get better logging, you have tracing frameworks etc. etc.”, what stops you from logging and tracing the same things through function calls? And before anyone replies with “but with microservices, the requests need to be self-contained, so they result in better decoupling”, what stops you from designing the internal structure of a monolith the same way? (I.e. passing around immutable data, instead of turning your codebase into OOP spaghetti?) I think microservices force teams to write more pure and functional code and that’s why people perceive architectural benefits in them, but the only thing stopping you from writing your entire application in a functional style is your habits IMO…

                                                                                        So, I think microservices are only about performance and scaling (the performance kind, not the complexity kind).

                                                                                        1. 7

                                                                                          I keep seeing people make that argument, but I never really understand it. I can’t imagine what architectural benefit is gained by having the network boundary between the components of your system? How is an HTTP request better than a function call? In what world do you get better IDE, debugging etc. support for making an HTTP request compared to making a simple function call? How is it helpful that whenever you make an HTTP request instead of a function call there’s the possibility that that there might be network delays or version differences between the two components talking?

                                                                                          You’re looking at microservices through the technical lens, and through that lens you are correct: they’ll almost always fail. But your analysis doesn’t consider people.

                                                                                          In my experiences with large development teams, microservices shine when you need to scale people. When you have an application that becomes so large that the test suite is irreducibly slow, QA takes a week to complete regression, release engineering means coordinating hundreds of simultaneous changes in diverging areas of the system, that one developer can’t keep the features of the product in their brain at one time, and you’re still working on dozens & dozens of new features and the requests aren’t slowing down…

                                                                                          @maduggan mentioned “feature velocity” – they’re spot on. When the 10+ year old monolith I am working on decomposing was a baby, you could launch new functionality in minutes to hours. Now, two weeks is the lower bound… and it’s not just a procedural lower bound anymore.

                                                                                          Microservices let you get back to better velocity and better fault tolerance – if you build for it! – but you pay for it in exchange for more difficult debugging, slower performance, etc etc. IME, those are the worst tradeoffs to take when just starting a product, so always start with a monolith. Worry about the problems you’ll face when you’re a big success if/when you become that big success!

                                                                                          1. 4

                                                                                            In my experiences with large development teams, microservices shine when you need to scale people. When you have an application that becomes so large that the test suite is irreducibly slow, QA takes a week to complete regression, release engineering means coordinating hundreds of simultaneous changes in diverging areas of the system, that one developer can’t keep the features of the product in their brain at one time, and you’re still working on dozens & dozens of new features and the requests aren’t slowing down…

Couldn’t all of these issues be solved more efficiently by splitting the monolith up into packages? (I.e. instead of moving something to a separate service, distribute it as a package and consume it just like you would any other 3rd party software.)

The only overhead I can think of is that you may need to host your own repo mirror (in the worst case). You may also need to handle sunsetting of older, incompatible package versions, but this release synchronization problem already exists in microservices, so it’s not really a strong argument against packaging.

                                                                                            1. 4

In some cases I’ve found that pattern to work really well. One of my current monsters is a Ruby application, and partitioning functionality into individually versioned Rubygems, hosted in the same repo that holds our vendored dependencies, has let us introduce a hub & spoke model. You can isolate development in those gems, they can have their own test/deployment/release cadences, and the core application can pull them in as they are ready.

                                                                                              But it’s not a silver bullet. As an example, we’ve had to introduce new software and infrastructure to change how our core search process works. We decided to cleave that as a microservice: the technology stack is necessarily different from the core application, and hanging it as a sidecar let us make use of an existing OLTP -> reporting stream. No new infrastructure to the core application, no changes to tap into the data, and a nicely defined failure domain we can take advantage of: the search microservice lives behind a circuit breaker and we always have the ability to fall back to the existing (horrifically slow and expensive) search.

                                                                                              Another place where I’ve felt pain is with acquisitions. Your nice homogeneous company can become a sprawling polyglot with very different infrastructure and deployments. Exposing an API / RPC endpoint is often the least common denominator you have.

                                                                                              1. 1

                                                                                                We decided to cleave that as a microservice: the technology stack is necessarily different from the core application.

I think this is a solid argument for extracting a service, at least under certain circumstances, but I wouldn’t call it a microservice architecture just because you support your monolith with a few tiny supporting services with well-defined purposes.

BTW, having to use a different stack doesn’t always have to mean extracting a separate service. Whenever I need to cross the language/ecosystem barrier, I first evaluate whether FFI is an option; if not, I consider writing that “service” as a one-shot CLI application and spawning it as a child process from my service every time I’d otherwise make a network request (making sure that the executable is reproducibly built and injected into your path is very easy with Nix, for instance). I know these are painful options compared to just writing a new function or a new package, but I think they’re still less painful than writing a new service.
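
A rough sketch of that second option in Python, assuming a hypothetical `render-report` executable already on $PATH (e.g. put there by Nix) that reads JSON on stdin and writes JSON on stdout; the executable name and the JSON-over-stdio contract are invented for illustration:

    import json
    import subprocess

    def render_report(payload: dict) -> dict:
        """Call the other-language component as a one-shot child process
        instead of over the network."""
        proc = subprocess.run(
            ["render-report"],           # hypothetical CLI on $PATH
            input=json.dumps(payload),
            capture_output=True,
            text=True,
            check=True,                  # raise if the child exits non-zero
        )
        return json.loads(proc.stdout)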

                                                                                                Another place where I’ve felt pain is with acquisitions.

                                                                                                Yeah, that does sound hairy…

                                                                                              2. 4

                                                                                                If we split a monolith into distinct packages and those packages can only interact with each other through strict interfaces, then I’d argue you’ve implemented micro-services that run inside of a single execution environment. The organisational benefits are the same in both models.
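
As an illustration (not from the comment above), a strict in-process boundary can be as small as this Python sketch; the package and type names are invented:

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Invoice:
        """The only data type the 'billing' package exposes to the rest
        of the monolith; everything else stays private to the package."""
        customer_id: str
        total_cents: int

    class BillingService(ABC):
        """The contract other packages program against, exactly like a
        service API, except the call stays in-process."""
        @abstractmethod
        def invoice_for_order(self, order_id: str) -> Invoice: ...

Whether the implementation behind BillingService runs in the same process or behind an RPC boundary then becomes a deployment detail rather than an architectural one.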

                                                                                                1. 3

                                                                                                  I strongly favor monoliths where possible, but this isn’t true. Teams releasing bugs in their packages can stall a release of the monolith if they cause a rollback. The packages approach definitely scales better than an entangled monolith if you’re doing it right, but at some level of scale those packages will need to be broken out into services with their own ops teams. As a hyperbolic example, suppose all of Google was a monolith—barring obvious technical limitations. It wouldn’t work.

                                                                                                  In my eyes, the real problem is people dramatically underestimate the development scalability of a well designed monolith. Or have problems with a poorly designed monolith and jump to microservices right away rather than refactoring their code.

                                                                                                  1. 1

I’m not sure I totally agree. If you: a) allow your software to be released continuously as opposed to in large staged releases, then an individual change is a small rollback, not an entire feature; or b) use feature flags to enable new features (and expose bugs), then rolling back a release might be as simple as unflagging instead of doing a new deploy; or c) release each package independently as opposed to all at once, then you can roll back a single package instead of the whole application (consider, say, hot reloading a single module with Erlang/Elixir).

That’s not to say you should do these things, but more to call out that proper, scalable package-based development is much more similar to microservices than we normally acknowledge. It’s easy to conflate monorepo problems with monolith problems with deployment-practice problems with release-strategy problems and so on, but they’re actually different problems, and certain things aren’t necessarily incompatible.

                                                                                                    1. 1

I think I was unclear. I meant they aren’t completely equivalent. Yes, the benefits are similar, but at some level of scale you will need to split up into services. To express it mathematically: suppose each team causes a rollback every 1000 years, but you have infinitely many teams. Your monolith rollback rate is now 100%.
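
To spell out that arithmetic (my own back-of-the-envelope sketch, treating the 1-in-1000 as a per-release probability per team): if each of n independent teams forces a rollback with probability p, the release survives with probability (1-p)^n, so the rollback rate is 1-(1-p)^n, which approaches 100% as n grows.

    # Chance that at least one of n independent teams forces a rollback,
    # when each team does so with probability p for a given release.
    def rollback_rate(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    p = 1 / 1000
    for n in (1, 10, 100, 1_000, 10_000):
        print(n, f"{rollback_rate(p, n):.1%}")
    # 1 0.1%
    # 10 1.0%
    # 100 9.5%
    # 1000 63.2%
    # 10000 100.0%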

                                                                                                      1. 2

But why does a package rollback have to mean a system rollback? Imagine you have organized your system into n packages, each of which (or subsets of them) is somehow versioned independently. Then you have a topmost layer where you bind these together, specifying which versions come from where, and build the monolith. If one of the packages turns out to have a bug, you just revert that package by configuring the topmost layer to pull its previous version and rebuild -> deploy.
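
For illustration only, that topmost layer can be nothing more than a version manifest consumed by the build; a rollback is then a one-line change (names and versions invented):

    # The "topmost layer": the single place that records which version of
    # each internal package goes into this build of the monolith.
    PINNED_PACKAGES = {
        "billing":   "3.4.1",
        "search":    "2.0.0",   # 2.1.0 had the bug, so it's pinned back here
        "reporting": "1.9.7",
    }

Rebuilding and redeploying with one pin changed is the “revert that package” step described above.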

                                                                                                        1. 1

                                                                                                          That’s a rollback with more steps. You’re waiting on a full rebuild, and by reverting one package and not the others you’d be pushing an untested configuration into production. People roll back to the last known-good build for a reason, whack-a-mole in prod is not a fun game.

                                                                                                          If you have perfectly clean interfaces exactly as a discrete service would have, it could work, but you’d still be waiting on a rebuild, instead of rolling back the 1-10% of canary instances immediately. And if you have enough packages getting individually rolled back you’d be constantly rebuilding and redeploying one binary to a huge fleet.

                                                                                                          @rslabbert’s point about hot code reloading could solve these issues, but in practice few people use languages with that capability. The Erlang VM is enough of an “operating system” that you could argue code reloading modules isn’t that different than redeploying containers on k8s, and message passing isn’t that different than connecting services with RPCs. In other words, the hot code reloading solution also demonstrates the desirable properties of having discrete services.

                                                                                                          1. 2

                                                                                                            by reverting one package and not the others you’d be pushing an untested configuration into production … whack-a-mole in prod is not a fun game.

                                                                                                            How is that different than rolling back one microservice and not the others?

                                                                                                            I think the rest of your argument is primarily about performance and I readily admit that microservices will eventually win in any performance-related direction you pull the scale to infinity.

                                                                                                            1. 1

                                                                                                              Like I said, it’s the same if you assume that your packages are equally as isolated as the equivalent microservices. Except the ergonomics of releases and rollbacks become more complicated for ops teams. It’s not really about performance, it’s about the ability to respond to incidents quickly. There are myriads of performance issues—mostly related to locality—that I’ve chosen to ignore for this discussion. One of the biggest advantages of monoliths is operational simplicity, take that away and they’re a lot less compelling.

                                                                                                              1. 2

I don’t think there are fundamental reasons why the ergonomics have to be worse. If you’re fine with having a network interface between the two sides of a function call, then that means you can introduce enough dynamic dispatch that at the layer where you aggregate your “micropackages”, the rebuild only involves rebuilding the packages you’re rolling back.

                                                                                                                That said, I acknowledge the status quo when it comes to the availability of tools for the ops teams. Microservices are at one end of an architectural spectrum, and there’s a natural tendency for people to meet at the ends of spectrums and then the network effects kick in.

                                                                                                            2. 1

                                                                                                              untested configuration

                                                                                                              Yeah, but that’s the point! :) Opting in to a service architecture necessarily means losing the concept of a single testable configuration, in this sense. This is a good thing! It acts as a forcing function toward improved observability and automation, which ultimately yield more robust and reliable systems than QA-style testing regimens.

                                                                                                  2. 2

                                                                                                    Packages are worse than monoliths and worse than services IME. Teams pin to package versions, requiring massive coordination when rolling out changes — and you don’t understand what client code calls you, and so understanding impact on perf, IO, etc is much harder. And if pinning is banned, rolling out changes is even harder: the engineer making the change has to understand every callsite from every team and is inevitably responsible for anything that goes wrong.

                                                                                                    Services make understanding the full lifecycle of your code basically trivial: you know latency of your API endpoints, you know how much IO it’s doing, you know every bottleneck in your own code. Making a small improvement is easy, and easy to roll out. Not so with packages.

                                                                                                    Services are one approach to scaling teams. Monoliths are another, although people vastly underestimate how hard they are to scale to large numbers of engineers. I worked at Airbnb, which went the service route, and FB, which went the monolith route: both worked, and IMO FB worked better — but FB invested far, far, far more resources into scaling the monolith, including staffing teams to create new programming languages (Hack), VMs (HHVM), and IDEs (Nuclide). Airbnb largely used off-the-shelf components. It’s def a tradeoff.

                                                                                                    For small teams it’s no comparison though: monoliths are easy, services are a pain.

                                                                                                    1. 1

                                                                                                      Teams pin to package versions, requiring massive coordination when rolling out changes

                                                                                                      True - I guess this is both a blessing and a curse since teams can keep on using older versions if they need to - the flip side is you also risk having an unknown amount of old versions floating around at any given time. I can definitely see scenarios where this could become a problem (ie you quickly need to address a CVE)

                                                                                                      the engineer making the change has to understand every callsite from every team and is inevitably responsible for anything that goes wrong.

                                                                                                      Don’t microservices also suffer from this problem? (Except now you don’t have the benefit of static analysis)

                                                                                                      1. 1

                                                                                                        Don’t microservices also suffer from this problem? (Except now you don’t have the benefit of static analysis)

                                                                                                        Not really. With a service, your team knows roughly every codepath that might hit disk, open a DB connection (or attempt to reserve a thread in the DB connection threadpool), etc — because only your own team’s code can do that, since the machines are partitioned away from clients. Basic monitoring + a staged rollout is pretty safe, and if something goes wrong, the expertise for understanding how to fix it lies on your direct team. Every caller is logged via however you’re logging your API requests, so as long as those graphs look good, you’re good.

                                                                                                        With a “modular monolith,” IME a lot more human complexity happens, because you’re more likely to need to work cross-team when you make changes: any client could be hitting disk, opening connections, etc, because your code for doing that is shipped directly to them. And working cross-team is slow.

                                                                                                        1. 1

                                                                                                          Sorry, maybe I was unclear - in a microservices setup you still have to be aware of all the calls made in to your system so as not to accidentally introduce breaking changes to the protocol.

                                                                                                          If you ship your code as a library the other teams probably have a test suite and / or static analysis tools that exercises the integration between your code and theirs, which they can run as part of an upgrade.

                                                                                                          With microservices this isn’t possible, so you usually have to resort to having a CI environment that’s a poor man’s version of the prod environment. In my experience the QoS of the CI environment is way worse than its prod equivalent, which usually causes slow and brittle tests.

Also, intentionally breaking changes to your microservice have to be carefully coordinated between all teams that use it, and are usually preceded by a period in which a backwards-compatible version has to be maintained in parallel.

                                                                                                          Contrast this to the library approach where teams can opt in to new versions on demand, so there’s no need for a “compatibility layer”. Of course this has problems as well (eg what if your team’s library uses log4j?) but I would say it’s a viable alternative to microservices if your primary concern is code splitting.

                                                                                                          1. 2

                                                                                                            If by “opt in to new versions on demand” you mean that teams can pin to old versions of your library… That’s true! But, it’s not the unpinned “modular monolith” others were describing. Pinning carries its own issues with it:

                                                                                                            • Fixing important bugs? Time to patch the universe, because no one is on the latest version, and you’ll need to upgrade all of them. Or, you’ll need to update many old versions of your own code.
                                                                                                            • Fixing non-critical, but noticeable bugs? Whatever support channel you have open with clients will get pinged forever about the bug in the old version.

                                                                                                            Services have many technical downsides, but they’re nice at removing human communication bottlenecks. At a small company those don’t exist, at a big one they dominate almost all other work. There are other ways to skin the cat, but they usually require at least as much effort as services — I haven’t seen a perfect solution yet.

                                                                                                      2. 1

                                                                                                        That depends on how the packages are structured. I’ve done the modular monolith approach using a monorepo that did not use package versioning, and this structure works much better than a regular monolith.

                                                                                                        The true purpose of decomposing into modules, IMO, is not versioning but isolation. You can ensure that there is no implicit coupling between modules by running the test suite without any undeclared dependencies. Shopify has taken this even further with their static analysis tooling.

                                                                                                        1. 1

                                                                                                          Shopify has taken this even further with their static analysis tooling.

The Shopify tooling was mostly a play to keep the engineers working on it from leaving, AFAICT. The monolith there has been so stalled for years that teams actively build things as services that would actually have to go into the monolith to work right, just to get anything done at all.

                                                                                                          1. 1

                                                                                                            source for this?

                                                                                                            1. 1

                                                                                                              My 5 years of employment at Shopify :)

                                                                                                    2. 1

In our case, while we started out with a microservice for political reasons, it actually turned out to be the right call. We started out receiving requests to our service via SS7 (the telephony network), which required linking to a commercial SS7 network stack that only has support for specific versions of Solaris. Over time, it became a requirement to support SIP (over the Internet). Since we started with a microservice (an SS7 interface and the business logic), it was easy to add the SIP interface; had it been a monolith, we would have had to support two separate versions of the code.

                                                                                                      Velocity is not a concern here, since our customers are the Oligarchic Cell Phone Companies (it took us five years to get them to add some additional information in the requests they send us). Performance is a concern, since our requests are in real time (they’re part of the call flow on the telephone network).

                                                                                                    3. 2

I keep seeing people make that argument, but I never really understand it. I can’t imagine what architectural benefit is gained by having the network boundary between the components of your system? How is an HTTP request better than a function call? In what world do you get better IDE, debugging etc. support for making an HTTP request compared to making a simple function call? How is it helpful that whenever you make an HTTP request instead of a function call there’s the possibility that there might be network delays or version differences between the two components talking?

The main benefit of a service-oriented architecture (or microservices, I guess, but that’s a semantic distinction I’m not really interested in exploring right now) is that services can be deployed and updated independently of one another. Folks who make this choice are often choosing to trade social coordination problems for distributed systems problems, and at a certain point, it’s the right call. People who choose this architecture often don’t make raw HTTP calls to other services—the service contract is defined through things like gRPC, Thrift, or Smithy, which allows for the automatic generation of client/server stubs. I’ve found it to be a very pleasant development experience.

                                                                                                      The other benefit of a service-oriented architecture is the blast radius reduction of an outage, but that’s a bit more rare.

                                                                                                      1. 1

                                                                                                        … services can be deployed and updated independently of one another. Folks who make this choice are often choosing to trade social coordination problems for distributed systems problems …

In {my-language-of-choice}, I use a lot of packages written by people that I have never met and never talk to. They don’t ask me when they want to release new versions of their packages. Though I guess there’s an important point to make here: not many languages support having multiple versions of the same package in the same process, so if different teams want to depend on different versions of the same package, that forces them into separate processes. Depending on the technology choices, that could indeed inevitably shatter the monolith, but a shattered monolith still doesn’t have to mean “every team develops a microservice”. In my experience, people reach for creating new services too quickly, when they could just as well be maintaining a package that a few “macroservices” could pull in and update at their leisure.

                                                                                                      2. 2

                                                                                                        Operationally we have a lot of tools that let us look at HTTP requests, and run things on different machines.

                                                                                                        You have a web app that has a 3D rendering feature. Having two processes instead of function calls lets you properly provision resources instead of doing “our web app requires GPUs because 1% of requests spin up blender”

                                                                                                        Similarly while you can have instrumentation for function calls, having stuff broken up at the process level means your operations teams will have more visibility into what’s going on. By making that layer exist they could independently do things like move processes to different machines or add debugging only to certain parts.

                                                                                                        It seems like it’s futzing around but if you are bought into kubernetes and the like this stuff isn’t actually as hard (security is tricky tho)

                                                                                                        1. 1

                                                                                                          Operationally we have a lot of tools that let us look at HTTP requests

                                                                                                          How are they better than wrapping your function with a decorator (or whatever your language supports) that logs the arguments and the stack trace at that point along with the time it took to execute etc.?
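
To make that concrete, here is a minimal sketch of such a decorator, assuming Python and the standard logging module; the logger name and the decorated function are made up for illustration:

    import functools
    import logging
    import time
    import traceback

    log = logging.getLogger("calls")  # hypothetical logger name

    def traced(fn):
        """Log arguments, caller location, and duration of every call."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            caller = traceback.extract_stack(limit=2)[0]  # the frame that called us
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                log.info("%s args=%r kwargs=%r from %s:%s took %.1fms",
                         fn.__name__, args, kwargs,
                         caller.filename, caller.lineno, elapsed_ms)
        return wrapper

    @traced
    def resize_image(path, width):  # hypothetical business function
        return (path, width)

The same wrapper could push a span into whatever tracing library you already use; the point is only that a function-call boundary can be made just as observable as an HTTP boundary.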

                                                                                                          You have a web app that has a 3D rendering feature.

                                                                                                          That’s certainly a use case that justifies spinning up a new service! But I find it unlikely that those people that boast having 2500 distinct microservices in production actually have 2500 genuinely unique hardware requirements, like I can only simulate this flux capacitor on that quantum FPGA we’ve bought for $100M. If you have a few services lying around supporting narrowly defined needs of a central monolith, I wouldn’t call that a microservice architecture.

                                                                                                          having stuff broken up at the process level means your operations teams will have more visibility into what’s going on… move processes to different machines or add debugging only to certain parts.

This is a little fuzzy, because: a) it assumes that your operations teams can understand the workings of a web of microservices, but not the web of function calls in your monolith, or how a bunch of packages work together. Imagine if you composed your application out of “micropackages”, to make an analogy, along with a topmost layer that pulls those micropackages together: what stops your operations teams (capable of understanding the microservice mesh) from isolating, modifying and deploying your micropackages? b) I can’t tell how much of the benefit is actually performance-related (like “it seems like this functionality is consuming more CPU than is available to the rest of the application”), in which case it goes back to my original argument that microservices are about performance.

                                                                                                        2. 1

For one, you can mock HTTP “calls” in basically any language. The difficulty (or even possibility) of mocking functions varies greatly between languages.

                                                                                                          1. 1

How is it possible that you can turn a piece of functionality into an HTTP endpoint, register it somewhere so that its consumers can find it, and have that registration point be flexible enough that you can redirect the consumer or the producer to a mock version of the other side, but you can’t do the same without leaving your process? In the worst case (not a real suggestion, but a lower bound), why can’t you just register your function in a global “myServices” dictionary, a key-value store that binds names to functions (classes, objects, function pointers, etc. etc.), and then whenever you want to call it, just grab your function from the dictionary and call it? I know this is less practical than just importing a module and calling its functions, but certainly not more so than turning the whole thing into an HTTP workflow.
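
That lower-bound version really is only a few lines; a sketch in Python, with invented names:

    # A process-local "service registry": names bound to plain callables,
    # swappable for mocks exactly the way an HTTP endpoint would be.
    my_services = {}

    def register(name):
        def deco(fn):
            my_services[name] = fn
            return fn
        return deco

    @register("thumbnail")
    def make_thumbnail(image_bytes, size):
        return image_bytes[: size[0] * size[1]]  # stand-in "work"

    # Consumer side: look the "service" up by name and call it directly.
    thumb = my_services["thumbnail"](b"\x00" * 1024, (16, 16))

    # In a test, point the name at a mock instead of standing up a server.
    my_services["thumbnail"] = lambda data, size: b"fake"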

                                                                                                            1. 2

                                                                                                              Everything you’re asking about in this thread is entirely possible. Here is an example of an architecture that can be deployed as independent services or as a single monolith. Same code, different wire-ups.

                                                                                                              The reasons to prefer multiple services are almost all organizational rather than technical; service-oriented architectures are almost always technically worse than alternatives. But most organizations of nontrivial size are not constrained by technical bottlenecks, they’re constrained by logistical ones.

                                                                                                        3. 3

                                                                                                          at a certain scale becomes extremely difficult.

                                                                                                          Yes, but most places aren’t at a certain scale.

                                                                                                          1. 2

                                                                                                            Then as I grew I would break off services into microservices…

                                                                                                            Agreed. I’d add that splitting off services that are non-essential is a good way to start. That way you can figure out how you want to “do” microservices (and your team can get up to speed) without being overly concerned about downtime. You also get the resiliency benefits since the microservice can, in fact, go down without taking the entire app with it. My personal view is that a core monolith (macroservice?) that contains everything that is absolutely essential to the product with a bunch of non-essential microservices supporting it is often the sweet spot.

                                                                                                            1. 1

                                                                                                              Microservices as a tool for scaling organizations. When done well, the contract between services is clearer — it’s just the surface-level API + SLAs. Whereas multi-team monoliths need a TPM to coordinate work.

                                                                                                              On a purely technical level, microservices are worse than monoliths in almost every way. But they’re incredible for scaling out organizations.

                                                                                                            1. 6

                                                                                                              The best way to do this is to convert from awkward “Microsoft-ish quoted CSV” to “strict TSV” as a pipeline preprocessing stage (which can also run in parallel from the pipe consumer). This can be like a 100 line program in a fast language like Nim. Run time is similar to a less cautious "tr , \\t". You can always save the converted data if you need to repeat any processing a lot.
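
A hedged sketch of such a preprocessor in Python rather than Nim (slower, but it shows the shape): read quoted CSV on stdin, emit one record per line with real tab separators, substituting any embedded tabs or newlines so downstream awk/cut can split naively:

    #!/usr/bin/env python3
    # csv2tsv: quoted CSV on stdin -> "strict TSV" on stdout.
    import csv
    import sys

    def clean(field):
        # Strict TSV cannot represent embedded tabs or newlines; substitute them.
        return field.replace("\t", " ").replace("\r", "").replace("\n", " ")

    for row in csv.reader(sys.stdin):
        sys.stdout.write("\t".join(clean(f) for f in row) + "\n")

Then something like ./csv2tsv < input.csv | awk -F'\t' '…' splits cleanly, with the quoting handled once, up front.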

                                                                                                              Parenthetically, I think it is unfortunate that a purely informational RFC (explicitly not a standard) and/or UIs have allowed “CSV” to come to mean (in some circles such as text in the linked article) one very specific “quoted CSV” format. Classic “awk -F,-like CSV” or maybe “simple split CSV” is perfectly sensible already. If commas or newlines can occur in fields then the data is not really “separated”. So, “CSV” is a contradiction, not a description.

                                                                                                              Anyway, in my opinion, given the unfortunate situation now people should always qualify what kind of “CSV”, e.g. “RFC4180 CSV” or at least “quoted CSV”. Awk handles “splittable CSV” out of the box. It is probably a pipe dream to hope clear terminology prevails though.

                                                                                                              1. 2

                                                                                                                The best way to do this is to convert from awkward “Microsoft-ish quoted CSV” to “strict TSV” as a pipeline preprocessing stage (which can also run in parallel from the pipe consumer).

                                                                                                                My “oh god, CSV again, come on” pipeline starts with:

                                                                                                                xsv fmt --quote-always -t (echo -en '\x1C') input.csv

using @burntsushi’s wonderful xsv. You have to deal with more ceremony around quoting, but at least it is unambiguous.

                                                                                                                It is probably a pipe dream to hope clear terminology prevails though.

                                                                                                                I once specified CSV as an input format for a file from customers to try to be accommodating. It didn’t go well. I think I’ve had better luck with fixed-length files.

                                                                                                                1. 1

                                                                                                                  It may vary with your data, but I just timed c2tsv as ~3X faster than xsv on just selecting all columns and reformatting that World Cities Population test file.

                                                                                                                  For awk, as per the article, unquoted but with “out of the way” delimiters is probably more convenient than --quote-always, no?

                                                                                                                  Anyway, it’s best to transform to the most CPU/prog.lang friendly format possible, and best to do that only once. For compiled languages, that is binary, but c2tsv can be a useful preprocessor for a later stage parse to binary. That is why it is in that nio package. You can almost think of such loaders as “compiling your data”. Then you just want good “debuggers” like nio print to debug your data. (EDIT: but, of course, this is “engineering” not “customer management” which is, as you suggest, a whole other can of worms.)

                                                                                                              1. 4

                                                                                                                The same authors also propose allowing use of 0/8 and 240/4.

                                                                                                                1. 15

                                                                                                                  240/4 feels like the only one that could have legs here. I can’t see a world where 0/8 and 127/8 are anything but eternal martians, with anyone unlucky enough to get an IP in that space just doomed to have things never work.

                                                                                                                  Can we just have IPv6 already? :/

                                                                                                                  1. 4

                                                                                                                    totally agree, we should have had IPv6 10 years ago - and yet here in Scotland my ISP cannot give me IPv6.

                                                                                                                    1. 2

                                                                                                                      Vote with your feet and change ISP.

                                                                                                                      1. 4

                                                                                                                        Neither of the two broadband ISPs available where I live provide IPv6. Voting with my feet would have to be uncomfortably literal.

                                                                                                                    2. 2

Call me naive, but I’m actually not sure if 0/8 would be such a big problem. I’ve surely never seen it actively special-cased like 127/8. Which might just mean my experience with different makes of switches etc. is not the best, but for 127/8 I don’t even need to think hard about 10 things that would break, whereas 0/8 is more like “I’d have to check all the stuff and it might work”.

                                                                                                                      1. 1

That’s weird, I thought I’d seen an IP address like 0.3.something publicly routable. Not completely sure, but I vaguely remember seeing something along those lines and thinking it was weird.

                                                                                                                      2. 7

                                                                                                                        Allocating 240/4 is the most realistic option because equipment that has hardcoded filters for it is either some really obscure legacy stuff that shouldn’t even be allowed access to the public Internet (like any unmaintained networked device) or, if maintained, should have never had it hardcoded and should be fixed.

                                                                                                                        Maybe it’s their secret plan: make two infeasible proposals to make the 240/4 proposal look completely sensible by comparison. ;)

                                                                                                                        1. 2

                                                                                                                          In all seriousness, I don’t think you have any concept of how much aging network kit there is out in the world which will never see a software upgrade ever again (either because the manufacturer don’t release them anymore, or because “it ain’t broke, why fix it?”).

                                                                                                                          1. 1

                                                                                                                            I know it quite well, but whose problem is it? Those people are already at a much greater risk than not being able to reach newly-allocated formerly reserved addresses.

                                                                                                                            1. 1

                                                                                                                              That may be the case but it’s ultimately everyone’s problem — there are network operators who will end up having to take on the support burden from users who can’t reach these services (whose hands may be tied for other reasons, e.g. organisational, budgetary etc), there are service operators who will end up having to take on the support burden from users who can’t reach their services (who can do basically nothing because it’s your network problem not ours), and there are users who will no doubt be unhappy when they can’t reach these services and don’t understand why (my friend or colleague says this URL works but for me it doesn’t).

                                                                                                                      1. 18

This “the browser’s default styles suck, therefore every webpage has an imperative for CSS” meme is depressing. Like, yes, I sort of agree that since browser defaults are garbage it is responsible to include a couple lines of CSS to fix them. But that isn’t a reason we need CSS! We could fix the browsers!

                                                                                                                        I think some CSS is super useful for other reasons, but it should always be possible to disable it and get a usable, readable webpage.

                                                                                                                        1. 4

                                                                                                                          Yes, while I support Gemini and related minimalist projects, I think there is also room for a simple web browser that basically just implements reader mode, no CSS or JS at all. It would only be useful for reading articles, not web apps or more complicated interactive websites, but for many use cases that would be more than adequate. Such a browser would be much simpler to implement, and might be possible for an individual to build from scratch, although parsing modern HTML5 is no joke (certainly much more complicated than gemtext).

                                                                                                                          1. 3

Yeah, I do agree with you. But I don’t think the browsers will ever fix that at source. Plus, design is so subjective that one person may find the browser defaults to be just fine, whereas others may loathe them.

                                                                                                                            1. 5

                                                                                                                              But I don’t think the browsers will ever fix that at source.

                                                                                                                              Yes, because that would break websites that use CSS. But if you come from a standpoint where websites wouldn’t use CSS, then browsers would immediately change default styles to be more readable. See the reader mode example used in the article: it doesn’t just make the page unstyled, it also makes it readable, like browsers would do by default if CSS didn’t exist.

                                                                                                                              1. 3

It would be very easy to detect unstyled websites and always put those in reader-mode. There’s a small gotcha if we’d want to allow JS in that reader-mode (to provide a default style for interactive elements), because JS can set CSS, but that’s trivial to work around.

                                                                                                                                Maybe we just need a head Element to instruct the browser to use reader-mode.

                                                                                                                                1. 2

                                                                                                                                  Yes, because that would break websites that use CSS.

It shouldn’t. The browsers don’t all use the same defaults, and their defaults can already be changed to some extent without breaking pages. Webpages with a lot of CSS use a reset so that browser defaults don’t matter. Webpages with only a little responsibly written CSS are already written to consider browser defaults a feature.

We’re not talking about a heavy theme from the browser here, just changing obviously bad defaults. Some browsers already use sans-serif by default; all should use a sensible sans-serif font from the system. Most already have some padding around the body; they just need to make it reasonably sized (I find 5% left and right pretty good, but more than 1 or 2 em for sure). And so on. Maybe a creme or slate background color based on whether the browser is in dark mode or not. This kind of thing.

                                                                                                                                  1. 4

                                                                                                                                    We’re not talking about a heavy theme from the browser here, just changing obviously bad defaults.

                                                                                                                                    Changing the default margin to center text is a big change. It would break a lot of websites.

                                                                                                                                  2. 1

                                                                                                                                    I don’t think it would break anything. The cascade ensures this wouldn’t happen.

As I understand it, it goes browser style > website style > user style, with each in turn taking precedence over the last.

                                                                                                                                    1. 3

                                                                                                                                      I don’t think it would break anything. The cascade ensures this wouldn’t happen.

If every website expects no margin to be set, and you set a default margin, a lot of websites will break. It should be obvious.

                                                                                                                                  3. 4

                                                                                                                                    We’ve all seemed to forget that we used to call web browsers “user agents”. They are an agent of the user and should be something the user controls!

                                                                                                                                    You wrote:

                                                                                                                                    As you can see from the screenshot above, the text spans the entire width of the screen. I also think that the text is too small. Even on my little 13” MacBook Air screen, constantly scanning my eyes from the far left to far right of the screen really strains my eyes.

                                                                                                                                    This is why I think CSS should continue to exist.

                                                                                                                                    In Netscape Navigator 4, I used to be able to set my preferred link color, background/foreground, fonts, etc. I was able to configure my user agent in such a way that it was comfortable for me to use. I’d expect in a world where the browser exists as a true engine of hypertext instead of as some quasi-hypertext/quasi-remote terminal engine, we’d see even deeper customization options than that.

                                                                                                                                    1. 6

                                                                                                                                      You still have a “user agent” and it still is customizable to a pretty high degree.

                                                                                                                                      It just also happens to let web page authors suggest how to display the page, as a default in case you don’t choose to override it. And some people are good at coming up with helpful, useful styling. Some people are less good at it. But the world is richer for the variety they produce, just as the world is richer for the fact that we didn’t all standardize on exactly one font and list of rules for typesetting at the advent of movable type, but instead got a wide variety of books which experimented with different ways of doing things.

And the real logical conclusion of your argument is to require everyone to become as skilled at putting together their own user-specific styles as you are, which would ironically be a terribly disempowering thing for most users, who don’t want to put in that kind of time and effort.

                                                                                                                                      1. 1

                                                                                                                                        I think all modern browsers support user CSS. This takes precedence over any other CSS, so any element that is styled in your user CSS will see that style, even if it is also styled by the site’s CSS. CSS is very powerful and includes things like regex matches, so I can add a little superscript [pdf] next to any links that link to .pdf files and add a health warning next to any links that go to Facebook in my user CSS.

                                                                                                                                        1. 1

                                                                                                                                          Does my browser support user CSS? Certainly! Is that particularly useful in a world where a hyperlink to another site may be represented as an <a> tag, a button, a span/div with an onclick() handler, … … …? Not particularly: I either spend time building my user CSS to fit every site I interact with (hah!), or I use some type of best-effort tooling. And that’s often a huge pain. If you’ve never looked at the hoops Firefox jumps through to generate a ‘reader mode’ page, you should.

                                                                                                                                          A counterfactual history where we didn’t get the ability to drive pixel-perfect design and instead picked up <article> and <menu> and other semantic friends early would look very different: my user agent could allow me to describe how I want to see my data presented, but more importantly, that control would exist over every site I visit.
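
To make that concrete, a user stylesheet in that counterfactual world might look something like this (a rough sketch; the element names are real semantic HTML, the presentation choices are purely personal preference):

    /* One set of rules, applied to every site I visit. */
    article {
      max-width: 40em;      /* a comfortable measure for body text */
      margin: 0 auto;
      line-height: 1.5;
    }

    nav, aside {
      display: none;        /* collapse navigation and sidebars by default */
    }

The point isn’t these particular rules; it’s that they would only need to be written once, because every site would expose the same structure for the user agent to work with.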

                                                                                                                                          1. 1

A counterfactual history where we didn’t get the ability to drive pixel-perfect design and instead picked up <article> and <menu> and other semantic friends early would look very different: my user agent could allow me to describe how I want to see my data presented, but more importantly, that control would exist over every site I visit.

There was a problem with this history (which is what the W3C was pushing with XHTML): semantic markup makes it very easy to separate adverts from content, and a UA will then always make the decision not to show the adverts. When web standards are driven by a company that gets the vast majority of its revenue from adverts, this model could never exist. It’s far better from their perspective if the browser just renders pixels and the difference between ad pixels and content pixels is completely opaque.

                                                                                                                                    2. 2

One of the original intents was that users would provide their own stylesheets. I don’t think that would have been practical even in the 90s, though; sites presume a default stylesheet that may not match your custom reality, nor would sites be likely to use the same selectors or semantic elements for exactly the right purpose.

                                                                                                                                      1. 2

Why wouldn’t that work? I understand that not every single person is going to write their own stylesheet, but OS and browser vendors would likely ship a nice default style and some themes one could choose from.

For people who want more, there would probably be a subreddit with fancy stylesheets to download for free. Browsers could make this simple by adding a button for switching between them.

                                                                                                                                    1. 2

                                                                                                                                      My corollary to this would be: keep your tech stack small and well curated to let you prove or disprove the rule faster.

I like writing little web services in Go. I am not convinced Go is the best language for everything, but having a robust and well-understood stdlib means I don’t have to find dozens of external packages to vet, understand, and glue together. When I hit a bug, I typically don’t have to go looking very far.

                                                                                                                                      1. 5

                                                                                                                                        But I’ll not say using tricks like this is completely wrong. The flawed code is in github.com/philpearl/avro (I’ve not fixed it yet), and that library is riddled with tricks with unsafe. The unsafe tricks reduced data-processing runs of ~5 hours on 60-plus cores with 95% of the time in GC, to ~24 minutes.

                                                                                                                                        Boy, that feels like a red flag – any time I’ve spent that much effort fighting the GC I’m either writing code that overallocates like crazy and just needs profiling, or I’ve done all that and now need to move down the tech stack and start thinking about things like custom allocators and arenas and asking if a GCed language is the right decision.

                                                                                                                                        1. 28

                                                                                                                                          The title of this article is deceiving. I like the twist and the happy ending.

                                                                                                                                          1. 31

The title suggested the lesson that ought to be learned, and I was pleasantly surprised that it actually was learned. This is a big part of my argument for supporting niche platforms: the more niche an open-source platform, the higher the ratio of developers to users. I don’t use Haiku and I will probably never use Haiku. I’d have been very excited by it if it had been in its current state 20 years ago, but now it feels too little, too late. I am, however, super happy to get patches to my projects that add Haiku support because, in my experience, 100% of Haiku users are experienced developers, so the return on investment in terms of valuable contributions relative to the cost of maintaining the Haiku support is probably higher than for any other OS. I’ve had one very subtle bug in one project exposed because it deterministically triggered on Haiku but only happened on Linux under specific and quite rare conditions. We’d probably never have found it on Linux (or, at least, not without hundreds of hours of hunting) and would just have had occasional crashes making everyone unhappy, but on Haiku it triggered reliably enough that an external contributor was able to find the root cause and file a bug report.

                                                                                                                                            1. 11

This reminded me of Theo talking about why OpenBSD keeps old platforms around:

                                                                                                                                              On a regular basis, we find real and serious bugs which affect all platforms, but they are incidentally made visible on one of the platforms we run, following that they are fixed. It is a harsh reality which static and dynamic analysis tools have not yet resolved.