Threads for tonyarkles

  1. 32

    Like many others who went to engineering school in the ‘80s, I appreciate high-fidelity audio gear to the point of being a little bit of an audiophile. Also, as anyone who knew electrical engineers from that era can attest, the amount of bad advice you can find in “audiophile” circles is astounding. And don’t get me wrong, I’ll tell you to your face that popular music sounds better today on vinyl than on CD. What I won’t do is try to convince you that the reason is that digital is an inferior technology incapable of meeting audiophile standards, because that’s not even close to the reason.

    1. 8

      (What’s the real reason? :-))

      1. 69

        As far as your ears are concerned, louder sounds better. CD and digital have far greater fidelity to the original waveform than vinyl does. Vinyl also has less dynamic range, the difference between the loudest sound you can record and the softest sound you can record. When music was regularly sold in both formats in the late eighties, the mastering process for popular music was the same for both. In the early ‘90s people discovered that the common mastering process for CD and LP was leaving a lot of dynamic range unused when the music was pressed onto CDs. As the vinyl LP was falling out of favor, people discovered that if you reduce the dynamic range of the music, you can master it at a higher level, or loudness, on CD without generating distortion.

        To people listening to the music, these louder pressings initially sound better. Every rock and pop artist on the planet wants their song to sound the best when played alongside other music, so artists started asking for this as part of the mastering process. The problem with doing this is that everything begins to get a wall-of-sound feeling to it. By making the soft parts louder and the loud parts softer so you can make the whole thing louder, you take away some of the impact that the music would have had with its original dynamic range.

        When vinyl records started coming back into favor, the music destined for LP was mastered the way it used to be for vinyl back in the ‘80s. If you listen to two versions of a song, one mastered for LP and the other mastered for CD, the LP will sound better in the first few plays before vinyl’s entropic nature ruins it, because the LP version will have more dynamic range. The same is true of two pressings of Pink Floyd’s “Dark Side of the Moon” if you are comparing an early-’80s CD issue to a late-’90s CD reissue. This is really only true of Rock, Pop, and R&B. Classical and Jazz were unaffected by the Loudness War because fans of those genres put fidelity highest in their desired traits.
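
        A toy illustration of that trade-off, with a crude static “compressor” run over a fake quiet-verse/loud-chorus track (nothing like a real mastering chain, but the numbers show the effect):

          import numpy as np

          sr = 44100
          t = np.linspace(0, 1.0, sr, endpoint=False)
          quiet_verse = 0.1 * np.sin(2 * np.pi * 440 * t)
          loud_chorus = 0.9 * np.sin(2 * np.pi * 440 * t)
          song = np.concatenate([quiet_verse, loud_chorus])

          def compress(x, threshold=0.2, ratio=8.0):
              # static compression: above the threshold, level rises 1/ratio as fast
              over = np.abs(x) > threshold
              y = x.copy()
              y[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
              return y

          mastered_loud = compress(song)
          mastered_loud /= np.max(np.abs(mastered_loud))   # push the peak back up to full scale

          def rms_db(x):
              return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

          half = len(song) // 2
          print("original: verse-to-chorus contrast %.1f dB" % (rms_db(song[half:]) - rms_db(song[:half])))
          print("loud master: verse-to-chorus contrast %.1f dB" % (rms_db(mastered_loud[half:]) - rms_db(mastered_loud[:half])))
          print("overall level: %.1f dBFS -> %.1f dBFS" % (rms_db(song), rms_db(mastered_loud)))
          # same peak level, higher overall loudness, much less contrast between
          # the quiet and loud sections (and the clipped-off waveform shape is
          # exactly the kind of distortion a heavy-handed compressor introduces)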

        1. 18

          Summarizing: when you say you prefer vinyl over CD, you’re saying that you prefer ‘80s-style mastering over the overly compressed late-’90s-and-later mastering.

          It’s interesting that the extra headroom on CDs sparked the loudness war, instead of resulting in better dynamics. And now that people expect music to have a certain loudness, I guess we can’t go back.

          Perhaps one day we could get a new wave of artists mastering their 320 kbps MP3s ‘80s-style?

          1. 7

            A loud mix makes sense if you’re listening to music in a noisy environment. (On your commute, say.) But I’d rather have the ability to compress the dynamic range at the time of playing, so I can adjust it to suit the environment.

            1. 1

              I used to have a decent, though inexpensive, stereo system setup. Back when I would sit down just to listen to music, with no other distractions, like the Internet.

              But when was the last time I really sat down to listen to music? For me it is usually in the car, or through a pair of earbuds. Or maybe washing the dishes.

            2. 4

              The extra headroom mostly provided the opportunity, along with the fidelity and lack of physical limitations of a CD: on vinyl, if you try to brickwall the master you end up with unusable media.

              What sparked the loudness war is the usual prisoner’s dilemma, where producers ask for more volume in order to stand out, leading the next producer to do the same, until you end up with tons of compression and no dynamic range left. Radio was a big contributor, as stations tend(ed?) to do peak normalisation[0], so if you make wide use of dynamic range you end up very quiet next to the pieces played before and after.

              Perhaps one day we could get a new wave of artists mastering their 320 kbps MP3s ‘80s-style?

              To an extent it’s already been happening for about a decade: every streaming service does loudness normalisation[1], so by over-compressing you end up with a track that’s no louder than your neighbours, but it clips and sounds dead.

              Lots of “legacy” media companies (software developers, production companies, distributors, …) have also been using loudness normalisation for about a decade following the spread of EBU R 128 (a European recommendation for loudness normalisation), for the same reason.

              [0] where the highest sound of every track is set to the same level

              [1] where the target is a perceived overall loudness for the entire track
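
              To make [0] vs [1] concrete, here is a rough sketch with RMS standing in for loudness (the real measurements use LUFS per EBU R 128 / ITU-R BS.1770, but the contrast is the same):

                import numpy as np

                rng = np.random.default_rng(0)
                quiet = 0.05 * rng.standard_normal(44100)
                loud = 0.8 * rng.standard_normal(44100)
                dynamic_track = np.concatenate([quiet, loud])     # big soft/loud contrast
                brickwalled = 0.8 * np.tanh(5 * dynamic_track)    # same material, heavily squashed

                def peak_normalize(x, target_peak=1.0):           # [0]: align the highest sample
                    return x * (target_peak / np.max(np.abs(x)))

                def loudness_normalize(x, target_rms=0.1):        # [1]: align the overall level
                    return x * (target_rms / np.sqrt(np.mean(x ** 2)))

                for name, track in (("dynamic", dynamic_track), ("brickwalled", brickwalled)):
                    print(name,
                          "RMS after peak-norm: %.2f" % np.sqrt(np.mean(peak_normalize(track) ** 2)),
                          "peak after loudness-norm: %.2f" % np.max(np.abs(loudness_normalize(track))))
                # Under peak normalisation the brickwalled track comes out much hotter (it
                # "wins" on radio); under loudness normalisation both end up equally loud,
                # so the extra compression buys nothing and you just hear the lost dynamics.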

              1. 2

                That’s me. When I buy rock music on LP, I’m purchasing music mastered to the 1980s LP standards. I do that because Rock music works well with the 60 dB or so of dynamic range that vinyl LP offers.

              2. 7

                It really is quite shocking to take a CD mastered in the early 90s and another in the late 90s-early 2000s and play them at the same volume settings.

                1. 4

                  This is an excellent explanation. Being able to explain things clearly without hiding behind technical terms like “compression” is a strong indicator to me that you are a true expert in this field.

                  1. 3

                    For drummers and bassists, “compression” is a well-known term, because compressing dynamic range is almost required in order to record them faithfully. The typical gigging bassist will have a compressor pedal in their effects chain for live performance, too.

                  2. 4

                    I do appreciate it when digital releases are mastered in a way that preserves dynamic range; play one after any typical digital release in the affected genres and it will sound really quiet.

                    Some bands have demonstrated to me that you can be a loud rock band with dynamic range mostly intact.

                  3. 12

                    I think it’s fun to watch the record spin around. :-)

                    1. 2

                      I listen to 90% of stuff on vinyl, and I have no rational explanation beyond yours as to why I like it more than streaming.

                      1. 3

                        I heard on the radio that Metallica are investing in their own vinyl-pressing plant.

                        1. 3

                          I stream way more than I use my turntable, basically for the same reasons @fs111 mentions. But I definitely prefer vinyl because while streaming is pure consumption, vinyl is participatory. I enjoy handling the vinyl and really taking care of it (cleaning it when I get it/before I play it, taking care of the jacket, etc.). It makes me feel like a caretaker of music that’s important to me - a participant in the process, instead of just a consumer.

                          On my phone I listen to music. On my turntable I play music.

                          1. 2

                            I like the physicality of it, too, and I also love the actual artifacts, the records and their sleeves and such.

                          2. 1

                            While I can see the appeal, most of my music consumption happens while working. I would not like getting up constantly to switch records.

                        2. 2

                          Possibly: I’ve read that over time many pop songs get remastered with more and more dynamic range compression. This makes all parts of the song sound similar in loudness, but it also removes some musical features (dynamics) and, depending on the DRC method (fast attack/decay, slow attack/decay, manual envelope adjustment), can introduce audible distortion.

                          Older vinyl and CD releases are from earlier masters, although some records are newly manufactured, so those will be based on newer remasters anyway.

                          Cannot confirm or deny, I don’t buy or listen to pop :/

                          1. 3

                            This is called the loudness war. This site collects the dynamic range for albums: https://dr.loudness-war.info

                          2. 1

                            In addition to the loudness wars people have been talking about, certain technical restrictions limit what can accurately be recorded on vinyl. This leads to a subtle “sound” that people get used to and prefer. This could be reproduced when mastering for digital audio formats, but people either don’t do that processing or “audiophiles” claim that it gets lost in translation somehow.

                        1. 8

                          Started with Vim, went to Emacs, installed evil-mode to get the best of both worlds, and have been using that for the last few years now. Emacs is really a treat.

                          1. 12

                            Right, when people say Emacs is a good operating system, but needs a good editor, the editor they’re looking for is Evil. That said, the vanilla Emacs keybindings have been burned into my muscle memory for decades.

                            1. 7

                              “Emacs does have a good text editor in it; I’ve just decided not to use it.” – Statements dreamed up by the utterly deranged (including myself)

                              1. 3

                                I recently got a hand-me-down Mac after my previous one was stolen and thoroughly destroyed a year ago. I forgot how wonderful it is that all of the standard Emacs key bindings (e.g. C-a = home, C-e = end, etc.) are baked thoroughly into the OS X input handling layer.

                                1. 7

                                  all of the standard Emacs key bindings

                                  Interesting definition of “all”. =)

                                  IMO they implement just enough to be a kind of uncanny valley; it tricks you into thinking you can use your muscle memory, and then every few minutes you try something it doesn’t know about and get the software equivalent of the feeling of stubbing your toe.

                                  1. 3

                                    These shortcuts actually come from a C library called readline, which I think predates Emacs. See man readline.

                                    1. 10

                                      Emacs predates readline by over a decade. GNU Emacs predates readline by several years. Readline supports both vi and emacs-style editing depending on the contents of .inputrc.
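
                                      You can poke at both modes from Python’s binding to the same library (assuming a readline-backed build; macOS pythons linked against libedit behave a bit differently). The equivalent ~/.inputrc line is just “set editing-mode vi”:

                                        import readline

                                        readline.parse_and_bind("set editing-mode emacs")   # C-a, C-e, C-k, ... (the default)
                                        # readline.parse_and_bind("set editing-mode vi")    # modal, vi-style line editing

                                        try:
                                            line = input("readline demo> ")   # editing at this prompt follows the chosen mode
                                            print("you typed:", line)
                                        except EOFError:
                                            pass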

                                      1. 2

                                        This is true. However, Ctrl-e for “end”, ctrl-a for “home”, etc. also pre-date Emacs. They were present in TECO first.

                                  2. 3

                                    I, on the other hand, like the Emacs keystrokes and will continue to be happy Evil exists because the more people use Emacs, the more likely it is that those keystrokes will continue to be available in the context of a good editing environment with a good extension language.

                                1. 9

                                  I’m working on a draft for a blog post about database cryptography, as a follow-up to a footnote on a friend’s blog post.

                                  Database cryptography is hard. The above sketch is not complete and does not address several threats! This article is quite long, so I will not be sharing the fixes.

                                  1. 9

                                    I knew a guy who believed that setting the column encoding to “binary” meant that the column was encrypted and safe for directly storing passwords.

                                    He was a senior developer. He got paid very well while believing this.

                                    1. 7

                                      I wish I was making this up… Back in the day (before JWT was a well-established thing), we had a JSON blob that the client needed to retain and send back to the server for reasons, and I pointed out that the JSON had data in it that was a) slightly sensitive and b) could be modified by the untrusted client to do bad things. The following sprint he stated that he’d addressed the problem by encrypting it. I had a look and it was literally ROT-13 “encrypted”. When I raised my concerns… his response was “you only know that it’s ROT-13 encrypted because you looked at the code. No one’s going to guess that.”
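
                                      For anyone who hasn’t had the pleasure, this is the entirety of what that “encryption” buys you (the blob below is made up, obviously):

                                        import codecs
                                        import json

                                        # No key, digits and punctuation pass through untouched,
                                        # and applying ROT-13 twice is the identity.
                                        blob = json.dumps({"user_id": 42, "is_admin": False})
                                        scrambled = codecs.encode(blob, "rot_13")

                                        print(scrambled)                                   # still obviously JSON-shaped
                                        print(codecs.decode(scrambled, "rot_13"))          # anyone can undo it, no secrets required
                                        assert codecs.encode(scrambled, "rot_13") == blob  # ROT-13 is its own inverse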

                                      1. 6

                                        This kind of thing is why I am glad I always work with a red team when building anything I want to make security claims about. They may not find everything I’ve done wrong, but they at least find anything I’ve done embarrassingly wrong.

                                        1. 3

                                          That’s the right mindset! I mean, when someone’s learning or has a knowledge gap, I’m more than happy to help out. When they double down on being wrong… that’s a frustration I’m not great at dealing with.

                                          1. 3

                                            “Tell me more. When _____ why do you believe _____?” is a good tool to use.

                                        2. 5

                                          Lmao rot13 is literally one of the first cryptography challenges you encounter when learning

                                          1. 3

                                            I am no cryptographer, but I could probably break ROT-13 using just pen and paper.

                                    1. 13

                                      This was the perfect reminder for starting my Friday morning. I’m going through a similar situation right now. Historically I’ve jumped in and pulled the heroic “fix it at the 11th hour” bullshit that the article talks about at the end after having the system fail in the ways I’ve predicted it would; this time around we’re communicating clear requirements to the other team. While I’ll still likely be keeping an eye on things and preparing for that 11th hour firefighting, I’m going to be taking a more passive approach in the lead-up and let the root causes see the light of day before just fixing it.

                                      1. 5

                                        Yeah. You will destroy yourself if you try to fix everything on overtime. When things stay broken for a while, people suddenly get some appreciation for all the systems that “magically” just work all the time. Suddenly there’s talk about scheduled maintenance and which resources are assigned to keep things running.

                                        1. 3

                                          On my current project, working with an external team, I’ve been taking a sterner approach when things fall apart: writing up very detailed reports on everything that went wrong leading up to the failure, and the action items that need to be taken so that we never have to deal with it again. They’ve taken to those much better than to the up-front stuff, and things have actually gotten better.

                                        1. 18

                                          I’ve been doing some back-of-the-napkin math on my company’s cloud transition, and containers are incredibly expensive. (An order of magnitude more expensive than virtual servers!)

                                          The joke of “Kubernetes was the Greek god of spending money on cloud services” is pretty accurate.

                                          On the other hand, increasing our headcount is more expensive than containers. We actually save money this way. And we’re unlikely to grow our headcount and business enough that switching to less expensive infrastructure would be cheaper in the long run.

                                          1. 12

                                            … I’m confused, how is “adopt containers in a ‘cloud’” an alternative to “hire staff”?

                                            1. 7

                                              Depending on scale, you need to have skills and hours for:

                                              cloud containers:

                                              • containerization
                                              • orchestration
                                              • cloud networking (high level)
                                              • cloud security
                                              • access management

                                              physical hardware in a datacenter:

                                              • hardware build/buy, deploy, monitoring, maintenance
                                              • network setup, deploy, management, monitoring (low-level)
                                              • security
                                              • access management

                                              If you think that one of these requires skills and hours you don’t currently have, while you do have them for the other, then going the first route means you need to hire people.

                                              1. 9

                                                Ah yes, the old “it’s the cloud or break ground on your own datacenter, there’s no in between” trope.

                                                1. 20

                                                  That’s uncharitable. Everything I attributed to “physical hardware in a datacenter” applies equally to renting rackspace from an existing colo provider… which is what my employer does.

                                                  You can also lease servers from many datacenters, pay them for deployment, and pay them for networking.

                                                  1. 11

                                                    It took me a while to figure out what parent is getting at but I think it’s a matter of walking a few miles in young people’s shoes. All this is happening in 2023, not 2003. Lots of people who are now in e.g. their late twenties started their careers at a time when deploying containers to the cloud was already the norm. They didn’t migrate from all that stuff in the second list to all that stuff in the first list, they learned all that stuff in the first list as the norm and maybe learned a little about the stuff in the second part in school. And lots of people who are past their twenties haven’t done all that stuff in the second list in like ten years. Hell, I could write pf and iptables rulesets without looking at the man pages once – now I’m dead without Google and I woke up to find nftables is a thing, like, years after it was merged.

                                                    It’s not a dying art (infrastructure companies need staff, too!) but it’s a set of skills that software companies haven’t really dealt with in a while.

                                                    1. 2

                                                      I’m actually more skilled in running servers than containers. My company is transitioning to the cloud and I’m getting the crash course on The New Way. Docker and Dockerfiles are currently the bane of my existence.

                                                      But I can’t ignore that containers allow a level of automation that’s difficult to achieve with virtual or physical servers. Monitoring is built in. Monit or systemd configs aren’t needed anymore. They’ve been replaced by various AWS services.

                                                      And frankly, we can push the creation of Docker images down the stack to experienced developers and keep operations headcount lower.

                                                      It’s more efficient to hire a developer like me who works part time on devops than to hire a developer and a devops person.

                                                      1. 1

                                                        I’m 100% not an infra guy so I’m probably way off, but my (possibly incorrect) expectation is that a company that’s running cloud-hosted services deployed in containers & co. at the moment would also deploy them in containers in a non-cloud infrastructure. I mean, regardless of whether that’s a good idea or not in technical terms (which I suspect it is, but I have no idea), it’s probably the only viable one, since hardly anything can be built and run in another environment today. IMHO you’d need people doing devops either way. Tooling may be “just” a means to an end but it’s inescapable, and we’re stuck with the ones we have no matter what we run them on.

                                                        That’s probably one reason why gains like the ones the author of the article wrote about are currently accessible only to companies running large enough and diverse enough arrays of services, who probably need, if not super-specialised, at least dedicated staff to manage their cloud infrastructure. In that case, you’re ultimately shifting staff from one infrastructure team to another, so barring some initial investments (e.g. maybe you need to hire/contract a network infra expert, and do a lot of one-off work like buy and ship cabinets and the like), it’s mostly a matter of infrastructure operation costs.

                                                        Smaller shops, or at least shops with less diverse and/or lighter infrastructure requirements that can be (mostly?) added to the developers’ plates, aren’t quite in the same position. In their case, owning infrastructure (again) probably translates into having a full-sized, competent IT department again to keep the wheels spinning on the hardware that developers deploy their containers on. So they’d be hiring staff again and… yeah.

                                                    2. 1

                                                      I mean, there are other options where you rent VMs or even physical servers, but those require additional skills as well that you have to hire for. If you’re alluding to a PaaS then you won’t need additional headcount, but you may well be spending more for your resources than you would in the cloud.

                                                      1. 3

                                                        I’m coming at this with quite a bit of grey in my beard, but it makes me profoundly uncomfortable to think that the folks who are responsible for all of the cloud bits that “dsr” outlines would be uncomfortable handling the physical pieces. I get that it’s a thing, but having started from the other side (low-level), the idea that people are orchestrating huge networks without having ever configured subnets on e.g. an L3 switch… that freaks me out.

                                                        1. 4

                                                          Fun, isn’t it? I don’t (usually) feel like I’ve been at this that long, but a lot of fundamentals that I’d have expected as table stakes have been entirely abstracted away or simplified so much that people starting today just aren’t going to need to know them. (Or, if they do, are going to need a big crash course…)

                                                          OTOH I spend a lot of my time realizing that there’s yet another new thing I need to learn to stay current…

                                                          1. 3

                                                            I feel attacked xD

                                                          More seriously, I love programming, but years of family and friends asking me to help with their network issues over the phone or by text have completely killed my will to do this kind of configuration.

                                                          The exception being Terraform: I was pleasantly surprised by how satisfying it is to declare what you want and be able to inspect the plan before executing it. But that’s still pretty high-level I guess…

                                                        2. 1

                                                        I think even when colocating, you still need some extra level of expertise. There are definitely more people who can get by with cloud hosting who would be overwhelmed by the issues that come with managing the hardware.

                                                          I think that if you have people in a team with that skillset, though, then it’s a different calculus. But it’s hard to overstate how little you have to think about the hardware with cloud setups. I mean you gotta decide on some specs but barely. And at least in theory it lets you ignore a level of the stack somewhat.

                                                          Most companies are filled with people who are merely alright at their jobs, and so when introducing a new set of problems you’re looking at pulling in new people and signing up for a new class of potential problems.

                                                      2. 2

                                                  You need slightly fewer people if you don’t have servers (virtual or otherwise) to monitor and maintain.

                                                        As annoying as I’m finding The Cloud, containers natively support automation in a way servers do not. Linux automation isn’t integrated, it’s bolted on after the fact.

                                                        It’s easy to mistake something you’re familiar with as being simpler than something you aren’t.

                                                    1. 2

                                                      Very interesting even if I don’t use Windows: I suspect similar stuff is going on in modern Linux desktops but I would not have imagined such a sequence of events.

                                                      1. 2

                                                        My laptop (maybe other Ubuntu 20.04 users too?) has an issue where occasionally coming out of Suspend will display the password prompt but not accept any input for a while. It’s rare enough to be an annoyance and not a show stopper, but frequent enough that this blog post immediately made me think of it. I wonder what kinds of tools Linux has to dig into something like this.

                                                      1. 1

                                                        Key pairs are all right but what about certificate-based auth? Anybody use that? I set up Teleport on my home cluster one time, it was smooth but (I think?) required all sessions be proxied through a public central server. Wonder whether there’s a way to combine its cert management capabilities with point-to-point SSH sessions like Tailscale enables.

                                                        Edit: reading the docs I think proxying is only necessary for reverse tunneling when nodes are behind NAT or firewall, which Tailscale will take care of. Maybe time to set this up again and check; the free VM you can get on Oracle cloud seems like a decent option for hosting the public Teleport cluster management server. Heck, maybe I’ll set up my own Tailscale cluster management server on the same box!

                                                        1. 2

                                                          I set up certificate-based auth for a team a couple of years ago and it generally worked pretty well. We didn’t have a great system in place to manage revocation lists, but had a good-enough workaround. Before doing this, the situation was:

                                                          • Dev systems generally just had a common username and password that everyone knew
                                                          • Prod systems had strong passwords that no one knew and manually-managed authorized_keys files

                                                          The issue that remained when we moved to certs without a good revocation list strategy was that, upon termination, we didn’t have a good way to revoke the ex-employee’s long-term dev certificate. We punted on solving that problem in the short-term by justifying that it was no worse of a situation than having to roll passwords on the dev systems every time someone quit.

                                                          On the prod side, devs very rarely needed access to any prod infrastructure, but they would occasionally need it. Certificates were awesome for that; we could issue them a certificate that only gave them access to a particular set of prod machines for a very limited timespan (e.g. 24 hours). No passwords ever had to be revealed to the devs, nor did we have to worry about accidentally leaving something in the authorized_keys files down the road.
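
                                                            For anyone curious, stock OpenSSH can do the whole thing; roughly something like this (paths, identity, and principal names below are invented for illustration):

                                                              import subprocess

                                                              # sign a dev's existing public key with the CA, valid for 24 hours
                                                              subprocess.run(
                                                                  [
                                                                      "ssh-keygen",
                                                                      "-s", "/secure/prod_ca",    # CA private key that signs the cert
                                                                      "-I", "jdoe-prod-access",   # key identity, shows up in sshd's logs
                                                                      "-n", "deploy",             # principal(s) the cert is valid for
                                                                      "-V", "+24h",               # validity window: now until 24 hours from now
                                                                      "/tmp/jdoe_key.pub",        # the dev's public key being certified
                                                                  ],
                                                                  check=True,
                                                              )
                                                              # Writes /tmp/jdoe_key-cert.pub. Prod hosts only need TrustedUserCAKeys
                                                              # pointing at the CA public key; no per-user authorized_keys entries,
                                                              # and nothing to clean up after the 24 hours are over.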

                                                          It wasn’t perfect, but it worked quite a bit better than how it had been running previously. I would have likely gotten a revocation solution put together, but I ended up leaving the company before then.

                                                          1. 2

                                                            The issue that remained when we moved to certs without a good revocation list strategy was that, upon termination, we didn’t have a good way to revoke the ex-employee’s long-term dev certificate…. I would have likely gotten a revocation solution put together, but I ended up leaving the company before then.

                                                            Ironically, you could still go back and fix that for them ;)

                                                            1. 2

                                                              Bahahahaha, that’s probably true! I didn’t take my SSH key with me when I left, but probably have a copy of the signing root certificate somewhere…

                                                        1. 2

                                                            Unless Twitter requires manual interventions to run (imagine some guys turning cranks all day long :)), why exactly would it go down?

                                                          1. 15

                                                            Eventually, they will have an incident and no one remaining on staff will know how to remediate it, so it will last for a long time until they figure it out. Hopefully it won’t last as long as Atlassian’s outage!

                                                            1. 15

                                                                Or everyone remaining on staff will know how to fix it but they will simply fall behind the pace. 12-hour days are not sustainable, and eventually people will be ill more often and make poorer decisions due to fatigue. This post described the automation as clearing the way to spend most of their time on improvements, cost savings, etc. If you only spent 26% of your time putting out fires and then lost 75% of your staff, well, now you’re 1% underwater indefinitely: the remaining quarter of your capacity doesn’t even cover the firefighting alone (and that completely ignores the mismatch between when people work best and when incidents occur).

                                                              1. 6

                                                                Even worse - things that would raise warnings and get addressed before they’re problems may not get addressed in time if the staffing cuts were too deep.

                                                              2. 8

                                                                That’s how all distributed systems work – you need people turning cranks all day long :) It gets automated over time, as the blog post describes, but it’s still there.

                                                                That was my experience at Google. I haven’t read this book but I think it describes a lot of that: https://sre.google/sre-book/table-of-contents/

                                                                That is, if such work didn’t exist, then Google wouldn’t have invented the job title “SRE” some time around 2003. Obviously people were doing similar work before Google existed, but that’s the term that Twitter and other companies now use (in the title of this blog post).

                                                                (Fun fact: while I was there, SREs started to be compensated as much as or more than Software Engineers. That makes sense to me given the expertise/skills involved, but it was a cultural change. Although I think it shifted again once they split SRE into 2 kinds of roles – SRE-SWE and SRE-SysAdmin.)


                                                                It would be great if we had strong abstractions that reduce the amount of manual work, but we don’t. We have ad hoc automation (which isn’t all bad).

                                                                Actually Twitter/Google are better than most web sites. For example, my bank’s web site seems to go down on Saturday nights now and then. I think they are doing database work then, or maybe hardware upgrades.

                                                                If there was nobody to do that maintenance, then eventually the site would go down permanently. User growth, hardware failures (common at scale), newly discovered security issues, and auth for external services (SSL certs) are some reasons for “entropy”. (Code changes are the biggest one, but let’s assume here that they froze the code, which isn’t quite true.)


                                                                That’s not to say that Twitter/Google can’t run with a small fraction of the employees they have. There is for sure a lot of bloat in code and processes.

                                                                However I will also note that SREs/operations became the most numerous type of employee at Google. I think there were something like 20K-40K employees under Hoezle/Treynor when I left 6+ years ago, could easily be double that now. They outnumbered software engineers. I think that points to a big problem with the way we build distributed systems, but that’s a different discussion.

                                                                1. 7

                                                                  Yeah, ngl, the blog post rubbed me the wrong way. That tasks are running is step 1 of the operational ladder. Tasks running and spreading is step 2. But after that, there is so much work for SRE to do. Trivial example: there’s a zero-day that your security team says is being actively exploited right now. Who is the person who knows how to get that patched? How many repos does it affect? Who knows how to override all deployment checks for all the production services that are being hit and push immediately? This isn’t hypothetical, there are plenty of state-sponsored actors who would love to do this.

                                                                  I rather hope the author is a junior SRE.

                                                                  1. 3

                                                                    I thought it was a fine blog post – I don’t recall that he claimed any particular expertise, just saying what he did on the cache team

                                                                    Obviously there are other facets to keeping Twitter up

                                                                  2. 4

                                                                    For example, my bank’s web site seems to go down on Saturday nights now and then. I think they are doing database work then, or maybe hardware upgrades.

                                                                    IIUC, banks do periodic batch jobs to synchronize their ledgers with other banks. See https://en.wikipedia.org/wiki/Automated_clearing_house.

                                                                    1. 3

                                                                      I think it’s an engineering decision. Do you have people to throw at the gears? Then you can use the system that needs humans to occasionally jump in, and get better outcomes. Do you lack people? Then you’re going to need simpler systems that rarely need a human, and you won’t always get the best possible outcomes that way.

                                                                      1. 2

                                                                        This is sort of a tangent, but part of my complaint is actually around personal enjoyment … I just want to build things and have them be up reliably. I don’t want to beg people to maintain them for me

                                                                        As mentioned, SREs were always in demand (and I’m sure still are), and it was political to get those resources

                                                                        There are A LOT of things that can be simplified by not having production gatekeepers, especially for smaller services

                                                                        Basically I’d want something like App Engine / Heroku, but more flexible, but that didn’t exist at Google. (It’s a hard problem, beyond the state of the art at the time.)

                                                                        At Twitter/Google scale you’re always going to need SREs, but I’d claim that you don’t need 20K or 40K of them!

                                                                        1. 1

                                                                          My personal infrastructure and approach around software is exactly this. I want, and have, some nice things. The ones I need to maintain the least are immutable – if they break I reboot or relaunch (and sometimes that’s automated) and we’re back in business.

                                                                          I need to know basically what my infrastructure looks like. Most companies, if they don’t have engineers available, COULD have infrastructure that doesn’t require you to cast humans upon the gears of progress.

                                                                          But in e.g. Google’s case, their engineering constraints include “We’ll always have as many bright people to throw on the gears as we want.”

                                                                          1. 1

                                                                            Basically I’d want something like App Engine / Heroku, but more flexible, but that didn’t exist at Google.

                                                                            I think about this a lot. We run on EC2 at $work, but I often daydream about running on Heroku. Yes it’s far more constrained, but that has benefits too - if we ran on Heroku we’d get autoscaling (our current project), a great deploy pipeline with fast reversion capabilities (also a recentish project), and all sorts of other stuff “for free”. Plus Heroku would help us with application-level stuff, like where we get our Python interpreter from and managing its security updates. On EC2, and really any AWS service, we have to build all this ourselves. Yes, AWS gives us the managed services to do it with, but fundamentally we’re still the ones wiring it up. I suspect there’s an inherent tradeoff between this level of optimization and the flexibility you seek.

                                                                            Heroku is Ruby on Rails for infrastructure. Highly opinionated; convention over configuration over code.

                                                                            At Twitter/Google scale you’re always going to need SREs, but I’d claim that you don’t need 20K or 40K of them!

                                                                            Part of what I’m describing above is basically about economies of scale working better because more stuff is the same. I thought things like Borg and gRPC load balancing were supposed to help with this at Google though?

                                                                      2. 2
                                                                        1. Random failures that aren’t addressed
                                                                        2. Code and config changes (which are still happening, to some extent)

                                                                        It can coast for a long time! But eventually it will run into a rock because no one is there to course-correct. Or bills stop getting paid…

                                                                        1. 1

                                                                          I don’t have a citation for this but the vast majority of outages I’ve personally had to deal with fit into two bins as far as root causes go:

                                                                          • resource exhaustion (full disks, memory leak slowly eating all the RAM, etc)
                                                                          • human-caused (eg a buggy deployment)

                                                                          Because of the mass firing and exodus, as well as the alleged code freeze, the second category of downtime has likely been mostly eliminated in the short term and the system is probably mostly more stable than usual. Temporarily, of course, because of all of the implicit knowledge that walked out the doors recently. Once new code is being deployed by a small subset of people who know the quirks, I’d expect things to get rough for a while.

                                                                          1. 2

                                                                            You’re assuming that fewer people means fewer mistakes.

                                                                              In my experience “bad” deployments are much less about someone constantly pumping out code with the same number of bugs per deployment, and much more about a deployment breaking how other systems interact with the changed system.

                                                                            In addition fewer people under more stress, with fewer colleagues to put their heads together with, is likely to lead to more bugs per deployment.

                                                                            1. 1

                                                                              Not at all! More that… bad deployments are generally followed up with a fix shortly afterwards. Once you’ve got the system up and running in a good state, not touching it at all is generally going to be more stable than doing another deployment with new features that have potential for their own bugs. You might have deployed “point bugs” where some feature doesn’t work quite right, but they’re unlikely to be showstoppers (because the showstoppers would have been fixed immediately and redeployed)

                                                                        1. 6

                                                                          I recently started running into issues like this at work. There were two touch-screen laptops (Lenovo Yogas) that were part of a test harness and they would get phantom taps all over the screen sometimes. Display would go black occasionally and come back. There was a microcontroller that had run its firmware flawlessly that started having HardFaults (which is basically a segfault while handling a segfault) with a blown up stack and bizarre values in the fault registers. Finally I started realizing that this all seemed to start happening at the same time and suspected power. A regular $10 hardware store outlet tester didn’t indicate any issue, but with a multimeter the outlets had a measurable ~20V between the ground and neutral wires and the expected 120V between neutral and hot. This strongly implies that the building ground had become disconnected somewhere, probably at the breaker panel.

                                                                          This was all happening while we were getting ready to move anyway, so it’s going to remain a mystery forever. Taking the gear home (and then to the new shop) made all of the issues completely vanish.

                                                                          1. 27

                                                                              Author here. Containers always seemed a little magical to me. So I dug into how they work and then built a “container runtime” that only uses the chroot (“change root”) system call, which has been in UNIX since the ‘70s.

                                                                              This was not only fun, but took away a bit of the magic, so I could better understand what’s going on when I use real containers.
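
                                                                              If you just want the shape of it, a minimal sketch (not the code from the article; it needs root, plus a ./rootfs directory holding a small filesystem with its own /bin/sh) looks something like this:

                                                                                import os
                                                                                import sys

                                                                                # Bare-bones version of the idea: no namespaces, no cgroups, just chroot.
                                                                                def run_in_chroot(new_root, argv):
                                                                                    pid = os.fork()
                                                                                    if pid == 0:              # child: jail ourselves, then become the program
                                                                                        os.chroot(new_root)
                                                                                        os.chdir("/")         # don't leave the cwd outside the new root
                                                                                        os.execv(argv[0], argv)
                                                                                    _, status = os.waitpid(pid, 0)   # parent: wait for the "container" to exit
                                                                                    return os.waitstatus_to_exitcode(status)

                                                                                if __name__ == "__main__":
                                                                                    sys.exit(run_in_chroot("./rootfs", ["/bin/sh"]))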

                                                                            Let me know what you think!

                                                                            Edit:

                                                                              Fun fact: In my first draft of the article, I had this idea that it would be about time travel. If I went back in time, how far back could I go and still implement something like containers? That’s why I focused on just using chroot, because it’s so old.

                                                                            If people like this type of stuff, I’ll do namespaces next but pick a less inflammatory name like “How X helped me understand containers.”

                                                                            1. 6

                                                                                Edit (again): Ignore me, I’m old, and when I was administering a big FreeBSD shared hosting platform in 2005 a chrooted jail was a very specific thing (it was a chrooted jail!). It’s clear reading around that the term has taken on a broader meaning that just refers to chroot. My apologies to the author, this appears to be the accepted use.

                                                                                It feels like you might be misunderstanding the term “chrooted jail”, which refers to the combination of chroot with FreeBSD’s jail. The reason I mention that is because in the footnote you say cgroups offer more protections than chroot, but that’s also the point of jail. Anyway this part confused me 😳

                                                                              Edit:

                                                                                On the whole I liked the post as a kind of dive into a Linux syscall and what you can do with it. However I’m a little bit concerned about simplifying the idea of a container down to a chroot. When I think of containers I explicitly think of them as a binding between a kind of chroot and some process controls that enforce behavior, whether that’s BSD’s chrooted jails, illumos zones, or Linux’s chroot and cgroups.

                                                                              1. 4

                                                                                  I had the same initial reaction, and I thought that must be wrong, Linux does not have jails. Then I realized, of course, the reason is simply that some Linux people are not aware of jails at all.

                                                                                1. 8

                                                                                  There are dozens of us!

                                                                                2. 2

                                                                                  Oh, interesting.

                                                                                    I hadn’t seen it used to refer to BSD jails, only to using chroot on Linux. Here is an example:

                                                                                  A chroot jail is a way to isolate a process and its children from the rest of the system. It should only be used for processes that don’t run as root, as root users can break out of the jail very easily.

                                                                                  The idea is that you create a directory tree where you copy or link in all the system files needed for a process to run. You then use the chroot() system call to change the root directory to be at the base of this new tree and start the process running in that chroot’d environment.

                                                                                  https://unix.stackexchange.com/questions/105/chroot-jail-what-is-it-and-how-do-i-use-it

                                                                                  Here is another

                                                                                  And you can find many explanations that seem to call the changed root a ‘chroot jail’.

                                                                                  1. 4

                                                                                    I updated my first comment, a quick search shows that you are very much using this term how folks expect. Unfortunately I worry this might be a case of getting old.

                                                                                    1. 2

                                                                                        Oh, that makes me very sad. That seems like it has got to be a kind of false appropriation of one community’s terminology into another, right? 😰

                                                                                      1. 3

                                                                                        I’m not sure what the exact chronology of FreeBSD jails is, but I was definitely calling chroot environments “chroot jails” in the late 90s/early 00s. So maybe you’re both old and young 😁

                                                                                        Edit: neck-and-neck! Looks like they were released in March 2000

                                                                                        1. 2

                                                                                            Jails landed in FreeBSD in ‘99. The place where I was using them opened, I think, in 2000-2001. Now I don’t believe anything about my own memory. Except that, at least in the circles I was running in, a chrooted jail meant something different than just being chrooted haha.

                                                                                          1. 6

                                                                                            https://www.cheswick.com/ches/papers/berferd.pdf This paper was written ~ 1992

                                                                                            On 7 January 1991 a cracker, believing he had discovered the famous sendmail DEBUG hole in our Internet gateway machine, attempted to obtain a copy of our password file. I sent him one. For several months we led this cracker on a merry chase in order to trace his location and learn his techniques.

                                                                                            This paper is a chronicle of the cracker’s “successes” and disappointments, the bait and traps used to lure and detect him, and the chroot “Jail” we built to watch his activities

                                                                                            I would guess (based on no evidence at all) that the name “jail” for the *BSD feature was probably inspired by existing usage. My recollection of the 1990s was that it was fairly common to refer to setting up chroot jails for things like anonymous ftp servers

                                                                                            1. 2

                                                                                                Nice! So the BSD jail would have been like “now you can REALLY jail your jail” :-D

                                                                                  2. 3

                                                                                    sailor[0] and captain[1] implement pseudo-containers for NetBSD using chroot.

                                                                                    [0] https://gitlab.com/iMil/sailor [1] https://gitlab.com/jusavard/captain/

                                                                                    1. 2

                                                                                      This is a really cool article, thanks for writing it. Do you have thoughts on newer system calls and how they can be used to improve isolation? I’m thinking of pledge specifically from here but I know there are a ton of others as well.

                                                                                      1. 1

                                                                                        Thanks! I really don’t know much about pledge, but I think Andreas Kling has some videos on adopting it for Serenity that have been on my to-watch list.

                                                                                        https://awesomekling.github.io/pledge-and-unveil-in-SerenityOS/

                                                                                      2. 2

                                                                                        The difference between chroot and containers was the marketing splash, but that marketing spend could’ve been a dud. The tooling and arguably saner defaults around Docker were what sold it. chroot jails were the province of the FreeBSD folks (and maybe Solaris), and Linux only had chroot, with fewer protections. chroot ended up being known more as a security flaw (due to misuse) than as a feature: https://cwe.mitre.org/data/definitions/243.html but Docker et al. have kept enough sane defaults to make container escapes conference-talk fodder rather than part of the large-scale malicious actor toolkit. Small-scale state actors, who knows.

                                                                                      1. 2

                                                                                        A little late to the party here, but DOS World magazine used to do this all the time to “distribute” tiny utilities. See page 13 of this issue for an example: https://archive.org/details/dosworld020199503/page/n13/mode/2up

                                                                                        Trying to reverse engineer those little things really got me interested in how everything actually worked under the hood.

                                                                                        1. 7

                                                                                          This seems to be the same algorithm as Shamir’s Secret Sharing.
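
                                                                                        For anyone who hasn’t run into it, the whole scheme is small enough to sketch out (toy parameters and no side-channel care, so not production crypto):

                                                                                          import random

                                                                                          P = 2**61 - 1  # prime modulus; a real implementation would use a larger field

                                                                                          def make_shares(secret, k, n):
                                                                                              # random polynomial of degree k-1 whose constant term is the secret
                                                                                              coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
                                                                                              def f(x):
                                                                                                  return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
                                                                                              return [(x, f(x)) for x in range(1, n + 1)]

                                                                                          def recover(shares):
                                                                                              # Lagrange interpolation at x = 0 recovers the constant term
                                                                                              secret = 0
                                                                                              for i, (xi, yi) in enumerate(shares):
                                                                                                  num, den = 1, 1
                                                                                                  for j, (xj, _) in enumerate(shares):
                                                                                                      if i != j:
                                                                                                          num = num * (-xj) % P
                                                                                                          den = den * (xi - xj) % P
                                                                                                  secret = (secret + yi * num * pow(den, -1, P)) % P
                                                                                              return secret

                                                                                          shares = make_shares(secret=123456789, k=3, n=5)
                                                                                          print(recover(shares[:3]))                # any 3 of the 5 shares reconstruct the secret
                                                                                          print(recover(random.sample(shares, 3)))  # order and choice of shares don't matter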

                                                                                          1. 2

                                                                                            That’s a fantastic observation! Connections like that tickle my brain in the most wonderful way, so thank you for that!

                                                                                          1. 3

                                                                                            I find the “S-Expressions enable macros” take fairly unconvincing. Parsing a language is generally not difficult (shoo, C++ and YAML), and working with an AST is just like any other data structure… Much of macro usage in Lisps also takes the “quasiquote and unquote” form which doesn’t depend on syntax (or “lack thereof”) nearly as much:

                                                                                            (defmacro do-and-release [block releaser]
                                                                                              ~(let [v ,block]
                                                                                                  (if v (,releaser v))))
                                                                                            

                                                                                            vs.

                                                                                            macro do-and-release (block, releaser) {
                                                                                              ~{
                                                                                                let v = ,block;
                                                                                                if (v) {
                                                                                                  (,releaser)(v);
                                                                                                }
                                                                                              }
                                                                                            }
                                                                                            

                                                                                            Lisp macros are better! But not because of the syntax, but for other reasons:

                                                                                            • Full-powered macros can be embedded right next to the code that uses them without ceremony: no code-generation, janky declarative macro system, or separate compilation unit required
                                                                                            • Deep identification with the language: everyone uses them, all the tools support them well
                                                                                            • Language is interpreted so the macros can just be interpreted, instead of needing a slow compilation process
                                                                                            • etc.
                                                                                            1. 3

                                                                                              I’m both with you and not :).

                                                                                              Elixir is a great example of a language without s-expressions that handles macros quite beautifully. See: https://elixir-lang.org/getting-started/meta/macros.html

                                                                                              The beauty of macros-as-s-exps though is that you don’t have to do anything special to walk through the passed expression and manipulate it. The Elixir macro page shows this example:

                                                                                              {:if, [],
                                                                                               [{:!, [], [true]},
                                                                                                [do: {{:., [], [{:__aliases__, [], [:IO]}, :puts]}, [],
                                                                                                      ["this should never be printed"]}]]}
                                                                                              

                                                                                              That is the example output of this macro:

                                                                                              defmacro macro_unless(clause, do: expression) do
                                                                                                quote do
                                                                                                  if(!unquote(clause), do: unquote(expression))
                                                                                                end
                                                                                              end
                                                                                              

                                                                                              When called like this:

                                                                                              Unless.macro_unless true, do: IO.puts "this should never be printed"
                                                                                              

                                                                                              It explains that this is, in practice, what the macro receives as input:

                                                                                              macro_unless(true, [do: {{:., [], [{:__aliases__, [alias: false], [:IO]}, :puts]}, [], ["this should never be printed"]}])
                                                                                              

                                                                                              Once you’re familiar with how it all works, there’s no real issue. The beauty of Lisp macros, though, is that the data structure your macro gets to process is exactly what is written in your code. Because the parse tree maps virtually 1:1 onto the source representation, you don’t have to grok and manipulate an intermediate AST form; you get to play with the code exactly the way it appears on the page.

                                                                                            1. 1

                                                                                              No clue if the author is on here or not but I’d like to thank them for the systemd info about the sleep.conf file. I went through a different post the other day to configure hibernation but it ended at “sudo systemctl hibernate” without any instructions on how to configure hibernation to happen automatically when you close the lid or press the power button.
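
                                                                                              (In case it helps anyone else who lands here: the lid-switch and power-button behaviour lives in logind.conf rather than sleep.conf. A minimal sketch, assuming a systemd-based distro; restart systemd-logind or reboot for it to take effect:)

                                                                                              # /etc/systemd/logind.conf (or a drop-in under /etc/systemd/logind.conf.d/)
                                                                                              [Login]
                                                                                              HandleLidSwitch=hibernate
                                                                                              HandlePowerKey=hibernate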

                                                                                              1. 11

                                                                                                I find it funny RST (restructured text) is never mentioned in these debates.

                                                                                                1. 10

                                                                                                  On the contrary, the only time I ever see it mentioned is precisely in these debates.

                                                                                                  1. 4

                                                                                                    RST has its problems (terrible headers, no nested inline markup), but the extensibility you get is just wonderful.

                                                                                                    1. 2

                                                                                                      Yes, glad someone mentioned the headers.

                                                                                                      A lot of python stuff still uses RST.

                                                                                                      1. 1

                                                                                                        I don’t really care about extensibility if it means every time I want an in-line code block with part or all of it linked to another document I need to write my own role. Not supporting nested in-line markup is just brain-dead.

                                                                                                      2. 3

                                                                                                        It was a more robust and better-designed option, retaining essentially the same mindset as Markdown. It is unfortunate that Markdown won the popularity contest. But marketing and hype dictated the outcome.

                                                                                                        1. 8

                                                                                                          But marketing and hype dictated the outcome.

                                                                                                          It’s funny, but OG Markdown was just a dude with a then-popular blog and a whole mess of Perl: https://daringfireball.net/projects/markdown/

                                                                                                          Markdown is a text-to-HTML conversion tool for web writers.

                                                                                                          The overriding design goal for Markdown’s formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it’s been marked up with tags or formatting instructions. While Markdown’s syntax has been influenced by several existing text-to-HTML filters, the single biggest source of inspiration for Markdown’s syntax is the format of plain text email.

                                                                                                          (my emphasis)

                                                                                                          The list of filters (from https://daringfireball.net/projects/markdown/syntax#philosophy, links from the original): Setext, atx, Textile, reStructuredText, Grutatext, and EtText.

                                                                                                          Gruber never intended MD to become the be-all and end-all of authoring. In fact I think that’s why he didn’t let CommonMark use the Markdown name.

                                                                                                          1. 2

                                                                                                            Gruber never intended MD to become the be-all and end-all of authoring. In fact I think that’s why he didn’t let CommonMark use the Markdown name.

                                                                                                            Yes, and also because he didn’t want Markdown to become a single standard, which is why he has no problem with “GitHub-flavored” Markdown but didn’t want CommonMark using the Markdown name.

                                                                                                        2. 3

                                                                                                          RST is mentioned as an inspiration for MD; see my comment down below: https://lobste.rs/s/zwugbc/why_is_markdown_popular#c_o1ibid

                                                                                                          1. 1

                                                                                                            I recently fell down a bit of a Sphinx rabbit hole that got me into RST after years of Markdown and org. I really really appreciate how easy they make it to add your own plugins that can manipulate the parsed tree. That project is temporarily on the shelf but I’m hoping to get back into it when the snow falls more.

                                                                                                          1. 2

                                                                                                            I was thinking about similar stuff recently, and I have to say that manifolds with non-Euclidean 3D spaces can probably be made consistent (i.e., without needing “surgery” and “portals”) by enveloping the 3D space of the game in a 4D space.

                                                                                                            The portals and surgeries become just detours through the fourth dimension, similar to how a Möbius strip glues two surfaces together in the same plane.

                                                                                                            So far I haven’t seen any literature about people attempting to build this type of 4D environment[1], but I’m sure it would allow for pretty interesting gameplay elements.

                                                                                                            [1] The only game I know of using a homogeneous 4D environment (i.e., no seams, no portals, no surgery) is Miegakure, and its mechanics rely on different types of 4D manipulation than I’m thinking of. Marc ten Bosch’s 4D Toys is also a good playground for exploring 3D objects in a 4D environment.

                                                                                                            1. 2

                                                                                                              Bear with me, I think your idea is exceptionally interesting and I’m trying to think through the implications.

                                                                                                              (This might be the limiting thought that’s holding me back) To ultimately get rendered for the user, there’s going to be a series of projections to go from 4D to 3D to 2D on the screen. If we call the fourth dimension w, I have no problem contemplating a projection where different values of w can place you in entirely different places for the same x, y, z coordinate. Where my brain starts melting is when contemplating how you move smoothly through that world.

                                                                                                              Say I’m in a “flat” 3D ring that is actually a 4D spiral. A round tunnel with no Z component, but if you do a clockwise lap around it you don’t end up at the starting point, you end up at the same x, y, z coordinate but with a different w. How would I set up my player motion controls to do this? I have no problem conceptually with a 4D motion vector, just… figuring out how to implicitly transform the w component through that motion. Am I missing something simple here?

                                                                                                              Edit: doh, all it took was hitting the post button. The floor of a world like that is a flat plane, just like in 3D, but tilted relative to the xyz plane. So as you go around the ring in one direction you’re actually climbing in w, and as you go around the other way you’re descending.
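
                                                                                                              (A quick sketch of that ring with made-up names and constants: walking the angle around the ring keeps x, y, z on the same circle while w climbs linearly, so one full lap returns you to the same spot with a shifted w.)

                                                                                                              /* Flat ring in xyz that is really a spiral in 4D: one lap raises w by PITCH. */
                                                                                                              #include <math.h>
                                                                                                              #include <stdio.h>

                                                                                                              #define PI    3.14159265358979323846
                                                                                                              #define R     10.0   /* ring radius      */
                                                                                                              #define PITCH  4.0   /* w gained per lap */

                                                                                                              typedef struct { double x, y, z, w; } Vec4;

                                                                                                              static Vec4 ring_position(double theta) {
                                                                                                                  Vec4 p;
                                                                                                                  p.x = R * cos(theta);
                                                                                                                  p.y = R * sin(theta);
                                                                                                                  p.z = 0.0;                         /* "no Z component"        */
                                                                                                                  p.w = PITCH * theta / (2.0 * PI);  /* the floor tilted into w */
                                                                                                                  return p;
                                                                                                              }

                                                                                                              int main(void) {
                                                                                                                  Vec4 a = ring_position(0.0);
                                                                                                                  Vec4 b = ring_position(2.0 * PI);  /* one full lap */
                                                                                                                  /* x, y, z agree (up to rounding); only w has moved */
                                                                                                                  printf("w before: %.1f, w after one lap: %.1f\n", a.w, b.w);
                                                                                                                  return 0;
                                                                                                              }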

                                                                                                              1. 1

                                                                                                                Honestly I don’t think it’s as straightforward as that.

                                                                                                                Intuitively I was thinking of the problem by extrapolating how 2D surfaces in our 3D world behave for a 2D entity. For example the walls of a room form a 2D surface on which someone moving in a single direction would go around the whole room, but at each corner the properties of the world change in drastic ways (in this case the normal changes direction completely).

                                                                                                                How would that affect a 3D volume that is somehow mapped onto a 4D space? (The internet resources usually refer to this as an xyzt space, not xyzw.)

                                                                                                                One way would be that a staircase that goes up, changes orientation suddenly and after a certain point would actually descend (similar to how the plane would change its normal after a corner).

                                                                                                                However I suspect this model is too simplistic to reflect an actual 4D environment, and honestly I can’t really articulate more than that. :)

                                                                                                                1. 2

                                                                                                                  Yeah, a full 4D environment is pretty brain melting :D. To make anything at all make sense, I’m mostly thinking of a restricted 4D environment that’s basically an orthogonal projection from 4D-to-3D and then a normal perspective projection from 3D-to-2D. Just like how a 3D orthogonal projection effectively hides the depth axis, the 4D ortho projection would “hide” the w axis from the user.
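
                                                                                                                  (Roughly what I mean, as a sketch with made-up names: drop w for the 4D-to-3D step, then do an ordinary perspective divide for 3D-to-2D.)

                                                                                                                  #include <stdio.h>

                                                                                                                  typedef struct { double x, y, z, w; } Vec4;
                                                                                                                  typedef struct { double x, y, z; }    Vec3;
                                                                                                                  typedef struct { double x, y; }       Vec2;

                                                                                                                  /* Orthogonal 4D->3D: hide w, just like an orthographic view hides depth. */
                                                                                                                  static Vec3 ortho4to3(Vec4 p) { return (Vec3){ p.x, p.y, p.z }; }

                                                                                                                  /* Pinhole perspective 3D->2D, camera at the origin looking down +z. */
                                                                                                                  static Vec2 persp3to2(Vec3 p, double f) {
                                                                                                                      return (Vec2){ f * p.x / p.z, f * p.y / p.z };
                                                                                                                  }

                                                                                                                  int main(void) {
                                                                                                                      Vec4 p = { 1.0, 2.0, 5.0, 42.0 };  /* w never reaches the viewer */
                                                                                                                      Vec2 s = persp3to2(ortho4to3(p), 1.0);
                                                                                                                      printf("screen: (%.2f, %.2f)\n", s.x, s.y);
                                                                                                                      return 0;
                                                                                                                  }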

                                                                                                                  I hesitated to use t instead of w since that somewhat implies that time is the 4th axis but I didn’t want to confuse the matter by implying that it’s necessarily a time evolution of the 3D portion of the world.

                                                                                                                  One way would be that a staircase that goes up, changes orientation suddenly and after a certain point would actually descend (similar to how the plane would change its normal after a corner).

                                                                                                                  Ahhhhhhh, that’s super interesting, yeah, I hadn’t thought of discontinuities like that at all; in fact, part of where my initial hang-up came from was trying to think of how to make everything continuous!

                                                                                                            1. 2

                                                                                                              I have had multiple Yubikeys for a long time, as I am a frequent international traveler and am afraid of Google suddenly locking me out. Having a Yubikey gives me peace of mind, but losing a key can be a headache: I need to buy another costly key, log into every service that uses the key, remove it, and add the new one. With this project (I hope; I haven’t tested it), I may be able to simply gpg-encrypt my FIDO key and keep it safe on my computer rather than having to worry about my Yubikeys.

                                                                                                              1. 3

                                                                                                                It’s a really interesting tradeoff. Nominally the value prop of the Yubikey is that it’s a separate physical device so that you know that “even if someone were to steal my laptop and have unfettered access to my password manager, they still wouldn’t be able to access these services”

                                                                                                                But the flip side is definitely a concern too… if I lose my (physical) keychain, I won’t be able to access these services either.

                                                                                                                In my own experience, I have had a laptop stolen but haven’t ever had my keys stolen nor have I lost my keys for any length of time. My wife, though, has the opposite experience.

                                                                                                                1. 1

                                                                                                                  I mean, no, that’s not a good reason to use a Yubikey at all. It’s about having a smartcard that performs cryptographic operations for you without ever exposing the keys. Of course, using it as a second authentication factor is a specific use case of that, and being a separate device also provides multi-device access as a side effect. More to the point, compromising a device it’s plugged into cannot leak the keys and open further attack vectors.

                                                                                                              1. 1

                                                                                                                I remember being very confused about umbrella projects when I learnt Elixir… and now I remember I enjoyed writing Elixir code quite a lot and actually miss it 😅 OTP is one of those things that changed my perception of how software can be architected.

                                                                                                                1. 1

                                                                                                                  My time with Elixir a few years ago has undoubtedly had a huge influence on how I write complex C++ code today, for the better!

                                                                                                                1. 1

                                                                                                                  Huh! Maybe it’s time for me to write a blog post… I did something similar this weekend with a tap device, but in C-flavoured-C++ and LwIP. I’m just gearing up to run LwIP on a microcontroller and wanted to try it in userland without having to worry about hardware quirks and interrupts getting in the way.
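
                                                                                                                  (In case it saves someone a search: the userland side is just the usual Linux tun/tap dance. A minimal sketch with made-up names and no real error handling; the returned fd then reads and writes raw ethernet frames that you can hand to LwIP or anything else:)

                                                                                                                  /* Open a Linux tap device for userland testing; needs CAP_NET_ADMIN. */
                                                                                                                  #include <fcntl.h>
                                                                                                                  #include <string.h>
                                                                                                                  #include <sys/ioctl.h>
                                                                                                                  #include <unistd.h>
                                                                                                                  #include <linux/if.h>
                                                                                                                  #include <linux/if_tun.h>

                                                                                                                  int open_tap(const char *name) {
                                                                                                                      struct ifreq ifr;
                                                                                                                      int fd = open("/dev/net/tun", O_RDWR);
                                                                                                                      if (fd < 0)
                                                                                                                          return -1;

                                                                                                                      memset(&ifr, 0, sizeof ifr);
                                                                                                                      ifr.ifr_flags = IFF_TAP | IFF_NO_PI;    /* raw ethernet frames, no extra header */
                                                                                                                      strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

                                                                                                                      if (ioctl(fd, TUNSETIFF, &ifr) < 0) {   /* create/attach e.g. "tap0" */
                                                                                                                          close(fd);
                                                                                                                          return -1;
                                                                                                                      }
                                                                                                                      return fd;
                                                                                                                  }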

                                                                                                                  1. 5

                                                                                                                    Why make this a preprocessor feature? To me it sounds like a job for the linker — “please resolve this symbol to the contents of this binary file.”

                                                                                                                    Putting it in the preprocessor means the binary file is going to be converted to a huge ASCII list of comma-separated numbers, then parsed back down to a byte array, then written to the .o file. I’m sure that expansion can be cleverly optimized away with some work, but why is it even there?

                                                                                                                    1. 27

                                                                                                                      the whole point of this feature is so you don’t have to parse it - if you read the article, the author says he has to convince compiler authors that “a sufficiently clever compiler” is never going to be faster than copying a file. The intention is for implementors to turn this into some platform-specific linker directive.

                                                                                                                      1. 9

                                                                                                                        Well, any linker could support it without the help of the C standard, which pretty much only covers compilation. But having it dealt with by the preprocessor allows the compiler to be more intelligent about optimization and the like. I imagine that if it were a feature of the linker, the C standard would be hesitant to require that the C code could obtain the object’s size, for example, fearing that some linkers wouldn’t be able to easily provide more than a bare pointer. If it’s in the preprocessor, the compiler already knows its size (if the programmer wants that).

                                                                                                                        1. 6

                                                                                                                          To add to a good point, the compiler knows the size (very useful for some optimisations), and also the contents. The constexpr evaluations shown in the article aren’t possible otherwise.

                                                                                                                          1. 8

                                                                                                                            Even beyond optimization, being able to do sizeof(embedded_thing) is very useful!
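
                                                                                                                            For example (a sketch assuming a C23 compiler with #embed support; the file name is made up):

                                                                                                                            /* The array's size is known at compile time, so sizeof just works. */
                                                                                                                            #include <stdio.h>

                                                                                                                            static const unsigned char icon_data[] = {
                                                                                                                            #embed "icon.png"
                                                                                                                            };

                                                                                                                            int main(void) {
                                                                                                                                printf("icon is %zu bytes\n", sizeof icon_data);
                                                                                                                                return 0;
                                                                                                                            }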

                                                                                                                          2. 5

                                                                                                                            I didn’t really follow the standardisation process but IMHO this is, if not the, at least a correct answer. The linker approach is available in some compilers (if not all – I’ve “just included binary data” via the linker script countless times) but all it gives you is the equivalent of a void * (or, optimistically, a char *). The compiler doesn’t know anything about it. Just as bad, linters and other static analysis tools don’t know anything about it. If you want to do anything with that data short of sending it to a dumb display or whatever, the only appropriate comment above the code that does something with it is /* yolo */.
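
                                                                                                                            For contrast, this is roughly what the traditional linker route looks like (a sketch; the symbol names follow GNU ld's -b binary convention for a made-up file name):

                                                                                                                            /* Traditional route: turn the file into an object first, e.g.
                                                                                                                             *   ld -r -b binary -o icon.o icon.png
                                                                                                                             * which defines _binary_icon_png_start/_end. The compiler sees nothing but
                                                                                                                             * two addresses: no compile-time size, no contents, nothing for constexpr
                                                                                                                             * or static analysis to chew on. */
                                                                                                                            #include <stddef.h>

                                                                                                                            extern const unsigned char _binary_icon_png_start[];
                                                                                                                            extern const unsigned char _binary_icon_png_end[];

                                                                                                                            static size_t icon_size(void) {
                                                                                                                                /* only computable at run time, unlike sizeof on an #embed'ed array */
                                                                                                                                return (size_t)(_binary_icon_png_end - _binary_icon_png_start);
                                                                                                                            }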

                                                                                                                          3. 6

                                                                                                                            Putting it in the preprocessor means the binary file is going to be converted to a huge ASCII list of comma-separated numbers,

                                                                                                                            The article mentions the “as if” rule in the standard. The compiler has to behave “as if” it did this, but it doesn’t actually have to do this.

                                                                                                                            1. 1

                                                                                                                              Yes, that’s what I meant by “cleverly optimized away with some work”. But it’s architecturally ugly — it means the preprocessor is overstepping its bounds and passing stuff to the parser that isn’t source code.

                                                                                                                              1. 6

                                                                                                                                The preprocessor already produces non-standard C. Just running echo hello | cpp gives you the following output:

                                                                                                                                # 1 "<stdin>"
                                                                                                                                # 1 "<built-in>"
                                                                                                                                # 1 "<command-line>"
                                                                                                                                # 31 "<command-line>"
                                                                                                                                # 1 "/usr/include/stdc-predef.h" 1 3 4
                                                                                                                                # 32 "<command-line>" 2
                                                                                                                                # 1 "<stdin>"
                                                                                                                                hello
                                                                                                                                

                                                                                                                                Obviously, it does this to communicate line and file information to the user for the most part. But I don’t see why it would be a much more severe layering violation to make #embed "foo.svg" preprocess to something like # embed "/path/to/foo.svg", which the compiler can then interpret, if the preprocessor already produces non-standard C with the expectation that the compiler supports the necessary extensions.

                                                                                                                                1. 2

                                                                                                                                  One of the compilers uses __builtin_string_embed("base64=="), which does parse as valid source code.