Most of what I want out of a website is unformatted text and jump links. Given that, the appropriate (already existing) technology to deliver it with the minimum of fuss is not HTML+CSS but gophermaps! I’d love to see people use gopher+gophermaps instead of HTML+HTTP when formatting isn’t necessary, though that’s an even bigger leap than getting people to avoid CMSes for that use case.
Okay. gopher://i-logout.cz/1/en/bongusta/ and gopher://gopher.black/1/moku-pona are two feeds of gopher-based blogs (aka phlogs). Read and enjoy.
If I thought that the advertisers who keep trying to buy space on my blog (for link spam mostly) actually read my blog I might consider doing this.
I get e-mails like that too. I got one from Casper asking me to write a review, I think, and I gave them some outrageous number (like $5,000) and never heard back.
Also the post you linked to from that post about e-mail and small business, I’ve got a pretty similar story:
https://penguindreams.org/blog/how-google-and-microsoft-made-email-unreliable/
There are services out there now like Sendgrid and Mailgun to at least help small businesses get mail out without it going to spam, or of course MailChimp for mailing lists. I should really do a part II to that post at some point.
I run my own mail server and I have seen all the problems described in the post. From a German law perspective, the behaviour shown by Google and Microsoft probably qualifies as illegal under § 4 Nr. 4 UWG (Act against unfair competition, English translation). If anyone reading this runs an e-mail-based business in Germany, you should consider challenging them for the sake of the free e-mail exchange.
I can’t speak for the rest of the teams at work, only for my own team, which is somewhat unique to the company because our code actually interfaces with our customers, the various Monopolistic Phone Companies, in the call path of a phone call. With that out of the way …
The developers (me and two others) write the code. Bug fix, feature enhancement, new feature, what have you. Once we are happy, the ticket (Jira) is assigned to our QA engineer, who does testing in the DEV environment (programs are generated by our build system, Jenkins; ops can automatically push stuff from Jenkins to the various environments mentioned). Once that passes, it moves on to the QA environment and another round of testing. Once that passes, it moves on to STAGING for another round of testing. Once that passes, we then submit a request to our customer, the Monopolistic Phone Company, stating our intent to upgrade our stuff. They have 10 business days to accept or reject the proposal. If they reject, we wait and try again later. If they accept, on the agreed-upon day (actually, night), ops will push to PRODUCTION. During the push, the QA engineer and developer(s) in question will also be there (on the phone and via chat) to test and make sure things went okay in the deployment.
There have been two times when during the push to PRODUCTION, things weren’t working correctly, and I (the developer) called for a roll back, which is easy in our environment (both times dealt with parsing telephone numbers [1]).
The testing of our stuff is about half automated. The “happy path” is automated, but the “less-so-happy paths” aren’t, and they’re quite complex to set up: our business logic component makes queries to two different services, and we need to handle the case when both time out, or one but not the other. That’s four test cases, and only one (both reply in time) is easy to test (there’s a difference between the service being down and its reply coming in too late). As for writing the automated tests we do have, that has been my job (I started out as the QA engineer for the team in question, even before it was a team [2]). The QA engineer does write some code, but will come and ask me about the finer points of testing some of the scenarios.
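The four timeout combinations described above can be enumerated mechanically so none of the “less-so-happy paths” is forgotten. This is only a sketch: `handle_request` and the outcome strings are hypothetical stand-ins for the real business-logic component, which isn’t shown in the comment.

```python
import itertools

def handle_request(service_a_replied, service_b_replied):
    """Return a coarse outcome depending on which backends replied in time.
    (Hypothetical stand-in for the business-logic component.)"""
    if service_a_replied and service_b_replied:
        return "full-answer"
    if service_a_replied or service_b_replied:
        return "partial-answer"
    return "fallback"

# Enumerate every combination of the two services replying or timing out.
cases = list(itertools.product([True, False], repeat=2))
assert len(cases) == 4  # the four test cases mentioned above

for a_ok, b_ok in cases:
    print(f"A replied: {a_ok!s:5}  B replied: {b_ok!s:5} -> {handle_request(a_ok, b_ok)}")
```

Of course, enumerating the cases is the easy part; as the comment notes, the hard part is arranging for a real service to reply *too late* rather than not at all.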
Sadly, STAGING and PRODUCTION should match, but for our team, they don’t. And some stuff can’t be tested until it hits PRODUCTION, because how do you test an actual cell phone in a lab environment? Especially when your company can’t afford a lab environment like the Monopolistic Phone Company has?
[1] If you can avoid doing so, avoid it. I cannot. I need to parse numbers for the North American Numbering Plan, and just that is difficult enough. I’ve seen numbers that make me seriously question the competency of the Monopolistic Phone Company to manage their own phone networks.
[2] Long story.
Clients I work with use a thing called a femtocell to set up cell networks from one country in another, in order to run tests. It’s quite expensive, though (or so I hear), and they’re moving to other alternatives as much as possible.
Everything dealing with the Monopolistic Phone Company is expensive, even the equivalent of DNS queries [1].
[1] Take a phone number and look up the name. Funny enough, this is done over DNS! (NAPTR records.) Imagine having to pay for every DNS lookup. It’s insane.
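For anyone curious how a phone number becomes a DNS query at all: the ENUM scheme (RFC 6116) reverses the digits of an E.164 number and dots them under `e164.arpa`, and NAPTR records are then looked up at the resulting name. A minimal sketch of just the name construction (no network involved):

```python
def e164_to_enum_domain(number: str, suffix: str = "e164.arpa") -> str:
    """Map an E.164 phone number to the ENUM domain whose NAPTR records
    would be queried (RFC 6116): keep only the digits, reverse them,
    and join them with dots under the suffix."""
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

print(e164_to_enum_domain("+1-555-123-4567"))
# -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
```

A resolver would then query that name for NAPTR records; the actual lookups (and, per the comment, the billing for them) happen elsewhere.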
The problem turns out to be some obscure FUSE mounts that the author had lying around in a broken state, which subsequently broke the kernel namespace system. Meanwhile, I have been running systemd on every computer I’ve owned for many years and have never had a problem with it.
Does this not seem a bit melodramatic?
From the twitter thread:
Systemd does not of course log any sort of failure message when it gives up on setting up the DynamicUser private namespace; it just goes ahead and silently runs the service in the regular filesystem, even though it knows that is guaranteed to fail.
It sounds like the system had an opportunity to point out an anomaly that would guide the operator in the right direction, but instead decided to power through anyways.
It’s a lot like how continuing to run in a degraded state is a plague that affects distributed systems. Everybody thinks “some service is surely better than no service” is a good idea until it happens to them.
At $work we prefer degraded mode for critical systems. If they go down we make no money, while if they kind of sludge on we make less but still some money while we firefight whatever went wrong this time.
My belief is that inevitably you could be making $100 per day, would notice if you made $0, but are instead making $10 and won’t notice this for six months. So be careful.
We have monitoring and alerting around how much money is coming in, that we compare with historical data and predictions. It’s actually a very reliable canary for when things go wrong, and for when they are right again, on the scale of seconds to a few days. But you are right that things getting a little suckier slowly over a long time would only show up as real growth not being in line with predictions.
I tend to agree that hard failures are nicer in general (especially to make sure things work), but I’ve also been in scenarios where buggy logging code has caused an entire service to go down, which… well that sucked.
There is a justification for partial service functionality in some cases (especially when uptime is important), but as with many things, I think the judgement calls there are usually so wrong that I prefer hard failures in almost all cases.
Everybody thinks it’s a good idea “some service is surely better than no service” until it happens to them.
So if the server is over capacity, kill it and don’t serve anyone?
Router can’t open and forward a port, so cut all traffic?
I guess that sounds a little too hyperbolic.
But there’s a continuum there. At $work, I’ve got a project that tries to keep going even if something is wrong. Honest, I’m not sure I like how all the errors are handled. But then again, the software is supposed to operate rather autonomously after initial configuration. Remote configuration is a part of the service; if something breaks, it’d be really nice if the remote access and logs and all were still reachable. And you certainly don’t want to give up over a problem that may turn out to be temporary or something that could be routed around… reliability is paramount.
And you certainly don’t want to give up over a problem that may turn out to be temporary
I think that’s close to the core of the problem. Temporary problems recur, worsen, etc. I’m not saying it’s always wrong to retry, but I think one should have some idea of why the root problem will disappear before retrying. Computers are pretty deterministic. Transient errors indicate incomplete understanding. But people think a try-catch in a loop is “defensive”. :(
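The alternative to the “try-catch in a loop” pattern the comment criticizes is a *bounded* retry that fails hard once its budget is exhausted, used only when you have a reason to believe the root cause is transient. A minimal sketch (names and the choice of `OSError` are assumptions for illustration):

```python
import time

def retry_with_backoff(operation, attempts=3, base_delay=0.1):
    """Retry a transient-looking failure a bounded number of times with
    exponential backoff, then fail hard instead of looping forever.
    The caller should have an actual reason to believe the root cause
    will go away before the attempts run out."""
    for attempt in range(attempts):
        try:
            return operation()
        except OSError:  # only catch errors with a plausible transient cause
            if attempt == attempts - 1:
                raise  # give up loudly rather than degrade silently
            time.sleep(base_delay * (2 ** attempt))

# Usage: an operation that recovers on its second attempt.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 2:
        raise OSError("transient")
    return "ok"

print(retry_with_backoff(flaky))  # -> ok
```

The key design choice, echoing the comment above: the final `raise` makes the failure visible to the operator instead of hiding it behind an endless loop.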
So you never had legacy systems (or configurations) to support? I read Chris’ blog regularly, and he works at a university on a heterogeneous network (some Linux, some other Unix systems) that has been running Unix for a long time. I think he started working there before systemd was even created.
Why do you say that the FUSE mounts were broken? As far as we can see they were just set up in an uncommon way: https://twitter.com/thatcks/status/1027259924835954689
It does look brittle that broken FUSE mounts prevent ntpd from running. IMO the most annoying part is the debuggability of the issue.
Yes, it seems melodramatic, even to my anti-systemd ears. It’s a documentation and error reporting problem, not a technical problem, IMO. Olivier Lacan gave a great talk last year about good errors and bad errors (https://olivierlacan.com/talks/human-errors/). I think it’s high time we start thinking about how to improve error reporting in software everywhere – and maybe one day human-centric error reporting will be as ubiquitous as unit testing is today.
In my view (as the original post’s author) there are two problems in view. That systemd doesn’t report useful errors (or even notice errors) when it encounters internal failures is the lesser issue; the greater issue is that it’s guaranteed to fail to restart some services under certain circumstances due to internal implementation decisions. Fixing systemd to log good errors would not cause timesyncd to be restartable, which is the real goal. It would at least make the overall system more debuggable, though, especially if it provided enough detail.
The optimistic take on ‘add a focus on error reporting’ is that considering how to report errors would also lead to a greater consideration of what errors can actually happen, how likely they are, and perhaps what can be done about them by the program itself. Thinking about errors makes you actively confront them, in much the same way that writing documentation about your program or system can confront you with its awkward bits and get you to do something about them.
Speaking as a C programmer, this is a great tour of all the worst parts of C. No destructors, no generics, the preprocessor, conditional compilation, check, check, check. It just needs a section on autoconf to round things out.
It is often easier, and even more correct, to just create a macro which repeats the code for you.
A macro can be more correct?! This is new to me.
Perhaps the overhead of the abstract structure is also unacceptable..
Number of times this is likely to happen to you: exactly zero.
C function signatures are simple and easy to understand.
It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it, and so has certain lifetime expectations for it. Not one single piece of documentation I’ve seen in the last 5 years mentions this fact.
It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it
Which system? I’m pretty sure OpenBSD doesn’t.
Linux (that’s the manpage I linked to above). This was before I discovered OpenBSD.
Edit: I may be misremembering and maybe it was connect() that was the problem. It too seems fine on OpenBSD. Here’s my original eureka moment from 2011: https://github.com/akkartik/wart/commit/43366d75fbfe1. I know it’s not specific to that project because @smalina and I tried it again with a simple C program in 2016. Again on Linux.
I find it really hard to believe the kernel would keep that userspace pointer around so …
https://github.com/torvalds/linux/blob/e978de7a6d382ec378830ca2cf38e902df0b6d84/net/socket.c#L1665
https://github.com/torvalds/linux/blob/e978de7a6d382ec378830ca2cf38e902df0b6d84/net/socket.c#L188
Bind is there too: https://github.com/torvalds/linux/blob/e978de7a6d382ec378830ca2cf38e902df0b6d84/net/socket.c#L1487
Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.
I’ll dig up a simple test program later today.
Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.
bind and connect are syscalls; libc would only have a stub doing the syscall, if anything at all, since they are not part of the standard library.
Perhaps the overhead of the abstract structure is also unacceptable..
Number of times this is likely to happen to you: exactly zero.
I have to worry about my embedded C code being too big for the stack as it is.
Certainly. But is the author concerned with embedded programming? He seems to be speaking of “systems programming” in general.
Also, I interpreted that section as being about time overhead (since he’s talking about the optimizer eliminating it). Even in embedded situations, have you lately found the time overheads concerning?
I work with 8-bit AVR MCUs. I often found myself having to cut corners and avoid certain abstractions, because that would have resulted either in larger or slower binaries, or would have used significantly more RAM. On an Atmega32U4, resources are very limited.
Perhaps the overhead of the abstract structure is also unacceptable..
Number of times this is likely to happen to you: exactly zero.
Many times, actually. I see FSM_TIME. Hmm … seconds? Milliseconds? No indication of the unit. And what is FSM_TIME? Oh … it’s SYS_TIME. How cute. How is that defined? Oh, it depends upon operating system and the program being compiled. Lovely abstraction there. And I’m still trying to figure out the whole FSM abstraction (which stands for “Finite State Machine”). It’s bad enough to see a function written as:
static FSM_STATE(state_foobar)
{
...
}
and then wondering where the hell the variable context is defined! (a clue—it’s in the FSM_STATE() macro).
And that bind() issue is really puzzling, since that hasn’t been my experience at all, and I work with Linux, Solaris, and Mac OS X currently.
I agree that excessive abstractions can hinder understanding. I’ve said this before myself: https://news.ycombinator.com/item?id=13570092. But OP is talking about performance overhead.
I’m still trying to reproduce the bind() issue. Of course when I want it to fail it doesn’t.
I’m very happy with vgo after spending years frustrated with buggy, partially-broken third party tools: first glide (no way to upgrade just one package, randomly fails operations) then dep (100+ comment issue on not supporting private repos).
This comment from HN sums up my feelings on this post:
Go does not exist to raise the profiles of Sam Boyer and Peter Bourgon. Sam wanted to be a Big Man On Campus in the Go community and had to learn the hard way what the D in BDFL means. The state of dep is the same as it was before - an optional tool you might use or might not.
Lots of mentions in Peter’s post about things the “dep committee” may or may not have agreed with. Isn’t this the same appeal to authority he is throwing at Russ? When did the “dep committee” become the gatekeepers of Go dependency discussions and solutions? Looks like a self-elected shadow government, except it didn’t have a “commit bit”. Someone should have burst their balloon earlier, that is the only fault here. Russ, you are at fault for leading these people on.
Go is better off with Russ’s module work and I personally don’t care if Sam and Peter are disgruntled.
This is an extremely bad faith interpretation of events. Your words have an actual negative effect on people who have tried for a very long time to do the best they could to improve a bad situation.
had to learn the hard way what the D in BDFL means
Except Go is not (or at least doesn’t market itself as) BDFL-led. The core team has been talking about building and empowering the community for years (at least since Gophercon 2015, with Russ’ talk).
When did the “dep committee” become the gatekeepers of Go dependency discussions and solutions?
They were put in place by / with the blessing of the Go core team, so some authority on the subject was certainly implied.
Go is better off with Russ’s module work
You can certainly prefer Russ’s technical solution, that’s only part of the thing being discussed (and I think it’s fair to say it’s not the heart of the matter).
The rest of your quotes are just mean.
People don’t seem to realize that Go is not driven by the community, it’s driven by Google. It’s clear to me that Google doesn’t trust its programmers to use any advanced features, the code is formatted the same (again, don’t trust the programmer), everything is kept in one single repo and there is no versioning [1]. In my opinion, Google only released Go to convince a ton of programmers it’s the New Hotness (TM), get everybody using it so they can cut down on training costs and disappointed engineers looking for resume-worthy tech to work on [2].
So, any proposal for Go that violates Google’s workflow will be rejected [3]. Any proposal that is neutral or even helps Google will probably be accepted. As far as I’m concerned, Go is a Google-proprietary language that solves problems Google has. The fact that it is available for others to use is intentional on Google’s part, but in no way is it “community driven.”
[1] Because if you change the signature of a function, it is up to you to change all the call sites at the same time. Because the code is all formatted the same way, there does exist tooling to do this. At Google. Probably nowhere else.
[2] “What do you mean we’ve got to use this proprietary crap language? I can’t put this on my resume! My skills will stagnate here! I hate you, Google!”
[3] Stonewalled. Politely told no. But ultimately, it will be rejected.
To be fair, “don’t trust the programmer” is a pretty good rule to follow when you design a language or API. Not because programmers are bad or incompetent, but because they are human and thus predisposed to make mistakes over time.
hrm, I actually want to push back against this quite strongly. any BDFL making decisions in the absence of community input will quickly find themselves the BDFL of a project that has no users, or at least one that often makes poor technical choices. Also, framing this disagreement as a personal one where prestige and reputation are at stake rather than as a technical one is a characterization that nobody other than the involved parties can make, certainly not people uninvolved in the project at all. In particular, making character judgements about people you don’t know based on technical blog posts is something I expect from the orange website, but I’d like to think the community here is a bit better.
and as far as that technical disagreement goes, I’ve read through rsc’s rationale and I’m not any more convinced than I was in the beginning that jettisoning a well known package management path (SAT-solver) in favor of a bespoke solution is the correct decision. It is definitely the Golang thing to do, but I don’t know if it’s the best. Time will tell.
A realization I recently had:
Why don’t we abstract away how a piece of code is displayed from how it’s stored in a file? That is, the editor reads the file, parses it into an AST, and displays it according to the programmer’s preference (e.g., elastic tabstops, Elm-like comma-leading lists, newline/no-newline before opening braces, etc.). And prior to save, the editor simply runs it through an uncustomized prettier first.
There are a million and one ways to view XML data without actually reading/writing pure XML. Why not do that with code as well?
This idea has been floating around the interwebz for a long time. I recall it being stated almost verbatim on Reddit, HN, and probably on /.
And once you take it a step further, it’s clear that it shouldn’t be in a text file in the first place. Code just isn’t text. If you store it as a tree or a graph in some sort of database, it becomes possible to interact with it in much more powerful ways (including displaying it any way you like). We’ve been hobbled by equating display representation with storage format.
This talk touches on this issue, along with some related ones and HCI in general: Bret Victor: The Future of Programming
God, I have been trying to recall the name of this talk for ages! Thank you so much, it is a great recommendation
Text is great when (not if) your more complicated tools fail or do something you can’t tolerate and you need to use tools which don’t Respect The Intent of designers who, for whatever reason, don’t respect your intent or workflow. Sometimes, solving a problem means working around a breakage, whether or not that breakage is intentional on someone else’s part.
Besides, we just (like, last fifteen or so years) got text to the point where it’s largely compatible. Would be a shame to throw that away in favor of some new AST-database-thing which only exists on a few platforms.
I’m not sure I get your point about intent. Isn’t the same already true of, say, compilers? There are compiler bugs that we have to work around, there are programs that seem logical to us but the compiler won’t accept, and so on. Still, everybody seems to be mostly happy to file a compiler bug or a feature request, and live with a workaround for the present. Seems like it works well enough in practice.
I understand your concern about introducing a new format but it sounds like a case of worse-is-better. Sure, we get a lot of convenience from the ubiquity of text, but it would nevertheless be sad if we were stuck with it for the next two centuries.
With compilers, there are multiple of them for any given language, if the language is important enough, and you can feed the same source into all of them, assuming that source is text.
I’ve never seen anyone casually swap out the compiler for production code. Also, for the longest time, if you wrote C++ for Windows, you pretty much had to use the Microsoft compiler. I’m sure that there are many embedded platforms with a single compiler.
If there’s a bug in the compiler, in most cases you work around it, then patiently wait for a fix from the vendor.
So that’s hardly a valid counterpoint.
Re: swapping out the compiler for production code: most if not all cross-platform C++ libraries can be compiled on at least LLVM, GCC, and MSVC.
Yes, I’m aware of that, but what does it have to do with anything I said?
EDIT: Hey, I went to Canterbury :)
“I’ve never seen anyone casually swap out the compiler for production code” sounded like you were saying people didn’t tend to compile the same production code on multiple compilers, which of course anyone that compiles on windows and non-windows does. Sorry if I misinterpreted your comment!
My first comment is in response to another Kiwi. Small world. Pretty cool.
This, this, a thousand times this. Text is a good user interface for code (for now). But it’s a terrible storage and interchange format. Every tool needs its own parser, and each one is slightly different, to say nothing of the amount of CPU and programmer time we waste going from text<->AST<->text.
Yeah, it’s obviously wasteful and limiting. Why do you think we are still stuck with text? Is it just sheer inertia and incrementalism, or does text really offer advantages that are challenging to recreate with other formats?
The text editor I use can handle any computer language you can throw at it. It doesn’t matter if it’s BASIC, C, BCPL, C++, SQL, Prolog, Fortran 77, Pascal, x86 Assembler, Forth, Lisp, JavaScript, Java, Lua, Make, Hope, Go, Swift, Objective-C, Rexx, Ruby, XSLT, HTML, Perl, TCL, Clojure, 6502 Assembler, 68000 Assembler, COBOL, Coffee, Erlang, Haskell, Ocaml, ML, 6809 Assembler, PostScript, Scala, Brainfuck, or even Whitespace. [1]
Meanwhile, the last time I tried an IDE (last year I think) it crashed hard on a simple C program I attempted to load into it. It was valid C code [2]. That just reinforced my notion that we aren’t anywhere close to getting away from text.
[1] APL is an issue, but only because I can’t type the character set on my keyboard.
[2] But NOT C++, which of course, everybody uses, right?
To your point about text editors working with any language, I think this is like arguing that the only tool required by a carpenter is a single large screwdriver: you can use it as a hammer, as a chisel, as a knife (if sharpened), as a wedge, as a nail puller, and so on. Just apply sufficient effort and ingenuity! Does that sound like an optimal solution?
My preference is for powerful specialised tools rather than a single thing that can be kind of sort of applied to a task.
Or, to approach from the opposite direction, would you say that a CAD application or Blender are bad tools because they only work with a limited number of formats? If only they also allowed you to edit JPEGs and PDFs, they would be so much better!
To your point about IDEs: I think that might even support my argument. Parsing of freeform text is apparently sufficiently hard that we’re still getting issues like the one you saw.
I use other tools besides the text editor—version control, compilers, linkers, debuggers, and a whole litany of Unix tools (grep, sed, awk, sort, etc.). The thing I want to point out is that as long as the source code is in ASCII (or UTF-8), I can edit it. I can study it. I might not be able to compile it (because I lack the INRAC compiler), but I can still view the code. How does one “view” Smalltalk code when one doesn’t have Smalltalk? Or Visual Basic? Last I heard, Microsoft wasn’t giving out the format for Visual Basic programs (and good luck even finding the format for VB from the late 90s).
The other issue I have with IDEs (and I will come out and say I have a bias against the things because I’ve never had one that worked for me for any length of time without crashing, and I’ve tried quite a few over 30 years) is that you have one IDE for C++, and one for Java, and one for Pascal, and one for Assembly [1] and one for Lua and one for Python and man … that’s just too many damn environments to deal with [2]. Maybe there are IDEs now that can work with more than one language [3] but again, I’ve yet to find one that works.
I have nothing against specialized tools like AutoCAD or Blender or Photoshop or even Deluxe Paint, as long as there is a way to extract the data when the tool (or the company) is no longer around. Photoshop and Deluxe Paint work with defined formats that other tools can understand. I think Blender works with several formats, but I am not sure about AutoCAD (never having used it).
So, why hasn’t anyone stored and manipulated ASTs? I keep hearing cries that we should do it, and yet no one has done it … I wonder if it’s harder than you imagine …
Edited to add: Also, I’m a language maven, not a tool maven. It sounds like you are a tool maven. That colors our perspectives.
[1] Yes, I’ve come across several of those. Never understood the appeal …
[2] For work, I have to deal with C, C++, Lua, Make and Perl.
[3] Yeah, the last one that claimed C/C++ worked out so well for me.
For your first concern about the long term accessibility of the code, you’ve already pointed out the solution: a defined open format.
Regarding IDEs: I’m not actually talking about IDEs; I’m talking about an editor that works with something other than text. Debugging, running the code, profiling etc. are different concerns and they can be handled separately (although again, the input would be something other than text). I suppose it would have some aspects of an IDE because you’d be manipulating the whole code base rather than individual files.
Regarding the language maven post: I enjoyed reading it a few years ago (and in practice, I’ve always ended up in the language camp as an early adopter). It was written 14 years ago, and I think the situation is different now. People have come to expect tooling, and it’s much easier to provide it in the form of editor/IDE plugins. Since language creators already have to do a huge amount of work to make programs in their languages executable in some form, I don’t think it would be an obstacle if the price of admission also included dealing with the storage format and representation.
To your point about lack of implementations: don’t Smalltalk and derivatives such as Pharo qualify? I don’t know if they store ASTs but at least they don’t store text. I think they demonstrate that it’s at least technically possible to get away from text, so the lack of mainstream adoption might be caused by non-technical reasons like being in a local maximum in terms of tools.
The problem, as always, is that there is such a huge number of tools already built around text that it’s very difficult to move to something else, even if the post-transition state of affairs would be much better.
Text editors are language agnostic.
I’m trying to conceive of an “editor” that works with something other than text. Say an AST. Okay, but in Pascal, you have to declare variables at the top of each scope; you can declare variables anywhere in C++. In Lua, you can just use a variable, no declaration required. LISP, Lua and JavaScript allow anonymous functions; only the latest versions of C++ and Java allow anonymous functions, but they’re restricted in that you can’t create closures, since C++ and Java have no concept of closures. C++ has exceptions, Java has two types of exceptions, C doesn’t; Lua kind of has exceptions but not really. An “AST editor” would have to somehow know what is and isn’t allowed per language, so that if I’m editing C++ and write an anonymous function, I don’t reference variables outside the scope of said function, whereas I can for Lua.
Okay, so we step away from AST—what other format do you see as being better than text?
I don’t think it could be language agnostic - it would defeat the purpose as it wouldn’t be any more powerful than existing editors. However, I think it could offer largely the same UI, for similar languages at least.
And that is my problem with it. As stated, I use C, C++ [1], Lua, Make and a bit of Perl. That’s at least what? Three different “editors” (C/C++, Lua/Perl (maybe), Make). No thank you, I’ll stick with a tool that can work with any language.
[1] Sparingly and where we have no choice; no one on my team actually enjoys it.
Personally, I’m not saying you should need to give up your editor of choice. Text is a good (enough for now) UI for coding. But it’s a terrible format to build tools on. If the current state of the code lived in some sort of event-based graph database for example, your changes could trigger not only your incremental compiler, but source analysis (only on what’s new), it could also maintain a semantic changelog for version control, trigger code-generation (again, only what’s new).
There’s a million things that are currently “too hard” which would cease to be too hard if we had a live model of the code as various graphs (not just the ast, but call graphs, inheritance graphs, you-name-it) that we could subscribe to, or even write purely-functional consumers that are triggered only on changes.
Inertia, arrogance, worse-is-better; Working systems being trapped behind closed doors at big companies; Hackers taking their language / editor / process on as part of their identity that needs to be defended with religious zeal; The complete destruction of dev tools as a viable business model; Methodologies-of-the-week…. The causes are numerous and varied, and the result is software dev is being hamstrung and we’re all wasting countless hours and dollars doing things computers should be doing for us.
I think that part of the issue is that we haven’t seen good structured editor support outside of Haskell and some Lisps.
Having a principled foundation for structured editing, plus the critical mass of having it work for a language like JavaScript or Ruby, would go a long way toward making this concept mainstream. After that we could say “provide a grammar for your favorite language X and get structured editor support!”. This then becomes “everything is structured at all levels!”
I think it’s possible that this only works for a subset of languages.
Structured editing is good in that it operates at a higher level than characters, but ultimately it’s still a text editing tool, isn’t it? For example, I think it should be trivial to pull up a list of (editable) definitions for all the functions in a project that call a given function, or to sort function and type definitions in different ways, or to substitute function calls in a function with the bodies of those functions to a given depth (as opposed to switching between different views to see what those functions do). I don’t think structured editing can help with tasks like that.
There are also ideas like Luna, have you seen it? I’m not convinced by the visual representation (it’s useful in some situations but I’m not sure it’s generally effective), but the interesting thing is they provide both a textual and a visual representation of the code.
Python has a standard library module for parsing Python code into an AST and modifying the AST, but I don’t know of any Python tools that actually use it. I’m sure some of them do, though.
Lisp, in fact. Smalltalk lives in an image, Lisp lives in the real world. ;)
Besides, Lisp already is the AST. Smalltalk has too much sugar, which is a pain in the AST.
Possibly, but I’m only talking about a single aspect of it: being able to analyse and manipulate the code in more powerful ways than afforded by plain text. I think that’s equally possible for FP languages.
Ultimately I think this is the only tenable solution. I feel I must be in the minority in having an extreme dislike of columnar-style code, and what I call “white space cliffs” where a column dictates a sudden huge increase in whitespace. But I realize how much it comes down to personal aesthetics, so I wish we could all just coexist :)
Yeah, I’ve been messing around with similar ideas, see https://nick.zoic.org/art/waste-web-abstract-syntax-tree-editor/ although it’s only vapourware so far because things got busy …
Many editors already do this to some extent. They just render 4-space tabs as whatever the user asks for. Everything after the indent, though, is assumed to be spaced appropriately (which seems right, anyway?)
You can’t convert to elastic-tabstop style from that, and without heavy language-grammar knowledge you can’t do this for 4-space “tabs” generally.
Every editor ever supports this for traditional indent style, though: http://intellindent.info/seriously/
To be clear, you can absolutely render a file that doesn’t have elastic tabstops as if it did. The way a file is rendered has nothing to do with the actual text in the file.
It’s like you’re suggesting that you can’t render a file containing a ton of numbers as a 3D scene in a game engine. That would be just wrong.
Regardless, my point is specifically that this elastic tabstops thing is not necessary and hurts code readability more than it helps.
The pedantry of distinguishing between tabs and tabstops is a silly thing as well. Context gives more than enough information to know which one is being talked about.
It sounds like this concept is creating more problems than it solves, and is causing your editor to solve problems that only exist in the developer’s imagination. It’s not “KISS” at all, quite the opposite.
Because presentation isn’t just a function of the AST. Indentation usually is, but alignment can be visually useful for all kinds of reasons.
The historical reason why -Wall doesn’t enable all warnings is that warnings have been gradually added to the compiler over time. Adding new warnings to existing options could cause builds to fail after upgrading gcc.
Moreover, some pairs of warnings are incompatible (in the sense that any code accepted by one would be rejected by the other). An example of this is -Wstrict-prototypes and -Wtraditional.
The historical reason why -Wall doesn’t enable all warnings is that warnings have been gradually added to the compiler over time. Adding new warnings to existing options could cause builds to fail after upgrading gcc.
I’m aware of that, though I still find it wrong that -Wall doesn’t actually include the new warnings; a build breaking on upgrade with -Wall is, in my opinion, the more logical outcome. I would rather have flags like -Wall4.9 that would remain constant across upgrades, so no one who’s using just that subset of the warnings breaks their build. -Wall can then remain true to its meaning. Seeing that the ship sailed on that a long time ago, I would still like a -WliterallyAll (it can be called something else) that would include -Wall, -Wextra and others like -Wstrict-overflow.
Moreover, some pairs of warnings are incompatible (in the sense that any code accepted by one would be rejected by the other). An example of this is -Wstrict-prototypes and -Wtraditional.
These ones can’t be and don’t have to be included.
I really like the idea of -Wall-from=$VERSION, and you could even support -Wall-from=latest for people who truly are okay with their builds breaking whenever they upgrade their compiler.
clang supports -Weverything which I’ve tried, and it happily spews out contradictory warnings (“Padding bytes added to this structure”, “No padding bytes have been added to this packed structure!”) along with (in my opinion) useless warnings (“converting char to int without cast”).
These ones can’t be and don’t have to be included.
So your -WliterallyAll would not enable literally all warnings either? I’m not sure how that solves the problem.
I would rather have flags like -Wall4.9 that would remain constant on upgrades so no one who’s just using that subset of the warnings breaks their build. -Wall can then remain true to its meaning.
Now this is a neat idea that I can get behind.
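Until compilers grow something like a pinned -Wall-from=, a project can approximate the same thing today by spelling its warning set out explicitly. A minimal sketch (the particular flags below are an illustrative sample, not the actual contents of any -Wall):

```shell
# Approximating a frozen "-Wall4.9" by listing warnings explicitly;
# this set stays constant across compiler upgrades because nothing
# here expands to "whatever the compiler considers -Wall today".
# (Flag selection is illustrative, not the real -Wall contents.)
WARN="-Wreturn-type -Wuninitialized -Wunused-variable -Wformat=2"
# Used as: cc $WARN -c foo.c
```

The downside, of course, is that nobody curates the list for you, which is exactly the service -Wall was supposed to provide.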
Correction: I have been informed that new warnings actually have been added to -Wall on multiple occasions.
The better explanation for why -Wall leaves many warnings disabled is that many of them are just not useful most of the time. The manual states:
Note that some warning flags are not implied by -Wall. Some of them warn about constructions that users generally do not consider questionable, but which occasionally you might wish to check for; others warn about constructions that are necessary or hard to avoid in some cases, and there is no simple way to modify the code to suppress the warning.
In other words, it might be better to think of -Wall not as “all warnings”, but as “all generally useful warnings”.
Except the -Weffc++ warnings, those are really annoying and are not really about actual problems in your code.
I agree that make is too freaking hard. It’s a terrible tool and you don’t have to use it. It took me years to realize this. I deleted the makefiles from my projects. I no longer use makefiles.
Yup. I should also write a blog post on “invoking the compiler via a shell script”.
The main thing to know is that the .c source files and -l flags are order dependent. With a makefile, most people use separate processes to compile and link so I think it doesn’t come up as much.
I don’t use Make as a build tool, but I find it quite handy to collect small scripts and snippets with PHONY targets that don’t attempt any dependency tracking. Make is almost universally available, the simple constructs I use are portable between gmake and BSD make, and almost every higher-level tool out there understands Makefiles — so coworkers using various IDEs and the command line can all discover and run the “build”, “this test”, “download dependencies”, “run import process”, “lint”, etc tasks. If I need a task that’s more than two lines, I put it in a shell script.
Although some languages now come with tooling that understands scripts, such as Cargo or NPM, I still find a Makefile useful for polyglot projects or when it’s necessary to modify the environment before calling down to that language specific tooling.
Yes, I want to write about this too! You are using Make like a shell script :)
I use this pattern in shell:
# run.sh
build() {
...
}
test() {
...
}
"$@"
Then I invoke with
$ run.sh build
...
$ run.sh test
I admit that Make has a benefit in that the targets are auto-completed on most distros. But I wrote my own little auto-complete that does this. I like the simplicity of shell vs. make, and the syntax highlighting in the editor.
When I need dependency tracking, I simply invoke make from the shell script! Processes compose.
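A fleshed-out version of that dispatch pattern, with placeholder bodies (the real commands are whatever your project needs). One wrinkle worth noting: the task is named check rather than test here, because a shell function called test would shadow the test builtin:

```shell
#!/bin/sh
# run.sh: each shell function is a task; the "$@" at the bottom
# dispatches to whichever task name was passed on the command line.
# Task bodies below are placeholders for real commands.

build() {
  echo "building..."       # e.g. cc -o app main.c
}

check() {                  # named "check", not "test", so we don't
  echo "running tests..."  # shadow the shell builtin "test"
}

# Invoke the function named by the first argument:
#   ./run.sh build
#   ./run.sh check
"$@"
```

With no arguments, "$@" expands to nothing and the script is a no-op, which is a handy property: sourcing the file just defines the tasks.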
You’ll see this in thousands of lines of shell scripts (that I wrote) in the repo:
About two years ago I finally sat down and read the GNU Make manual. It’s very readable, and it’s more capable than just about any other make out there. For one project, the core of the Makefile is:
%.a :
	$(AR) $(ARFLAGS) $@ $?

libIMG/libIMG.a     : $(patsubst %.c,%.o,$(wildcard libIMG/*.c))
libXPA/src/libxpa.a : $(patsubst %.c,%.o,$(wildcard libXPA/src/*.c))
libStyle/libStyle.a : $(patsubst %.c,%.o,$(wildcard libStyle/*.c))
libWWW/libWWW.a     : $(patsubst %.c,%.o,$(wildcard libWWW/*.c))

viola/viola : $(patsubst %.c,%.o,$(wildcard viola/*.c)) \
	libIMG/libIMG.a \
	libXPA/src/libxpa.a \
	libStyle/libStyle.a \
	libWWW/libWWW.a
The rest of it is defining the compiler and linker flags (CC, CFLAGS, LDFLAGS, LDLIBS) and some other targets (clean, depend (one command line to generate the dependencies), install, etc). And this builds a program that is 150,000 lines of code. I can even do a make -j to do a parallel build. I’m not entirely sure where all this make hate comes from.
I’ve read the GNU make manual (some parts multiple times) and written 3 significant makefiles from scratch. One of them is here:
https://github.com/oilshell/oil/blob/master/Makefile (note that it includes .mk fragments)
It basically works, but I’m sure there are some bugs in the incremental and parallel builds. I have to make clean sometimes, and I’m not brave enough to do parallel builds. How would I track these bugs down? I have no idea. I tried, but I kept breaking other things and got no feedback about it.
In other words, it’s extraordinarily difficult to know whether your incremental build is correct, and whether your parallel build is correct. Make offers you no help there essentially.
There are a lot of other criticisms out there, but if you scroll down here you’ll see mine:
http://www.oilshell.org/blog/2017/05/31.html
(correctness, gcc -M, wrong defaults for .SECONDARY, etc.)
There is also a debugging incantation I use that I had to figure out with some hard experience. Basically I disable the builtin rules database and enable verbose mode.
Another criticism is that the builtin rules database can make builds significantly slower.
I’m not using Make for a simple problem, but most build problems are not simple! It is rare that you just want to build a few C files in a portable fashion. For that, it’s fine. But most systems these days are much more complex than that. Multiple languages and multiple OSes lead to an explosion in complexity, but the build system is the right place to handle those problems.
I somehow seem to miss these “complex builds that break Make.” I have a project that uses C, C++ and Lua in a single executable and make handled it fine (and that includes compiling the Lua code into Lua bytecode, then transforming that into a C file which is then compiled into an object file for final inclusion in the executable).
I don’t know. For as bad as make is made out to be, I’ve found the other supposed solutions to be worse.
Having PRIMARY and CLIPBOARD is a good thing and once you get used to it, it’s like having two clipboards.
Shame he never tells how to actually use them both. Afaict only the primary selection is usable with the default binds.
XTerm.VT100.translations: #override \n\
Ctrl Shift <Key>C: copy-selection(CLIPBOARD) \n\
Ctrl Shift <Key>V: insert-selection(CLIPBOARD)
Now if only I could get all the other software to support them both as well.
EDIT: Another tip. If you find the font sizes available in the menu to be ridiculous, they’re pretty easy to change.
XTerm*faceSize1: 8
XTerm*faceSize2: 10
XTerm*faceSize3: 13
XTerm*faceSize4: 16
XTerm*faceSize5: 20
XTerm*faceSize6: 26
faceSize1 corresponds to “Unreadable.”
Now would someone give me key binds to decrease/increase font size? :-)
You might want to read X Selections, Cut Buffers, and Kill Rings for how to use the PRIMARY and CLIPBOARD selections in X Windows.
It doesn’t, and can’t really explain how to use them because there is no way to use them in X. Instead, you have to use them in applications running under X and each application does its own thing. I still don’t know if there’s a way to copy to clipboard in xterm without creating a custom bind.
If by “X” you mean “the graphical interface that runs on Linux” then yes, it works, because that is X Windows.
Where did this X Windows meme even start?
Some lamer back in 1995 thinking it sounded cool and having it go viral on Usenet?
Where did this X Windows meme even start?
I don’t know. Probably people who think it’s the X-TREME version of Microsoft Windows.
I’m pretty sure “X Windows” is much older than that (as is MS Windows). I vaguely recall reading about “X Windows” in Byte magazine in 1993 or so.
The comp.windows.x newsgroup goes back to at least 1987 (https://groups.google.com/forum/message/raw?msg=comp.windows.x/TtNRIfTKqsw/i7hzWBiDfkgJ), a month after X11 was created. They even refer to it as “x-windows”.
Could it have been a different implementation? Cuz I remember doing the RTFM thing way back when, and it was very clear about not being “X Windows”, though didn’t specify why.
Sorry if this is explained in the link. Can’t be arsed with Google. Usenet used to come without opt-in spying.
Well, excuse me for using outdated terminology then. Would it have been better had I said “You might want to read X Selections, Cut Buffers, and Kill Rings for how to use the PRIMARY and CLIPBOARD selections in X”?
Now would someone give me key binds to decrease/increase font size? :-)
I have the following:
*VT100*translations: #override \
Meta <Key> minus: smaller-vt-font() \n\
Meta <Key> plus: larger-vt-font() \n\
Super <Key> minus: smaller-vt-font() \n\
Super <Key> plus: larger-vt-font() \n\
and either the Meta or Super key works as expected.
I’m playing around with libtls (per advice). I’ve already proved to myself that it can be used in an event-based server, and now I’m trying to get it integrated into our network flow, which in this case means writing a Lua wrapper for it. [1]
[1] There are two Lua modules for libtls that I’ve found, but neither one meets my criteria, namely using the callback mechanism to control the network. The changes are extensive enough that I find it easier to write my own version.
At work, we log everything via syslog(), which is fed into Splunk, which means we can search in pretty much real time across the entire system. For each message we log via syslog(), a unique tag is used, for example:
T0011: Can not open configuration file %s: %s
T0180: Significant clock skew detected (%llu uS)
The tag (the T0011 or T0180) is just allocated sequentially as needed (the T component has 246 defined log messages). Each component has a separate prefix (they are not restricted to a single letter—I wrote a component with the prefix NIEnnnn for example). This makes it easy to search for a particular log message or for messages from a particular component.
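As a sketch of why the unique tags pay off: once every message carries one, both kinds of search collapse to a single grep. (The sample lines and file handling below are invented for illustration; only the tag format comes from the description above.)

```shell
# Sample log lines in the tag format described above (contents are
# made up; real lines come out of syslog/Splunk).
log='T0011: Can not open configuration file /etc/nie.conf: No such file
T0180: Significant clock skew detected (1200 uS)
NIE0042: DNS timeout talking to upstream resolver'

# One specific message is a single fixed search on its tag:
printf '%s\n' "$log" | grep '^T0011:'

# Everything from one component is a search on its prefix:
printf '%s\n' "$log" | grep -E '^NIE[0-9]+:'
```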
The other thing we do is log key performance indicators (KPI). Receive a request? Log it. Experience an error? Log it. I took the idea from Etsy’s statsd, which makes it easy to log such events. For example, when a component I use experiences a DNS timeout, I run:
stat.incr("nie.dns.timeout")
which sends a message to statd to increment the counter “nie.dns.timeout”. At regular intervals, statd will output all the information via syslog() (which leads into Splunk). It was simple enough to write my own version of statd (the Etsy one does more than we wanted). It’s easy to just add new KPI entries (if statd receives a name it doesn’t have, it starts a counter for it) and I’ll add metrics just because.
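For reference, the wire format Etsy’s statsd uses is a tiny line protocol over UDP: “name:delta|c” for counters. A minimal sketch of what a stat.incr-style helper amounts to in shell (the host, port, and nc invocation are assumptions on my part; stock statsd listens on UDP 8125 by default):

```shell
# Build a statsd counter-increment line: "name:delta|c".
# Second argument (the delta) defaults to 1.
statsd_line() {
  printf '%s:%d|c' "$1" "${2:-1}"
}

# Fire-and-forget the line at a statsd daemon over UDP.
# Host/port are assumptions; adjust for your setup.
stat_incr() {
  statsd_line "$1" "$2" | nc -u -w0 127.0.0.1 8125
}

# Example: stat_incr "nie.dns.timeout"
```

Because it’s fire-and-forget UDP, a logging call can never block or fail the caller, which is exactly what you want from a metrics path.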
Given all that, we have debugged some pretty hairy situations, like bad network routing, memory leaks, certain crashes, and the utter garbage requests we can’t parse from our customers (who I would have expected to know better).
The other thing we do is log key performance indicators (KPI).
You might want to have a look at Prometheus with Grafana for that.
I’ve looked at Prometheus (another team at work uses it) and the thing I didn’t like about it was that it polls for data instead of receiving data. Yes, there is a way to feed data into Prometheus, but from what I’ve read, using it is strongly discouraged; instead, you let Prometheus poll for the data. In my case, this means embedding a web server into several components that don’t use HTTP (or even TCP for that matter; they all use UDP) for network transfers.
Am I the only one who sees irony in the author putting all this thought into a problem that could be summarized as people simply being bad at their job?
No. My insight is this: all of his anecdotal examples are examples of people poorly managing time due to poorly prioritizing their work. This fault has many causes, such as poor design goals, miscommunication, avoidant behavior, boredom, and so on. This is a fault everyone is aware of. The author has recognized this problem and decides to spend time formalizing it, making anecdotal stories, accompanying graphics, and a blog post, when that time could be spent on some aspect of enriching his life or self-improvement (unless he finds writing the post enriching, which is entirely fine.)
To the extent of the author’s examples, these workers sound bad enough to warrant labeling them as either incompetent or just lazy. In that case, we should call a spade a spade and force the worker to make a course correction, or replace them with someone better. That would be the simplest and most straightforward response to the problem, and I am simply pointing out the irony of the author’s response.
I read the article as an extended meditation on the Upton Sinclair quote, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
A story. Years ago, when I was in college, I did some consulting work at a company. My coworker was also a college student (same college, same department). One project was editing a printed manual to put up on an internal website. The conversion from Microsoft Word to HTML was trivial (Microsoft Word provided that much). But it did not link each vocabulary word to its definition in the glossary section. There were perhaps a hundred words (it was a specialized industry) and some 100 pages to edit.
My coworker wanted to dive right in and do the editing by hand. That was a lot of work. Days worth of mind-numbing work. I wanted to take some time to think about the task and how best to approach it. An argument ensued. We ended up using lex (since we had access to some Unix workstations with everything preloaded) to add the appropriate HTML links to the glossary page for each web page and were done in maybe two hours or so.
Was I smart in finding a quick way to reliably edit the 100 pages? Or was my coworker smart by trying to get X extra days of work even if it was drudgery? [1]
(Yes, I see the relationship of my story to the article, but I’m having a hard time articulating the connection I see. It’s similar to doing badly at ops—the ops who “save the day” with heroics in getting the system back up get the kudos, while the ops who set things up to run smoothly with almost no down-time get laid off because they appear to be doing nothing. Incentives matter.)
[1] And it wasn’t like lex was an unknown tool—my coworker was a grad student in the CS department and had written a compiler using both lex and yacc.
It depends on the point of view. If you were doing it because you honestly thought it was the best solution to the problem at hand, well then you were right, and hopefully your coworker learned a lesson and this isn’t an imaginary problem. If you were doing it for yourself, for your own knowledge, believing that the skills learned would pay off at a later date, well then that’s up to you to decide, but it’s not an imaginary problem.
The second scenario seems falsely conflated with the first. That is a problem, but a different one of management incorrectly valuing their employees.
I would say the act of spending the effort to connect these dots is an imaginary problem. A lot of these characteristics are human nature, which no amount of writing or philosophizing will fix (if the goal is to “fix” the problem).
I understand the meditation on the topic, perhaps in an attempt to clarify to himself a series of problems that his intuition tells him has some tangible connection, which is beneficial for one’s own peace of mind. And to be clear, I am not criticizing the post. I just thought the meta-connection between the post itself and its content was amusing. Contemplating the “metaness” of things is my personal imaginary problem pit.
Edit: A larger point I want to make is that these ARE very complex problems. Knowing which solution is going to be optimal is something that takes either lots of research or experience and intuition. Of course people will choose wrong occasionally and make the problem even worse. Hopefully they learn and make better choices next time. Trying to paper over this experiential process by making it a “problem” (in the first examples in the article) is foolish and can send a message that any mistakes made are self inflicted instead of being part of the process of self-development.
In your scenario, you saved time and money. Real problem.
If the task had instead been to change one web page’s header from h1 to h2 and you had to go out and buy a new sparcstation and compile Perl from scratch, etc., that’s imaginary problem territory.
Imaginary problems seem to be justified by “but what if?” What do the kids say? YAGNI. Perhaps the question can’t be fully resolved until afterwards, in hindsight, but we can at least keep track of which developers seem to correctly identify what ifs. I’d bet some people are prone to under building and some to over building. (And also never learning from those experiences, always erring in the same direction.)
I started doing this a few years ago when writing my blog and yes, it makes it way easier to edit. It took a while to get used to doing it, though.
Prelude: I worked myself into a state where I needed a two-hour break after about twenty minutes of keyboarding. Two keyboards with different layouts on different desks, changing between them every few minutes, a terrible project, and some ungood stress out of the office. But as luck would have it, that office was in a… well, not in a hospital, but on campus, so I got to see a real specialist quickly. A friend in his department just brought me along. He spoke cluefully and I’ve followed his advice in the decades since. In brief: “Pay attention to your body. Learn what hurts and stop doing that. Don’t let anyone at you with a knife.” It’s served me well. I admit to a certain arrogance about it.
A colleague and I bought and tested about 10-15 keyboards, including some very expensive specials, and the ones we ended up using weren’t the most expensive ones, which would probably get you into trouble with the bookkeepers if you were to try it. Arrogance helps.
I currently use an 88-key unlabelled WASD with dampening o-rings. Because it keeps me from moving my hands much while I type, and keeps me from looking at the keyboard, and I like the feeling of the keys. You should ignore the first sentence in this paragraph and focus on the second, because the brand name isn’t important, how your body reacts is vastly important. Does it feel okay? Then okay. Not? Then change.
The same applies to your chair and desk, because your body is one. The shoulders are tightly connected to your hands. (The chair comes first, btw. You get a chair for sitting on, then a desk that suits your body on that chair. http://rant.gulbrandsen.priv.no/arnt/ideal-office has more speechifying about chairs and stuff. I speechify too much.)
This reminds me that when I was younger, I spent a lot of time behind my computer sitting on a plain old stool and I remember it was much more comfortable than the more traditional big and heavy armchair I currently use at my daily job. I think not having a backrest forces me to keep my back in a straight and comfortable position.
I use synergy to share a single keyboard and mouse across computer systems. It works fine, and it can deal with Macs, Linux and Windows. You can even cut-n-paste across the systems as well (text only though).
Have they fixed the bugs I ran across when I tried that?
I haven’t encountered those issues, but the Mac is the server, and the Linux system is the client. I don’t know if that makes any difference.
Also, I think I started using it post 2011, so maybe those bugs don’t exist in the version I’m using.
A stressful job plays a larger role than you might think. I speak from experience. About 10 years ago, I had really bad pain typing - for months. I quit my job, moved somewhere else, got a new job and the pain went away. It still flares up from time to time if I overdo it, but goes away quickly.
Model M reporting in.
I’d like to find something new, but – and for reasons I haven’t investigated or discovered – even newer versions of the keyboard, where they claim to use the same switches, don’t feel the same. I wonder if it’s like leather shoes and I’ve simply developed a preference for the worn-in feeling.
The rest of the time, I use whatever keyboard is on my laptop (I have a MacBook Air 11” that is great for programming but aging, and a MacBook 12” with the shitty disgusting butterfly keyboard)
Another Model M user here—I use Model Ms on everything, including the Mac laptop at work (which rarely moves off the desk). It’s kind of funny to see the cable with two adapters, one to convert from DIN to PS/2, and then from PS/2 to USB.
I also have a stash of Model Ms at home that I’ve collected over the years, but frankly, the ones I use have yet to wear out, so I’m probably set for life.
(1) take advantage of modern hardware, (2) fit on a single floppy disk
So a floppy disk is modern hardware? wut?
Came here to comment on exactly that cognitive dissonance. Any mylar medium is antiquated.
Maybe CD-ROM? That’s pretty outdated, too.
SD card? Even they’re almost 20 years old, and the smallest spec still handles up to 2GB.
Don’t get me wrong – shave all the bits you want (though as others have pointed out, it sounds like he’d rather other people do the shaving for him). I just have a hard time, seeing that contrast, that the author has thought this through to any degree.
Fitting on a floppy disk is different from being distributed on a floppy disk (which is different from needing to be distributed on a floppy disk – a situation that a lot of hobby OSes with homebrewed bootloaders suffer from!)
Fitting on a floppy disk is a nice, solid benchmark for size – one that’s still used for Lua & other projects, even when those projects don’t get distributed that way. Right now, fitting on a floppy means being smaller than the average (popularity-weighted) web page, and also means fitting on any commonly-used and many uncommonly-used modern storage media. It also implies suitability (at least in size) for embedded contexts (which may make use of modern features while still having less RAM than you might naturally expect a microwave or elevator controller to ship with).
Fitting on a floppy disk is a nice, solid benchmark for size
How big is a floppy disk? 1.44MB, right? Or 800kB, depending on the era. Or maybe 113.75kB, the capacity of a one-sided 5 1/4” disk under Apple DOS?
No? It’s 1.722MB? Where the hell did that come from?
It is not a solid benchmark. It’s an arbitrary one.
I listed a few other, more recent media one could benchmark against, instead. Even those are arbitrary – SD cards range from 1MB to 2GB. CD ROMs vary from 600ish to 900ish MB (much narrower than the SD range, but still variable).
Basically, if you have a size in mind, there’s a physical medium that you can claim to be the benchmark.
make use of modern features while still having less RAM
On-disk size and RAM usage are, at best, loosely correlated. Plenty of small programs will intentionally use lots of RAM, and there are large programs which need minimal RAM. There are other ways to provide a standard for embedded devices.
1.722MB is Amiga. The Amiga hardware read an entire track at a time, so sectors were done purely in software. This meant you could use a bit more of the disk than on IBMs.
I do recall using Tom’s Root Boot Disk back in the day. It was enough of a usable Linux system to use for rescue work. I’ve even used it to install Linux on a seriously constrained system.
How big is a floppy disk?
Generally speaking, when people say “it fits on a floppy disk” they’re talking about a 1.44MB diskette. (It’s a commonly-understood claim, and one that fits with the era in which it first became popular, which is to say the late 90s.)
If you want to go out of your way to bother somebody, you can rules-lawyer them about single-density 5.25 inch floppies or the weird micro-diskettes that the Zenith MinisPort used or other variations. It just requires ignoring a general understanding in favor of being technically correct in the most irritating way possible.
Making a full-featured OS that fits in two megs is challenging enough to be interesting but hardly impossible. It’s also still desirable. The way that such a project would normally be described is “fits on a floppy disk” – even if it doesn’t quite. That description is still understood, even if a lot of us haven’t had a machine with a floppy drive in five or ten years.
I listed a few other, more recent media one could benchmark against, instead. Even those are arbitrary – SD cards range from 1MB to 2GB. CD ROMs vary from 600ish to 900ish MB (much narrower than the SD range, but still variable).
Making an OS fit in 600MB isn’t really much of a challenge. You can get a modern linux distro on a business card CD and still load it with shovelware – no coding required.
On-disk size and RAM usage are, at best, loosely correlated. Plenty of small programs will intentionally use lots of RAM, and there are large programs which need minimal RAM.
Sure. But, a monolithic kernel without modules generally gets its entire bulk dropped into memory. If you make the object code smaller (without demoscene-style compression techniques), the object code’s in-memory footprint will roughly match what’s on disk. That gives you more space for userland to play with, even on constrained systems.
It seems like an arbitrary benchmark for size tbh. I would have gone for something like “readable from flash in a time shorter than humans can perceive”.
“Fits on a floppy” is the kind of description I heard a lot, when I was in the hobby OS community. Granted, that was about ten years ago, but floppy drives had already been basically totally supplanted by multi-gigabyte flash already by that point. I think people still gravitate toward that benchmark as ‘natural’ in a way that they don’t toward “1.5 megs” or “2 megs”.
(Plus, has tech REALLY moved that much in the past 10 years? Sure doesn’t feel like it. We’re doing the same stuff only slower.)
Maybe CD-ROM? That’s pretty outdated, too.
I still use those heavily. I’ve even had some boxes that still performed well but didn’t boot from USB. Burning a CD is still easier than network boot. Just pop that sucker in there, do something else while it slowly loads, and begin working with the dialogs.
I use them, too. And they’re more durable than magnetic media, which is why I don’t have any Apple II floppies any more, but I still have a FreeBSD 4.0 CD-ROM set.
But they’re still a bit outdated.
And it should be repeated over and over until pointy-haired management stops with the open plan office abuse, and people start demanding reasonable working conditions en masse.
Things won’t change until a very successful company or startup says their success was because of their not-open office plans. PHBs follow what the big, successful companies do.
We already have that. Microsoft always gave their employees offices. And they are somewhat successful. Yet no other company ever followed their lead. Go figure.
TBH if I had to tackle one of the management issues today, I’d choose overtime instead of open offices…
Yeah, is anyone ever arguing for those things? I don’t mind the one at my office, but we’re also a very small office with an average of six employees in it. I might like it better if we were even more isolated, but except for my coworkers’ typing, I hardly ever hear anything at all.
It’s not really a big problem until you’re surrounded by people who work on unrelated stuff who like to have loud conversations.
Yes, this is exactly the problem – not the open floor plan itself, but an open floor plan with a lack of strategic desk placement.
Fun facts:
- INC esp (increments the stack pointer by one).
- POP [ptr] (pop a dword from the stack and move it to the specified address in memory) updates esp before resolving ptr (which can contain esp). In some sense it mixes decoding of the instruction with execution.
- LEA reg, [ptr] computes ptr and stores it in a register without referencing memory. Pointer arithmetic is very flexible in x86: it can add a base register, a shifted index register, and a constant offset. For example, it can encode a = b + c - 1 in a single instruction. LEA is also the only instruction that fails when encoded with a register operand (because LEA reg1, reg2 is invalid) but never fails when encoded with a memory operand.
- OP reg1, reg2 has two possible encodings for most instructions (the direction bit lets either operand be the ModRM destination).
- XCHG eax, r is encoded as 0x90+r, so NOP (0x90) is XCHG eax, eax.

NOP hasn’t been XCHG EAX,EAX since the 80386. It would cause an unnecessary dependency on EAX, so it’s been special-cased for quite a long time.
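A minimal sketch of the address arithmetic LEA can fold into one instruction (base + index*scale + displacement, truncated to 32 bits); the function name and the example values are illustrative, not anything from the thread:

```python
def lea(base, index, scale=1, disp=0):
    """Mimic x86 LEA arithmetic: base + index*scale + disp, mod 2**32.

    The hardware restricts scale to 1, 2, 4, or 8 (a 0-3 bit shift
    of the index register).
    """
    assert scale in (1, 2, 4, 8)
    return (base + index * scale + disp) & 0xFFFFFFFF

# a = b + c - 1, as in `lea eax, [ebx + ecx - 1]`
print(lea(10, 5, 1, -1))  # 14
```

The -1 rides along as the constant displacement, which is why the whole expression fits in one instruction.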
A much better example, IMO, are these Markov-generated Tumblr posts, trained on Puppet documentation and a collection of H.P. Lovecraft stories.
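Mashups like that fall out of a plain word-level Markov chain over the combined corpora. A minimal sketch (the toy corpus, order, and function names here are made up for illustration):

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each `order`-word prefix to the list of words that follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=15, seed=0):
    """Walk the chain from a random prefix, emitting up to `length` words."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain)))
    while len(out) < length:
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this prefix only appears at the corpus tail
        out.append(rng.choice(followers))
    return " ".join(out)

# Train on two styles at once to get the mashup effect.
corpus = "the old gods stir in the puppet manifest and the old gods apply".split()
print(generate(build_chain(corpus)))
```

Because both source texts feed the same prefix table, the walk jumps between registers whenever the two corpora share a prefix – which is exactly where the comedy comes from.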
Poetic.
And this one looks like one of those quotes that becomes historical even though almost no one who uses it knows what it means:
I like King James Programming. Example: Exercise 3.63 addresses why we want a local variable rather than a simple map as in the days of Herod the king
Truth.