1.  

I use this website primarily to bring my projects to the attention of a wider audience. I find that interesting things requiring more than cursory knowledge of a topic tend to be overlooked here. As an example, if you look through my submitted stories, all of which I’ve authored, you’ll find that the submission with the most Internet points, by a wide margin, is not my machine code development tool, the disassemblies made with this tool, nor my Common Lisp library submission, but a book recommendation, because who can’t understand a book recommendation?

I find, in general, that websites which do nothing but collect links and the discussions concerning them are rotten. For one, they lack whatever special quality it is that permits topics to continue for long periods of time; this thread will last a few days at most, in all likelihood. A voting system, even one where a reason must be provided, still has issues, but these issues are rather well known, I suppose; I’ve never given an upvote or a downvote here.

So, aside from that, I occasionally scan the pages and, if I find an article I’ve read and disagree with, I tear into it with a comprehensive critique. This usually goes rather well in collecting Internet points or starting subthreads with varied discussion, but occasionally I find that you simply get negative Internet points because people disagree with you and so label you “troll” or “incorrect”. Still, I have my own ideas on programming and whatnot, and you can view my posting history to see no lack of letting people know when I believe something is fundamentally misguided or stupid. There is no lack of dogma or ritualism here, and I don’t intend to “go with the flow”, which is also what I’m doing with this comment.

So, to conclude, I use this website to get a wider audience for my projects and articles and whatnot, but I’ve had much better experiences with other websites, such as anonymous imageboards, where threads last months or years, it’s not inconvenient to see which posts are new, and, as an acquaintance put it, people tend to have high “trait openness”, in contrast to a website such as this, where people tend to have high “trait agreeableness”. Still, I keep my account because this website is invite-only and I may as well.

    1.  

Promotion is a big draw for this type of site. Before I was a member, I had a piece linked from here that stayed on the front page for more than a day and drew considerable traffic. It’s a reliable way to get clicks for content marketing efforts, not that I personally do that.

    1. 2

It would seem odd to me for UNIX concepts to co-opt a “shell” tag. Are Windows PowerShell or DOS’s various shells no longer “true shells”? There are also the shells from Multics, VMS, Genera, Oberon, and many others.

      If you want to mean only UNIX shells, then what’s wrong with the UNIX tag that already exists?

      1. 1

My struggle with unix is that I’m not entirely sure it’s the right place for projects like prezto, oh-my-zsh, and zsh-utils… as well as more in-depth shell scripting. I see your point… maybe “shell” isn’t the right tag, but it seems like a decent option and I don’t have any other alternatives.

        1. 1

‘windows’ + ‘cli’ vs ‘unix’ + ‘cli’ is a sufficient distinction, isn’t it?

          1. 1

            Yeah, I suppose that does make sense

      1. 0

I was only vaguely interested in reading this article to start with, mostly to see how git was visualized since I’m not familiar with it, but the page tells me this “visual” guide requires JavaScript to display the visuals, and so it’s not very visual for me.

Aside from this unnecessary JavaScript, however, it seems to be a fairly simple page, so I wonder why it isn’t simply static, with, say, VIDEO tags or GIFs or whatnot. The only thing I notice is that the visuals apparently morph as you go along, but that alone isn’t worth JavaScript, especially if the same effect could be achieved with CSS or whatnot.

        Lastly, the end of the page has Google Analytics spyware, so that’s more than enough reason to not let the domain run JavaScript, if you were even considering it. I vehemently dislike when I come across what should be a static page, ofttimes with rather simple CSS, and yet it has this malware embedded in it, sometimes as the only JavaScript whatsoever. It’s disgusting.

        1. 1

To each their own, I guess. I’m not freaked out by modern technology and new ways of presenting information. I thought it was a very interesting way to teach a fairly dry topic, especially one I’m not too keen on but feel like I have to know better.

        1. 4

          All microservices are is a rebranding of Object Orientation, which is based around objects sending messages, not unlike, say, the Internet.

If you don’t have a program that benefits from entirely independent, communicating pieces, or one that can be cleanly decomposed from a monolith into such pieces, it doesn’t make good sense to use the model.

Part of this is probably underestimating just how fast a modern machine is; perhaps people are mistakenly more inclined to adopt “microservices” than to rewrite part of the program more efficiently. I’d be very skeptical of moving a program off a single machine, considering you could optimize it for quite a while before doing so actually becomes reasonable, unless, of course, you again have a problem that very clearly benefits from the Object Orientation model.

          1. 2

            All microservices are is a rebranding of Object Orientation

Sadly, you do not get to scale your different classes independently or write them in different languages…

To be clear, I’m more in favor of having a monolith that is so boring to operate that it’s easy to extract the few parts that make sense to extract. For example, GitLab is a giant Rails app, and they decided to extract everything Git-related (now a Go service called Gitaly, IIRC); that makes a lot of sense to me.

          1. 5

I also agree with this notion. A website ostensibly concerned with programming isn’t much harmed by missing out on abstract product placement.

For that matter, while this is an interesting article, do we want people posting such an article every time a journalist writes one? Perhaps, to help counteract such things, there could be, say, a heuristic that favors submissions authored by someone here, beyond obvious cases such as the meta tag; of course, I’m not aware whether such a heuristic is already in place.

            1. -2

              Nearly 300 people worked together, together writing over 9,000 commits and almost 100,000 lines of code, to bring you this release.

Is that intended to be impressive? Is there any good reason for a window manager and a set of related tools to be almost one hundred thousand lines, worse still since it sits on top of POSIX and Wayland?

              1. 11

                Wayland is just a protocol, and its implementation is quite small - and only handles client-server-communication-related problems.

Those hundred thousand lines of code include what basically amounts to reimplementing the X server from scratch. We wrote almost all of the userspace graphics, rendering, input, and window management code. Even then, it’s a fifth the size of what it replaces: xorg-server+i3.

                1. 1

Have you (or anyone else) done any tests regarding power usage? Does this increase or decrease a laptop’s battery life, for example?

                  1. 2

We haven’t done any objective measurements, but anecdotally, battery life is improved for me on sway, and many other users report the same. It would make sense; we have optimizations in place specifically designed to be power-efficient. Also check out swayidle for doing things like dimming or suspending after inactivity.

                2. 7

                  Half of it is their Wayland implementation, it seems.

                  https://github.com/swaywm/wlroots

                  So, yeah, I’m pretty impressed. It’s a full-featured and cross-platform application that’s meant to be treated as though it were infrastructure.

                1. -2

                  It’s truly amazing how utterly dysfunctional and poorly designed the WWW is.

HTTP itself is a wasteful protocol that leaks unnecessary details like a sieve, and the misspelling of “referrer” is an excellent example of how it was poor from the start. HTML is a gimped XML, where semantics go to die unless you’re using only the trivial tags that are predefined; most WWW pages, this one included, are DIV soup. It’s amazing how you can take XML, which wasn’t great to start with, remove its main good qualities, and then later add back a far worse and more wasteful version of some of it.

CSS is accidentally Turing-complete, unnecessarily complicated, and unintuitive, with poor defaults, along with being capable of fine-grained spying because it can make network requests. Lastly, JavaScript was actually designed in a week, is clearly not designed for efficiency, and has so much access to WWW pages and the browser that it’s no wonder Google enjoys an efficient implementation to more effectively spy on people by determining window size, fonts, plugins, and all manner of other nonsense. There was a flaw not too long back where JavaScript could scan the bitmap of the display if it was in an IFRAME tag; Google puts an infinite loop in its JSON so it can’t be taken by unauthorized programs in that way.

                  I can sit in comfort, knowing I’ll never design something so disgusting and malformed as the WWW.

                  1. 5

                    All this and more in tonight’s episode of “Massively Successful Phenomenon Is Flawed In Several Ways”!

                    1. 4

                      Eh… HTML predates XML by several years.

                      1. 3

                        30 years ago today, Tim Berners-Lee submitted a memo for what became the web. The challenge worth feeling satisfied for is not to recognize historical mistakes with the benefit of three decades of hindsight and billions of users. It’s to repair that medium, or to replace it. Making zero mistakes along the way is the stuff of fantasy; the realistic dream is to make fewer, smaller, or at least new mistakes on top of the incredible success of the web.

                        1. 2

                          I can sit in comfort, knowing I’ll never design something so disgusting and malformed as the WWW.

                          And we all are very glad of that!

                        1. 1

                          I briefly considered writing a concise program to transform a notation into HTML, but I didn’t know enough HTML and so abandoned the idea permanently.

                          My website is here: http://verisimilitudes.net/

I simply write it all by hand. This has, unfortunately, led to my becoming more and more familiar with HTML and its various deficiencies, but it has the advantage that my website looks fairly unique even with minimal formatting, as I learn how to compose any new effect I want to accomplish and then build it in my own way, often reusing previous effects from other pages. The CSS is inlined with each page because I observed that at least one tool that prevents CSS from being loaded does nothing to inlined CSS, and I found that a worthwhile reason to avoid including it from elsewhere.

All of the SVGs are also written by hand, but soon enough I’ll be using a program of mine to process and transform them from my own format. I simply need a tool for writing that other format first.

                          Lastly, I include an HTML presentation of the output of one of my programs. I transform it manually, but writing the articles has made me reconsider having tooling that generates formats, such as HTML and TeX, to spare me from doing it.

                          An advantage of this style of website is all I need do to set it up is dump the files into the correct directory for the HTTP server.

                          1. 1

                            This was an interesting article concerning a topic I wasn’t well-versed in, as I don’t use Java. I liked the graph showing the differences and the article was easy enough to read with how you have it formatted.

My main suggestion is to avoid using “meme” images in your articles, as such an image communicates little, is something you don’t own the copyright to, and isn’t particularly amusing to start with. Omitting the “Simpsons” image would’ve made for a better article, but this is the main critique I have.

                            1. 1

I agree - it doesn’t really add anything, and it gives what is otherwise a good article a more jokey feel.

                            1. 0

I vehemently disagree with this. Once one acknowledges the presence of arbitrary and ritualistic practices in computing, one starts to notice them everywhere. This eighty-column “standard” is one such tribal fire people will dance around.

                              This article makes no note whatsoever of viewing multiple files stacked vertically rather than side-by-side horizontally. My Emacs on a portrait-oriented 1080p monitor can display over one hundred columns just fine and I can easily view multiple files stacked vertically if I need to. On a landscape-oriented screen, I’m never really inconvenienced by horizontal viewing, anyway.

What incenses me about the eighty-column convention is how arbitrary it is. What I do is use one hundred columns, as that’s a nice base-ten number, and for some purposes, such as Lisp programming, I’ll use one hundred and fifty columns. I don’t use two hundred columns because that doesn’t fit well on any of my screens, and I’ve also never had a need for it.

In any case, one hundred columns is where I set my fill-column when composing email, normal text documents, and other such things. I’m also not going to sit happy with ragged-right normal text, such as in documentation, when I can easily justify it and thereby permit it to be read nicely by others.

In closing, my point is that you should always be looking at these “standards” and other nonsense and mulling over whether you agree with them or not. If you don’t, I strongly encourage any of you to choose your own way, rather than following “the way things have always been done”. Anything else is intellectual death, and computing as a field already has too many idiots who see fit to enforce their own opinions and then hide behind a crowd that agrees, burning heretics whenever they catch them.

Next, consider character sets and other “natural ways to do things”. You may be surprised by something you come up with and may then even find it superior.

                              1. 2

                                and for some purposes

                                The number one takeaway I would love to see spread wider is that different kinds of code need different rules.

For HTML, I have extremely short lines - no more than three syntactic elements to a line. Having to scroll up/down to find other parts of the code isn’t a problem in HTML, since related DOM has to be written together. This approach (e.g., each attribute on its own line) makes diffs very easy to read.

                                When working with mutable state, vertical scrolling costs much more attention, so fitting more onto the screen at once becomes more important.

                              1. 1

This was a nice read. CHIP-8 suffers from a chronic lack of development tools and whatnot, but it has the nice advantage that one can write them oneself; that’s what I’ve been doing lately as well, and why this caught my attention. The only practical, modern, and easy-to-use CHIP-8 implementation I was familiar with was Octo, but I’m glad I’ll be able to use this one from now on, once I inspect it.

                                1. 13

                                  I may as well join in.

                                  I’ve had a light conversation with SirCmpwn before and he doesn’t care for macros either, which I find foolhardy, but I’ll focus on just this article.

                                  The inertia of “what I’m used to” comes to a violent stop when they try to use Go. People affected by this frustration interpret it as a problem with Go, that Go is missing some crucial feature - such as generics. But this lack of features is itself a feature, not a bug.

I use a number of wildly different languages, including Common Lisp, APL, and, most recently, Ada; each of these languages lacks things the others have, but each is also vastly more suited to certain tasks than the rest. I’ve never used Go. Unlike these three languages I’ve mentioned, which have perfectly good reasons for lacking whatever it is they lack, Go very often has poor reasons, or perhaps even no reasons, although I don’t skulk around the mailing lists or whatnot.

For a good example, take a look at this; it’s my understanding that Go lacked a proper mechanism for determining time, and many people critiqued this, but daddy Google didn’t care until someone important was hit by it. This is a good example of the problems caused by a language that is not only uncustomizable by its users, but is designed by people who don’t care and won’t care. Unless you’re someone important, Google doesn’t care what you think, and the language certainly doesn’t, considering it is designed at every point to take away programmer choice.

                                  Go strikes me as one of the most conservative programming languages available today. It’s small and simple, and every detail is carefully thought out. There are very few dusty corners of Go - in large part because Go has fewer corners in general than most programming languages.

This isn’t equivalent to a language being good to write programs in. Ofttimes, a lack of edge cases in the world of the language doesn’t correspond to a lack of edge cases in real use. Take a look at Ada for a counterexample; a rule may not have a nice technical explanation, but the corresponding real-world explanation is very simple, because the rule is usually there to prevent some manner of error.

                                  I feel that this applies to generics. In my opinion, generics are an imperfect solution to an unsolved problem in computer science.

Dynamic typing as in Lisp is one solution. Ada has a nice generic system, but again, Ada was designed not for theoretical prettiness but to actually make large systems easier to write without flaws. Generics were of course there, because otherwise you get people copying and pasting code, which makes maintenance and everything else harder, since you can’t easily or quickly tell whether one of the copies is wrong or has otherwise diverged.
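As a sketch of the dynamic-typing approach mentioned above (Python here as a stand-in for Lisp; the function name `smallest` is my own, hypothetical choice):

```python
# One definition serves every type supporting "<" - no per-type
# copies to fall out of sync, at the cost of runtime type errors.
def smallest(items):
    it = iter(items)
    result = next(it)
    for item in it:
        if item < result:
            result = item
    return result

print(smallest([3, 1, 2]))          # 1
print(smallest([2.5, 0.5]))         # 0.5
print(smallest(["pear", "apple"]))  # apple
```

A statically typed language without generics would need either one copy of this per type or a dynamically checked escape hatch; generics recover the single definition while keeping static checks.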

                                  I used to sneer at the Go maintainers alongside everyone else whenever they’d punt on generics. With so many people pining after it, why haven’t they seen sense yet? How can they know better than all of these people?

Have you ever considered that these people don’t know better than anyone else? Have you considered that Go is just an extension of the UNIX and C religion, and that people like Rob Pike are just playing their part as priests over scared people who don’t know any better and want a panacea and a movement to join?

                                  I don’t think programming languages should compete with each other in an attempt to become the perfect solution to every problem. This is impossible, and attempts will just create a messy kitchen sink that solves every problem poorly.

I’d prefer to think that’s common sense. APL and its family are the clear choice for array problems but will fall flat against many other types of problems. What is Go actually good for? I find that poor languages, typically ALGOL clones, tend to differentiate themselves by purpose rather than by anything intrinsic. You see this with Perl being a “scripting” language, Ruby being for “web services”, Python being “glue code”, and, what, Go being for “scalable programs with a focus on internet-connected services”? The key detail to observe is that these languages are all rather the same and, utterly lacking originality, attempt to dominate a particular usage, because that’s the only way they can really be differentiated.

                                  If you disagree with this, compare Perl to PHP to Go to Python and compare those differences to those between comparing Common Lisp to Forth to APL to Ada.

                                  If you’re fighting Go’s lack of generics trying to do something Your Way, you might want to step back and consider a solution to the problem which embraces the limitations of Go instead. Often when I do this the new solution is a much better design.

I felt something similar when I was writing an Ada program and, wanting to use the package system properly, was forced to structure my program in a different, albeit natural and better, way. Tell me, is there a document that lists all of Go’s design decisions and why they were taken, or am I only going to find the typical UNIX and C response of “We know better. It’s better this way. Don’t consider other ways. Our way is the one true way.”?

                                  So it’s my hope that Go will hold out until the right solution presents itself, and it hasn’t yet. Rushing into it to appease the unwashed masses is a bad idea.

Go was designed for the “unwashed masses”, by which I mean those “not capable of understanding a brilliant language, but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt”, straight from Rob Pike’s mouth. Go is designed to use programmers as unintelligent implementing machines, which is why it’s so opinionated. Its opinions have little or nothing to do with good programs and apparently everything to do with limiting the damage any single fool they want to use can cause or, worse, preventing a new hire who isn’t a fool from writing a good program that makes him more valuable than his peers and harder to fire. There are no macros in Go, only the same thing, everywhere, no matter how poorly suited it is to the problem. If everyone writes Go the same, it’s easy to fire and interchange employees without friction.

I could keep going on about how Go is just a continuation of UNIX, C, and so also Plan 9, UTF-8, and whatever else those malign idiots create, but I believe this gets the point across well enough. The only “philosophy” these things espouse is that the computer isn’t a tool for leveraging the mind.

                                  1. 10

“We know better. It’s better this way. Don’t consider other ways. Our way is the one true way.”

                                    Hilariously, a pro-Go commenter just said to me that Go is an anti-“we know better” language.

                                    Go is just an extension of the UNIX and C religion

And yet it goes against everything in the actual modern Unix world. Go likes static linking (because of Linux distros), has custom syscall wrappers, a custom assembler (!), a custom calling convention, and a weird stack setup… As a result, calling non-Go code requires either overhead (cgo) or ridiculous hacks (c2goasm), LD_PRELOAD hooks don’t work, and porting the main official Go implementation to a new OS/CPU-arch combo is utter hell.

                                    1. 9

Go being for “scalable programs with a focus on internet-connected services”?

Two comments: the two wins Go has over other languages are

• (1) build/link - its build system is fast, and it produces reasonably small static binaries, suitable for deploying into containers. This requires a fair bit of fiddling in other languages, with fairly large binaries as the outcome. Not infeasible, but certainly more than plug and play.

• (2) it aligns with the sensibilities of Python and Ruby programmers in general, but in a typed manner, so you get improved maintainability with fairly simple semantics.

I’m not a Go fan, but these are the key good things about Go.

I’d rather write in something like Haskell or Common Lisp, but c’est la vie…

                                      1. 9

                                        I had you until the last paragraph. What the heck do you find bad in UTF-8?

                                        1. 1

                                          I don’t want that to turn into its own discussion, but I have reasons aplenty and I’ll list all those that currently come to mind.

Firstly, I have my own thoughts about machine text. I find the goal of Unicode, being able to have all languages in one character set, to be fundamentally misguided. It’s similar to the general UNIX attitude: “Should we have the ability to support multiple standards and have rich facilities for doing so transparently? No, we should adopt a single, universal standard and solve the problem that way. The universal standard is the one true way and you’re holding back progress if you disagree!”

Operating systems can support multiple newline conventions, as VMS did, and it would be trivial to have a format for incorporating multiple character sets into a single document without issue, but that’s not what is done. Instead, Unicode is forced on everyone, and there are multiple Unicode encodings. Unicode is also filled with dead languages, emojis, and graphics-building characters, the latter being there, I think in part, because GUIs under UNIX are so poor that turning the character set into the GUI toolkit is such an easy “solution”. I’m fully aware the other likely reasoning is to encompass graphics from other character sets, however.

UTF-8 is a large, variable-length character set whose parsing can fail, which I find unacceptable. It’s backwards compatible with ASCII, which I also dislike, but at least ASCII has the advantage of being small. UTF-8 takes pains to avoid containing the zeroth character, so as to avoid offending C’s delicate sensibilities, since C is similarly designed to accommodate nothing and to expect everything to accommodate it instead. It is as if Ken Thompson thought: “I haven’t done enough damage.”

UTF-8 disadvantages other languages, such as Japanese and Chinese (This isn’t even mentioning the Eastern character controversy.), by being larger than a single-minded encoding, leading several such peoples to prefer their own custom encodings anyway. You can only add UTF-8 support to a program transparently in trivial cases; anything more, such as a text editor, will break in subtle ways.
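The size claim is easy to check; a quick sketch using Python’s codec names:

```python
# Byte counts for the same character under different encodings.
for ch in ["a", "é", "日"]:
    print(ch, len(ch.encode("utf-8")), "bytes in UTF-8")

assert len("a".encode("utf-8")) == 1       # ASCII stays one byte
assert len("日".encode("utf-8")) == 3      # many CJK characters take three bytes...
assert len("日".encode("shift_jis")) == 2  # ...but two in a dedicated Japanese encoding
```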

There’s also the fact that Unicode makes the distinction between characters, graphemes, and other such things that turn a simple problem into an unmanageable one. I use Common Lisp implementations that support Unicode characters but don’t actually support Unicode, because there are so many combining characters and other such things that have no meaning to Common Lisp and so can’t be implemented “correctly”, as they would violate the semantics of the language.

                                          There are other reasons I can list, but this is sufficient.

                                          1. 10

                                            multiple standards and have rich facilities for doing so transparently

                                            Well, looks like getting everyone to agree on a way of selecting encodings turned out to be way harder than getting everyone to agree on one encoding :)

                                            And sure — we have Content-Type: what/ever;charset=MyAwesomeEncoding on the web, we can have file formats with specified encodings inside, but there’s nothing you can do about something as fundamental as plain text files. You could never get everyone to agree to use something like extended FS attributes for this, and to make it work when moving a file across filesystems… that’s just not happening.

                                            format for incorporating multiple character sets into a single document without issue

                                            Again, some format that software has to agree on. Plain, zero-metadata text fields and files are a thing that’s not going away, as much as you’d like it to.

                                            UTF-8 disadvantages other languages, such as Japanese and Chinese

They often include ASCII pieces like HTML tags, brand names, and whatnot; you should use an actual compressor if you care about size so much; and every character in these languages conveys more information than a Latin/Greek/Cyrillic/etc. character anyway.

                                            1. 7

It seems like you don’t actually know what UTF-8 is. UTF-8 is not Unicode. Rob Pike did not design Unicode, and had nothing really to do with Unicode. Those guys designed UTF-8, which is an encoding for Unicode, and it’s an encoding with many wonderful properties.

                                              One of those properties is backwards compatibility. It’s compatible with ASCII. You ‘dislike’ this, apparently. Why? It’s one of the most important features of UTF-8! It’s why UTF-8 has been adopted into network protocols and operating systems seamlessly and UTF-16 hasn’t.

UTF-8 doesn’t ‘disadvantage’ other languages either. It doesn’t ‘disadvantage’ Japanese or Chinese at all. Most web pages with Japanese and Chinese text are smaller in UTF-8 than in UTF-16, despite the actual Japanese and Chinese text taking up 3 bytes per character instead of 2, because all the other bytes (metadata, tags, etc.) are smaller.
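This effect can be demonstrated on a toy page (the markup here is my own, made-up example):

```python
# A toy "page": Japanese text wrapped in ASCII markup.
page = "<html><head><title>例</title></head><body><p>日本語のテキスト</p></body></html>"
utf8_size = len(page.encode("utf-8"))
utf16_size = len(page.encode("utf-16-le"))
print(utf8_size, utf16_size)
# The one-byte ASCII markup dominates, so the whole page is smaller in UTF-8,
# even though each CJK character costs 3 bytes instead of 2.
assert utf8_size < utf16_size
```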

The fact is that anyone who says that Unicode ‘makes the distinction between characters, graphemes, and other such things that turn a simple problem into an unmanageable one’ doesn’t know what they’re talking about. Unicode did not create those problems; Unicode simply represents them. The problems exist regardless of the encoding. Code units, code points, characters, graphemes… they’re all inherently different things.

                                              Unicode does not have any GUI characters.

                                              1. 2

                                                Could you maybe elaborate the following quote?

                                                UTF-8 disadvantages other languages, such as Japanese and Chinese (This isn’t even mentioning the Eastern character controversy.)

                                                1. 6

                                                  I reckon it refers to the controversial Han unification, which was in China’s favour.

                                                2. 1

                                                  It’s similar to the general UNIX attitude: “Should we have the ability to support multiple standards and have rich facilities for doing so transparently? No, we should adopt a single, universal standard and solve the problem that way. The universal standard is the one true way and you’re holding back progress if you disagree!”

                                                  What precisely does UNIX force you into? Are you sure this isn’t the LISP attitude as well? For example, Lispers usually glare sternly over the interwebs if you dare to use anything but EMACS and SLIME.

                                                  Operating systems can support multiple newline conventions, as VMS did, and it would be trivial to have a format for incorporating multiple character sets into a single document without issue, but that’s not what is done.

                                                  You’re conflating multiple newline conventions within a single character encoding with newline conventions across multiple character encodings. You say that it would be trivial to have multiple character sets in a single document, but you clearly have not tried your hand at the problem, or you would know that to be false.

                                                  Give me twenty individual byte sequences that are each ‘invalid’ in twenty different character encodings, and then give me 200 individual byte sequences that are each ‘invalid’ in 200 different character encodings. Without that, there is ambiguity about how to interpret the text and which encoding is in use.
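                                                  The ambiguity is easy to reproduce; a minimal sketch:

```python
# The same byte is a perfectly valid character in many legacy single-byte
# encodings, so text with no declared encoding is genuinely ambiguous.
data = b"\xe9"
assert data.decode("latin-1") == "é"  # Western European reading
assert data.decode("cp1251") == "й"   # Cyrillic reading

# UTF-8, by contrast, rejects the stray byte outright: 0xE9 is a lead byte
# that must be followed by two continuation bytes.
try:
    data.decode("utf-8")
except UnicodeDecodeError:
    pass
else:
    raise AssertionError("a lone 0xE9 should not be valid UTF-8")
```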

                                                  This problem can be seen in the effort to revamp the c2 wiki. Reworking it has stalled because there are around 150 files in multiple different character encodings, and they cannot be identified, separated, and unified by machine.

                                                  Unicode is also filled with dead languages, […]

                                                  Right, because Unicode is supposed to be a superset of all encodings. The fact it supports languages that are not used anymore is a feature, not a bug. It is important to people working in linguistics (you know, that field outside of computer science…) that any computer encoding format has a method of displaying the text that they are working with. This is important to language archival efforts.

                                                  UTF-8 disadvantages other languages, such as Japanese and Chinese (This isn’t even mentioning the Eastern character controversy.)

                                                  This is outright false, but someone else has already mentioned that.

                                                  I use Common Lisp implementations that support Unicode characters, but don’t actually support Unicode, because there are so many combining characters and other such things that have no meaning to Common Lisp and so can’t be implemented “correctly”, as they would violate the semantics of the language.

                                                  Unicode allows language implementations to disallow some sets of characters for ‘security’ reasons: http://www.unicode.org/reports/tr31/

                                                  This entire rant reminded me of Steve Yegge’s post “Lisp is not an acceptable Lisp”:

                                                  But what’s wrong with Common Lisp? Do I really need to say it? Every single non-standard extension, everything not in the spec, is “wrong” with Common Lisp. This includes any support for threads, filesystem access, processes and IPC, operating system interoperability, a GUI, Unicode, and the long list of other features missing from the latest hyperspec.

                                                  Effectively, everything that can’t be solved from within Lisp is a target. Lisp is really powerful, sure, but some features can only be effective if they’re handled by the implementation.

                                              2. -4

                                                I could keep going on about how Go is just a continuation of UNIX, C, and so also Plan9, UTF-8, and whatever else those malign idiots create

                                                The difference is that Plan9, UNIX, C, UTF-8 and Go are all absolutely wonderful and have all had far more positive effect on the world than anything you will ever create. That’s not because they got lucky, it’s because they were designed by people that actually understand what makes things successful.

                                                1. -5

                                                  He’s just a butthurt Lisper who’s mad his elegant, beautiful language is ignored by people who actually get stuff done. UNIX-haters and that.

                                              1. 5

                                                Humans make mistakes; that’s part of being human. If we want to produce artifacts with few mistakes, we need a system (that is, a machine made out of humans) to attempt to reliably find these mistakes and make them rare enough to be acceptable for our desired use case.

                                                Other engineering disciplines have these systems; see how many double-checks, tests, sign-offs and certifications go into building a bridge. Software developers can do this too, there’s a great write-up somewhere about how NASA develops software. We know how to do it, it’s just that actually getting reliable software is an order of magnitude or two more work (and money) than getting a bunch of coders together to flail away at a problem, and frankly isn’t as fun, so people seldom do it. And there’s the popular mythos of the rock-star hacker that works against the spread of having a culture of resilient systems engineering; Neal Stephenson never wrote any books about rockstar civil engineers.

                                                Rust and other automatic systems that look for human mistakes, such as fuzzers and unit tests, are useful for building resilient systems more cheaply and easily. So by making them easier to use and using them more widely, we raise the floor of how crappy the worst code can be.

                                                1. 1

                                                  If we want to produce artifacts with few mistakes we need a system (that is, a machine made out of humans) to attempt to reliably find these mistakes and make them rare enough to be acceptable for our desired use case.

                                                  I don’t disagree entirely, but this implies that flawless software is impossible, and it isn’t.

                                                  Other engineering disciplines have these systems; see how many double-checks, tests, sign-offs and certifications go into building a bridge.

                                                  Software is not constrained to the physical world as these systems are.

                                                  Software developers can do this too, there’s a great write-up somewhere about how NASA develops software. We know how to do it, it’s just that actually getting reliable software is an order of magnitude or two more work (and money) than getting a bunch of coders together to flail away at a problem, and frankly isn’t as fun, so people seldom do it.

                                                  This is a mistaken angle to look at it. It’s entirely possible to use better languages, which Rust is not, to write software for critical infrastructure. If you think Rust is an example of this, I suggest you take a look at Ada and what it does.

                                                  And there’s the popular mythos of the rock-star hacker that works against the spread of having a culture of resilient systems engineering; Neal Stephenson never wrote any books about rockstar civil engineers.

                                                  It’s my experience the best hackers use good languages and have a good head, whereas the average programmer may have one or the other, and the poor programmer has neither and pretends it’s impossible.

                                                  Rust and other automatic systems that look for human mistakes, such as fuzzers and unit tests, are useful for building resilient systems more cheaply and easily. So by making them easier to use and using them more widely, we raise the floor of how crappy the worst code can be.

                                                  See here for my thoughts on fuzzing and how it is naught but an extra system for exhaustive testing, just because it’s available. If you rely on a fuzzer to find bugs, you don’t know what your program actually does.

                                                  In closing and to repeat, look at Ada for an example of a language actually designed to prevent common bugs. Rust’s symbol vomit syntax is another point against it, while I think of it.

                                                1. 0

                                                  This is interesting, but also unsurprising, as writing correct C or C++ may as well be impossible for anything but trivial programs.

                                                  I don’t see why this post is tagged with rust, considering Ada is a much better language and, unlike Rust, has actually been used in critical software with real repercussions, along with having real standards that don’t constantly change. Ada has an “always safe unless you explicitly use a subprogram that can violate the rules” attitude, whereas Rust has an attitude of “you can do whatever you want in the straitjacket, and if that’s too restrictive you’re on your own”.

                                                  It’s possible to write a large and featureful Ada program without using anything unsafe, whereas it’s my understanding this isn’t practical at all with rust.

                                                  Ada is used in rockets and automobiles and other situations where people die unintentionally if things go wrong. Rust is used in Firefox. That says it all, really.

                                                  1. 12

                                                    Are there stats on how widely Ada’s used? My understanding was the people who use it really like it, but most automobile code is still in C.

                                                    Rust is used in Firefox. That says it all, really.

                                                    Rust is much younger than Ada, so we can’t infer much from this.

                                                    A more interesting question: how much Ada code in the wild is Ada12 vs an older version?

                                                    1. 8

                                                      I agree it is irrelevant to tag it rust; I’ve removed the tag.

                                                      Some notes:

                                                      whereas it’s my understanding this isn’t practical at all with rust

                                                      that isn’t true; you can write large and featureful programs without unsafe.

                                                      Ada is used in rockets and automobiles and other situations where people die unintentionally if things go wrong. Rust is used in Firefox. That says it all, really.

                                                      that feels fairly dishonest to me. Also, the browser is a security-critical component deployed on essentially every user-facing computer manufactured. I can’t think of a more important place to have very safe tools. Reminds me of that (possibly apocryphal) story about memory leaks and cruise missiles.

                                                      1. 7

                                                        It’s definitely possible to write large Rust programs with very few or no uses of the unsafe keyword. It is the case though that the Rust standard library sometimes uses unsafe-marked code with safe abstractions wrapped around it. Ideally these safe abstractions should actually be safe, but it’s possible in principle that there’s a bug in one of those abstractions that lets the memory-unsafe behavior leak in certain circumstances.

                                                        I’m curious if this is the case for Ada as well - does Ada also wrap unsafe abstractions in ostensibly-safe ones that might prove unsafe because of a bug, or does it have some way of guaranteeing the safety of those safe abstractions themselves?

                                                        1. 4

                                                          I think it’s mostly that Ada puts safety at the top, and will add checks to code, whereas the Rust devs have convinced themselves that they can get C++ programmers to switch if and only if there’s never any runtime penalty for safety.

                                                          1. 11

                                                            Yes, Rust’s aims differ slightly from Ada’s. If Ada checks all your boxes and its runtime properties are fine with you, by all means, use it.

                                                            I’ve also heard from some AdaCore people that they enjoy having Rust as a competitor, as it puts their topics on the table again and raises awareness.

                                                            1. 1

                                                              I was thinking AdaCore should become a supplier of Rust IDEs, a second compiler, a provable subset like SPARK, and corporate libraries. They might even help improve the Rust language with their experience.

                                                              They can continue supporting Ada for legacy customers while simultaneously having a lucrative transition path to Rust. Maybe even automated tooling they make some consulting revenue on, too.

                                                              1. 1

                                                                I’m not trying to badmouth Rust, I’m a big fan. I’d be using it right now if there was any sort of stable UI solution that was made within Rust (not bindings to things like QT and friends). For now it seems there’s a lot of churn in that space, with too many projects too young.

                                                                1. 2

                                                                  All good, I didn’t understand it as such. I just wanted to make the point that we see Ada as competition we can grow with.

                                                              2. 1

                                                                Do existing Ada implementations have moderately-quick compile times?

                                                              3. 1

                                                                it’s possible in principle that there’s a bug in one of those abstractions that lets the memory-unsafe behavior leak in certain circumstances

                                                                No, it actually happened, not long ago, in the standard library.

                                                                1. 14

                                                                  Yes, and we’re quite upfront about that. I hate it when some people paint it in another way. There is a point where you need to take a fundamentally unsafe thing (memory and processors) and start building a base of a safe system.

                                                                  There are advantages in Rust, e.g. being able to fix that abstraction in one go and ship an update to everyone sharing it, but the danger is there.

                                                                  That’s also the reason why there is a large ongoing project that formalizes lifetimes, borrowing, and the types we consider as important as primitives (e.g. Mutex). They regularly find subtle bugs. Not day-by-day, but, like, every quarter.

                                                          2. 1

                                                            This is an unfair comparison, as Ada is, what, 40 years old, whereas Rust is 10 years old.

                                                            1. 13

                                                              Rust, in a fashion that can be adopted, is ~4 years old. Before that, it can be considered a research language.

                                                              Also, for that reason, I have no problem with it currently not being used in high-safety-critical places, this is a thing that has to build over time. And I want people to take that time.

                                                              1. 3

                                                                I applaud your honesty in saying that.

                                                              2. 6

                                                                I see that only as a point in Ada’s favor in this context, where “tried and true” actually means something.

                                                                I’m not neophobic; I love exciting new technology as much as the next guy. I even like seeing new developments in the “safer alternatives to C” category, which is something that sorely needs constant research and new inventions. I just think that all else being equal, the older tech has a leg up simply for being older, in this area.

                                                                Rust is a fantastic breeding ground for ideas, and I’m convinced that at some point, Rust (or something like it) will be so obviously superior to the Ada dinosaur that it will be better despite its age difference. We’re just not there yet.

                                                                1. 1

                                                                  Isn’t that “leg up” the reason why Adam said it wasn’t a fair comparison?

                                                              3. 0

                                                                I don’t agree with either language in this context. I like Rust well enough, and Ada might be cool though I don’t know much about it, but there’s zero chance of Microsoft moving any of their critical codebases to either. Getting them to stop active development and get serious about static analysis, fuzzing, etc might be possible. Gradually converting some things over to a safe subset of C++ might be possible. A total rewrite of their major products in an unproven language is a non-starter.

                                                              1. 13

                                                                listen to everything and ignore 90% of it

                                                                This is probably good advice. I’ll apply it to this article.

                                                                1. Avoid comments.

                                                                This varies heavily by language. If the language features documentation strings, as Common Lisp does, then it’s fine advice and that’s how I operate; if it lacks them but still has many first-class structures for documenting intent and whatnot, such as Ada, then I’ll write fewer comments, but perhaps one or two where I think it’s fine.

                                                                It’s just my preference, but I’ve never been fond of names such as getUserAndAPITokenFromLocalStorage and would rather read the comments, but I’d rather avoid reading JavaScript at all, so this could be seen as a superfluous aside.

                                                                1. “Never use a plugin you would not be able to write yourself.” (credit to Jamis Buck).

                                                                I don’t see any issue with this advice, amusingly. This seems like programmer common sense, perhaps, or should be. I’d prefer to use libraries for what I don’t want to write, not what I can’t, such as implementing a complex protocol I need to interact with but don’t otherwise care about.

                                                                1. Use simple tools.

                                                                Without any malice, this is the worst advice in the article and I vehemently oppose telling new programmers this. Ostensibly, the computer is a machine that is to reduce work on the human and aid in tasks; this advice runs counter to that. Firstly, Emacs is just one tool that enables one to use a remote machine with local tools with its Tramp mode; there are undoubtedly others. Secondly, I’m of the opinion that a programmer can and should gradually replace tools they use with ones they’ve written themselves, although I understand others would see differently. I’m doing this in my own way, gradually, but at least I’m making some progress.

                                                                I don’t in any way support the idea that whatever dumbassery the UNIX or Microsoft gods decided decades ago is the baseline and anything on top of that is superfluous. That is stagnation. That is awful.

                                                                About 10 years ago, I remapped Caps Lock to Ctrl on my keyboard. I like that position so much better. Now I can’t use anyone else’s computer without either getting frustrated or going into their settings and remapping the keys on their computer. Is my enjoyment worth this? Probably not.

                                                                Is this a work machine you’re using for hours every day or a toy? Would, in any other setting, a professional who spends years or decades of his life doing something be faulted for customizing his tools to work best for him and remove strain or friction?

                                                                I suggest one reads what Erik Naggum had to write about this general idea: https://www.xach.com/naggum/articles/3065048088243385@naggum.no.html

                                                                1. Working code is good code.

                                                                I’d argue that working isn’t the only qualification for good code. Good code is eventually finished and requires no further changes to accomplish its purpose elegantly.

                                                                1. Don’t be afraid to delete code.

                                                                I have no argument here, really.

                                                                1. Prefer data.

                                                                This isn’t bad advice, with how it’s phrased. I suppose I would’ve used this: “Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.” (Fred Brooks)

                                                                1. Always do the hard thing.

                                                                To me, this is the most original advice in the article, but perhaps I’m simply differently read. I’d be wary of a system that looks poor at the outset, though, such as tens of thousands of lines of some poor language, or really tens of thousands of lines of anything.

                                                                This sums up my thoughts on the article; perhaps someone will find this a pleasant read on its own or a good critique. I’d be glad for the articles I submit here to receive the same attention, certainly.

                                                                1. 3

                                                                  “Never use a plugin you would not be able to write yourself.” (credit to Jamis Buck).

                                                                  I don’t see any issue with this advice, amusingly. This seems like programmer common sense, perhaps, or should be. I’d prefer to use libraries for what I don’t want to write, not what I can’t, such as implementing a complex protocol I need to interact with but don’t otherwise care about.

                                                                  This literally goes against human nature; this recent Hidden Brain podcast[0] explores why humans depend on those before them to solve and understand things that they can then build upon.

                                                                  My immediate reaction to this comment, though, was: if you wanted to make a new car seat, would you (or do you need to) know how to make the engine, frame, wheels, etc.? I would argue “no”, and that depending on systems you don’t understand is nominally OK. Another silly example: I depend on my local grocery store to sell me food without knowing how to make most of it myself.

                                                                  1. https://www.npr.org/2019/01/11/684435633/how-science-spreads-smallpox-stomach-ulcers-and-the-vegetable-lamb-of-tartary
                                                                  1. 2

                                                                    To “be able to write something yourself” covers a very wide range of possibilities. Do I need to be able to write it myself in a reasonable timeframe? Do I have to already have the knowledge to write it now, or can I take time to study it when I actually need to write it? Given enough time, I should be able to learn most things, so that would actually rule very little out.

                                                                    I think a more useful focus is “Can I fix problems in a dependency if I discover them?”. For me, this means preferring open source dependencies, as without the source you are at the mercy of the vendor (who may cease to exist). It also means preferring dependencies with simple, well designed code, which I should be able to understand with a little studying, but without necessarily having to have the expertise right now.

                                                                  2. 2

                                                                    I really appreciate your thoughts, even though we’re likely to disagree on many things. I only want to reply to one thing – about the keyboard. I remapped Caps Lock to Ctrl on every machine I use, including my work machine I use for hours each day. If I could go back 10 years and tell myself not to, I would. I use other people’s computers a lot more than most people, though; I pair-program with students on their computers all the time.

                                                                    I really like your opinion that programmers should replace tools with ones they’ve written themselves. It’s bold and solid advice.

                                                                    1. 5

                                                                      I pair-program with students on their computers all the time

                                                                      Yeah, that’s a bit of a special case. I use other people’s computers… approximately never, so I’m very happy with the Colemak layout and a CapsLock that acts as both Ctrl and Escape and shifts that act as parens on their own.

                                                                    2. 1

                                                                      Upon reading the blog post’s point 3, I got the “somebody is wrong on the Internet” rash.

                                                                      Thank you for posting a soothing response; I lathered it on generously.

                                                                      1. 1

                                                                        You are right about 3.

                                                                        If we never progressed with our tools, the recommendation would still be “use sh instead of the fancy Korn shell or crazy bash”.

                                                                        The key is, as always, balance. Be careful about the bleeding edge, but don’t limit yourself to the lowest common denominator just to be perfectly safe all the time.

                                                                      1. 2

                                                                        I’m of the opinion that fuzzing isn’t particularly useful and is only popular due to poor practices. If you need a machine to vomit into your program to find bugs, I’d argue you don’t really understand what the program is doing.

                                                                        It’s trivial to exhaustively handle, say, every possible octet that can be input. Of course, modern systems and many standards aren’t designed around making things pleasant, which doesn’t at all help.
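                                                                        For what it’s worth, here is the kind of exhaustive handling I mean, as a toy Python sketch (the classification scheme is just an example):

```python
def classify(octet: int) -> str:
    # Handle all 256 possible input octets by construction; there is no
    # unhandled case for a fuzzer to stumble into.
    if not 0 <= octet <= 255:
        raise ValueError("not an octet")
    if octet < 0x20 or octet == 0x7F:
        return "control"
    if octet < 0x80:
        return "printable"
    return "non-ASCII"

# The input space is small enough to simply check in full.
assert {classify(b) for b in range(256)} == {"control", "printable", "non-ASCII"}
```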

                                                                        Within minutes, afl was finding a number of subtle and interesting bugs in my program, which was incredibly useful. It even discovered an unlikely stale pointer bug by exercising different orderings for various memory allocations. This particular bug was the turning point that made me realize the value of fuzzing.

                                                                        I can believe that fuzzing is useful for finding trivial memory errors in a C program, but that tells me much more about C than it does fuzzing. If you use a language such as Ada or Lisp, you really don’t need fuzzing to check such trivial matters and maybe find a damning flaw in a program. In Lisp, you can’t even have pointer bugs such as this and you’d need to try to write one in Ada.

                                                                        You can make the argument that fuzzing is good as yet another stress test to put a program under, just because it’s available, but it should by no means be considered something for really finding trivial errors.

                                                                        1. 3

                                                                          It’s nice to see some more book reviews. I appreciate reviews that aren’t just blurbs with a star rating.

                                                                          Some thoughts.

                                                                          • It’s not necessary to take the reader through the contents of each chapter. When you do this, the review reads like the beginning of a textbook.
                                                                          • Your job as the reviewer is to give the reader (of the review) a feel for the content and organization of the book, and for the author’s writing ability. The book may have useful information but tedious, difficult prose. It might be poorly organized. It might not have much of anything to say. Shift your focus away from simply recounting the content.
                                                                          1. 2

                                                                            I appreciate the feedback. I agree with your thoughts, but as this is a review I wrote years ago and have now adapted from a different medium, I wanted to experiment. The original review was much more concise and less detailed than this one, as I’d originally wanted them to be simpler and more informal recommendations.

                                                                            With that written, I may change this review further or I may simply leave it as an experiment. I have three more reviews I’ll be adapting before I have no more prior work to recycle.

                                                                          1. 2

                                                                            Since the end of October, I’ve been reading Programming in Ada 2012 by John Barnes. I’ve read through chapter seventeen so far, but I’ve barely touched it in December.

                                                                            I plan to finish the book and write my first Ada program in 2019. This will be a simpler reimplementation of my Meta-Machine Code tool taking advantage of what I now know. This will also be the implementation of the tool I distribute widely and build support for other targets from. So, I’d like to soon target other machine codes with my tool, which requires learning them or becoming more familiar with them. I’m torn between MIPS, ARM, and the 6502 as my next target. I certainly know which would be simplest.

                                                                            1. 1

                                                                              I’m curious: how do you decide what to learn next?

                                                                              What I’m more interested in knowing is the thought process there (if any). How do you figure something like this out, given you would ideally also like to align that with the path you want your career to take?

                                                                              I have a few systems and goals that I yearn to bring into existence. For almost two years I’ve been working on one such system (It’s mentioned in my profile.), mostly design work informed by implementing it, and I have a later system, which this one will help, that I intend to start on next.

                                                                              So, it’s fair for me to write that I decided on a few major goals, roughly organized them, and have started down that path obsessively. This does still lead to unexpected turns, as I’ve written several libraries for this and I’m currently learning Ada in part to create a better and simpler reimplementation distinct from the first language I used, which will have me writing more libraries for that language.

                                                                              I figure these plans and wants extend at least five years or so into the future, but I also have things of lesser importance I can pick from and these interact in ways that inevitably lead to yet more ideas and you get the gist by now.

                                                                              1. 1

                                                                                This was an interesting and brief read; the last line is also what I want to focus on:

                                                                                But the best way to have a future is to be part of a team that values progress over politics, ideas over territory and initiative over decorum.

                                                                                This is a good example of how it is primarily Free Software that will lead to good software, in the general case. I’m not claiming Free Software achieves this often or that proprietary software hasn’t reached the very high quality necessary for critical purposes, but it is Free Software that has this fascination with hacking.

                                                                                You will only have a good program if you’re willing to start with the goal of a good program itself and hack at it until it’s further improved, and you won’t usually get that with proprietary software.

                                                                                1. 1

                                                                                  I think that the fundamental issue is one of incentive alignment; hacking the quality of your proprietary software up is orthogonal to the needs of the business, and where there is conflict, the hacking must lose to the business. Free software, or more generally, software untethered to business directives, won’t face this problem. Whether or not that means “better” software of course is determined by which axis you’re using to measure.

                                                                                  1. 1

                                                                                    Are you claiming free software is generally untethered to business objectives?

                                                                                    1. 2

                                                                                      No, I’m saying that they’re orthogonal concerns.