1. 1

    Is Julia basically being used as a MAPLE/R replacement? Or are people using it for more general purpose development as well?

    1. 1

      Maple is a computer algebra system with an embedded scripting language, and Julia is definitely not a replacement for it. It can replace R, assuming you can find replacements for the R libs, and can probably replace MATLAB/Octave under the same assumptions. It is suitable for general purpose development, too.

    1. 28

      That is a very reductionist view of what people use the web for. And I am saying this as someone whose personal site pretty much matches everything prescribed except comments (which I still have).

      Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.

      1. 19

        Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.

        Chickenshit minimalism: https://medium.com/@mceglowski/chickenshit-minimalism-846fc1412524

        1. 13

          I wouldn’t say Medium even gives the illusion of simplicity (for example, on the page you linked, try counting the visual elements that aren’t the blog post). Medium seems to take a rather contrary approach to blogs, including all the random cruft you never even imagined existed, while leaving out simple essentials like RSS feeds. I honestly have no idea how the author of the article came to suggest Medium as an example of minimalism.

          1. 8

            Medium started with an illusion of simplicity and gradually got more and more complex.

            1. 3

              I agree with your overall point, but Medium does provide RSS feeds. They are linked in the <head> and always have the same URL structure. Any medium.com/@user has an RSS feed at medium.com/feed/@user. For Medium blogs hosted at custom URLs, the feed is available at /feed.

              I’m not affiliated with Medium. I have a lot of experience bugging webmasters of minimal websites to add feeds: https://github.com/issues?q=is:issue+author:tfausak+feed.
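
The feed-URL pattern described above is simple enough to sketch as a tiny helper (the function name is made up for illustration; only the medium.com/@user → medium.com/feed/@user rule and the custom-domain /feed rule mentioned in the comment are assumed):

```javascript
// Sketch: derive a Medium feed URL from a blog URL, per the pattern
// described above. Hypothetical helper, not an official Medium API.
function mediumFeedUrl(blogUrl) {
  const u = new URL(blogUrl);
  // medium.com/@user → medium.com/feed/@user
  if (u.hostname === "medium.com" && u.pathname.startsWith("/@")) {
    return `https://medium.com/feed${u.pathname}`;
  }
  // Medium blogs on custom domains expose their feed at /feed.
  return `${u.origin}/feed`;
}
```

Node’s global `URL` class handles the parsing; a /feed path on an arbitrary custom domain is of course only meaningful if that domain actually hosts a Medium blog.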

          2. 3

            That is a very reductionist view of what people use the web for.

            I wonder what Youtube, Google docs, Slack, and stuff would be in a minimal web.

            1. 19

              Useful.


              1. 5

                YouTube, while not as good as it could be, is pretty minimalist if you disable all the advertising.

                I find google apps to be amazingly minimal, especially compared to Microsoft Office and LibreOffice.

                Minimalist Slack has been around for decades, it’s called IRC.

                1. 2

                  It is still super slow then! At some point I was able to disable JS, install the Firefox “html5-video-everywhere” extension and watch videos that way. That was awesome fast and minimal. Tried it again a few days ago, but didn’t seem to work anymore.

                  Edit: now I just “youtube-dl -f43 ” directly without going to YouTube and start watching immediately with VLC.

                  1. 2

                    The youtube interface might look minimalist, but under the hood, it is anything but. Besides, I shouldn’t have to go to great lengths to disable all the useless stuff on it. It shouldn’t be the consumer’s job to strip away all the crap.

                  2. 2

                    That seems to be in extremely bad faith, though.

                    1. 11

                      In a minimal web, locally-running applications in browser sandboxes would be locally-running applications in non-browser sandboxes. There’s no particular reason any of these applications is in a browser at all, other than myopia.

                      1. 2

                        Distribution is dead-easy for websites. In theory, you could have non-browser-sandboxed apps with equally easy distribution, but then what’s the point?

                        1. 3

                          Non-web-based locally-running client applications are also usually made downloadable via HTTP these days.

                          The point is that when an application is made with the appropriate tools for the job it’s doing, there’s less of a cognitive load on developers and less of a resource load on users. When you use a UI toolkit instead of creating a self-modifying rich text document, you have a lighter-weight, more reliable, more maintainable application.

                          1. 3

                            The power of “here’s a URL, you now have an app running without going through installation or whatnot” cannot be overstated. I can give someone a copy of pseudo-Excel to edit a document we’re working on together, all through the magic of Google Sheets’ share links. Instantly.

                            Granted, this is less of an advantage if you’re using something all the time, but without the web it would be harder to allow for multiple tools to co-exist in the same space. And am I supposed to have people download the Doodle application just to figure out when our group of 15 can go bowling?

                            1. 4

                              They are, in fact, downloading an application and running it locally.

                              That application can still be javascript; I just don’t see the point in making it perform DOM manipulation.

                              1. 3

                                As one who knows JavaScript pretty well, I don’t see the point of writing it in JavaScript, however.

                                1. 1

                                  A lot of newer devs have a (probably unfounded) fear of picking up a new language, and a lot of those devs have only been trained in a handful (including JS). Even if moving away from JS isn’t actually a big deal, JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language – you can do whatever you do in JS in python or lua or perl or ruby and it’ll come out looking almost the same unless you go out of your way to use particular facilities.

                                  The thing that makes JS code look weird is all the markup manipulation, which looks strange in any language.
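
A small sketch of that point (the function is invented for illustration): the plain logic below would transliterate almost line-for-line into Python, Lua, or Ruby; it is the commented-out DOM line that marks the code as web code.

```javascript
// Plain logic: nothing here is JavaScript-specific.
function wordCount(text) {
  return text.split(/\s+/).filter((w) => w.length > 0).length;
}

// The markup manipulation is what looks strange in any language:
// document.querySelector("#out").textContent = wordCount(inputText);
```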

                                  1. 3

                                    JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language

                                    (a == b) !== (a === b)

                                    but only some times…
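
To unpack the quip: there really are values for which loose (==) and strict (===) comparison disagree, alongside plenty where they agree, hence the “only some times”. A few standard examples:

```javascript
// Loose equality (==) coerces types before comparing; strict (===) does not.
console.log(0 == "0");             // true:  "0" is coerced to the number 0
console.log(0 === "0");            // false: number vs. string
console.log(null == undefined);    // true:  special-cased in the spec
console.log(null === undefined);   // false: different types
console.log(1 == 1, 1 === 1);      // true true: they agree when types match
```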

                                    1. 3

                                      Javascript has gotchas, just like any other organically grown scripting language. It’s less consistent than python and lua but probably has fewer of these than perl or php.

                                      (And, just take a look at c++ if you want a faceful of gotchas & inconsistencies!)

                                      Not to say that, from a language design perspective, we shouldn’t prize consistency. Just to say that javascript is well within the normal range of goofiness for popular languages, and probably above average if you weigh by popularity and include C, C++, FORTRAN, and COBOL (all of which see a lot of underreported development).

                              2. 1

                                Web applications are expected to load progressively. And because they are sandboxed, they are allowed to start instantly without asking you for permissions.

                                The same could be true of sandboxed desktop applications that you could stream from a website straight into some sort of sandboxed local VM that isn’t the web. Click a link, and the application immediately starts running on your desktop.

                              3. 1

                                I can’t argue with using the right tool for the job. People use Electron because there isn’t a flexible, good-looking, easy-to-use cross-platform UI kit. Damn the 500 MB of RAM usage for a chat app.

                                1. 4

                                  There are several good-looking, flexible, easy-to-use cross-platform UI kits. GTK, WX, and QT come to mind.

                                  If you remove the ‘good-looking’ constraint, then you also get TK, which is substantially easier to use for certain problem sets, substantially smaller, and substantially more cross-platform (in that it will run on fringe or legacy platforms that are no longer or were never supported by GTK or QT).

                                  All of these have well-maintained bindings to all popular scripting languages.

                                  1. 1

                                    QT apps can look reasonably good. I think webapps can look better, but I haven’t done extensive QT customization.

                                    The bigger issue is 1) hiring - easier to get JS devs than QT devs 2) there’s little financial incentive to reduce memory usage. Using other people’s RAM is “free” for a company, so they do it. If their customers are in US/EU/Japan, they can expect reasonably new machines so they don’t see it as an issue. They aren’t chasing the market in Nigeria, however large in population.

                                    1. 5

                                      Webapps are sort of the equivalent of doing something in QT but using nothing but the canvas widget (except a little more awkward because you also don’t have pixel positioning). Whatever can be done in a webapp can be done in a UI toolkit, but the most extreme experimental stuff involves not using actual widgets (just like doing it as a webapp would).

                                      Using QT doesn’t prevent you from writing in javascript. Just use NPM QT bindings. It means not using the DOM, but that’s a net win: it is faster to learn how to do something with a UI toolkit than to figure out how to do it through DOM manipulation, unless the thing that you’re doing is (at a fundamental level) literally displaying HTML.

                                      I don’t think memory use is really going to be the main factor in convincing corporations to leave Electron. It’s not something that’s limited to the third world: most people in the first world (even folks who are in the top half of income) don’t have computers that can run Electron apps very well – but for a lot of folks, there’s the sense that computers just run slow & there’s nothing that can be done about it.

                                      Instead, I think the main thing that’ll drive corporations toward more sustainable solutions is maintenance costs. It’s one thing to hire cheap web developers & have them build something, but over time keeping a hairball running is simply more difficult than keeping something that’s more modular running – particularly as the behavior of browsers with respect to the corner cases that web apps depend upon to continue acting like apps is prone to sudden (and difficult to model) change. Building on the back of HTML rendering means a red queen’s race against 3 major browsers, all of whom are changing their behaviors ahead of standards bodies; on the other hand, building on a UI library means you can specify a particular version as a dependency & also expect reasonable backwards-compatibility and gradual deprecation.

                                      (But, I don’t actually have a lot of confidence that corporations will be convinced to do the thing that, in the long run, will save them money. They need to be seen to have saved money in the much shorter term, & saying that you need to rearchitect something so that it costs less in maintenance over the course of the next six years isn’t very convincing to non-technical folks – or to technical folks who haven’t had the experience of trying to change the behavior of a hairball written and designed by somebody who left the company years ago.)

                                    2. 1

                                      I understand that these tools are maintained in a certain sense. But from an outsider’s perspective, they are absolutely not appealing compared to what you see in their competitors.

                                      I want to be extremely nice, because I think the work done by these teams and projects is very laudable. But compare the wxPython docs with the Bootstrap documentation. I also spent a lot of time trying to figure out how to use Tk, and almost all resources felt outdated and incompatible with whatever toolset I had available.

                                      I think Qt is really good at this stuff, though you do have to marry its toolset for a lot of it (perhaps this has gotten better).

                                      The elephant in the room is that no native UI toolset (save maybe Apple’s stack?) is anywhere near as good as the diversity of options and breadth of tooling available in DOM-based solutions. Chrome dev tools is amazing, and even simple stuff like CSS animations gives you a lot of options that would be a pain in most UI toolkits. Out of the box it has so much functionality, even if you’re working purely vanilla/“no library”. Though on this point things might have changed, jQuery is basically the optimal low-level UI library, and I haven’t encountered native stuff that gives me the same sort of productivity.

                                      1. 3

                                        I dunno. How much of that is just familiarity? I find the bootstrap documentation so incomprehensible that I roll my own DOM manipulations rather than using it.

                                        TK is easy to use, but the documentation is tcl-centric and pretty unclear. QT is a bad example because it’s quite heavy-weight and slow (and you generally have to use QT’s versions of built-in types and do all sorts of similar stuff). I’m not trying to claim that existing cross-platform UI toolkits are great: I actually have a lot of complaints with all of them; it’s just that, in terms of ease of use, performance, and consistency of behavior, they’re all far ahead of web tech.

                                        When it comes down to it, web tech means simulating a UI toolkit inside a complicated document rendering system inside a UI toolkit, with no pass-throughs, and even web tech toolkits intended for making UIs are really about manipulating markup and not actually oriented around placing widgets or orienting shapes in 2d space. Because determining how a piece of markup will look when rendered is complex and subject to a lot of variables not under the programmer’s control, any markup-manipulation-oriented system will make creating UIs intractably awkward and fragile – and while Google & others have thrown a great deal of code and effort at this problem (by exhaustively checking for corner cases, performing polyfills, and so on) and hidden most of that code from developers (who would have had to do all of that themselves ten years ago), it’s a battle that can’t be won.

                                        1. 5

                                          It annoys me greatly because it feels like nobody really cares about the conceptual damage incurred by simulating a UI toolkit inside a document renderer inside a UI toolkit, instead preferring to chant “open web!” And then this broken conceptual basis propagates to other mediums (VR) simply because it’s familiar. I’d also argue the web as a medium is primarily intended for commerce and consumption, rather than creation.

                                          It feels like people care less about the intrinsic quality of what they’re doing and more about following whatever fad is around, especially if it involves tools pushed by megacorporations.

                                          1. 2

                                            Everything (down to the transistor level) is layers of crap hiding other layers of different crap, but web tech is up there with autotools in terms of having abstraction layers that are full of important holes that developers must be mindful of – to the point that, in my mind, rolling your own thing is almost always less work than learning and using the ‘correct’ tool.

                                            If consumer-grade CPUs were still doubling their clock speeds and cache sizes every 18 months at a stable price point and these toolkits properly hid the markup then it’d be a matter of whether or not you consider waste to be wrong on principle or if you’re balancing it with other domains, but neither of those things are true & so choosing web tech means you lose across the board in the short term and lose big across the board in the long term.

                        2. 1

                          Youtube would be a website where you click on a video and it plays. But it wouldn’t have ads and comments and thumbs up and share buttons and view counts and subscription buttons and notification buttons and autoplay and add-to-playlist.

                          Google docs would be a desktop program.

                          Slack would be IRC.

                          1. 1

                            What you’re describing is the HTML5 video tag, not a video sharing platform. Minimalism is good, I do agree, but don’t confuse it with having no features at all.

                            Google docs would be a desktop program.

                            This is a different debate, about why we use the web for these kinds of tasks at all, not about whether it’s minimalist.

                      1. 4

                        This is really interesting to get an idea of how people are taking advantage of BSD! I now have a much nicer idea of why people are going to it (and am a bit tempted myself). That feeling of having to go through ports and simply not having 1st-class support for some software seems… rough for desktop usage though

                        1. 3
                          1. 1

                            I mean “someone talks to me about an application and I’m interested in trying it out on my system”?

                            I feel like the link to the CVE database is a bit of an unwarranted snipe here. I’m not talking too much about security updates, just “someone released some software and didn’t bother to confirm BSD support so now I’m going to need to figure out which ways this software will not work”.

                            To be honest I don’t really think that having all userland software come in via OS-maintained package managers is a great idea in the first place (do I really need OS maintainers looking after anki?). I’m fine downloading binaries off the net. Just nicer if they have out of the box support for stuff. I’m not blaming the BSDs for this (it’s more the software writer’s fault), just that it’s my impression that this becomes a bit of an issue if you try out a lot of less used software.

                            1. 4

                              As an engineer that uses and works on a minority share operating system, I don’t really think it’s reasonable to expect chiefly volunteer projects to ship binaries for my platform in a way that fits well with the OS itself. It would be great if they were willing to test on our platform, even just occasionally, but I understand why they don’t.

                              Given this, it seems more likely to expect a good experience from binaries provided by somebody with a vested interest in quality on the OS in question – which is why we end up with a distribution model.

                              1. 2

                                Yep, this makes a lot of sense.

                                I’m getting more and more partial to software relying on their host language’s package manager recently. It’s pretty nice for a Python binary to basically always work so long as you’ve got pip running properly on your system, plus you get all the nice advantages of virtual environments and the like, letting you more easily set things up. The biggest issue is the trust problems in those ecosystems.

                                Considering a lot of communities (not just OSes) are getting more and more involved in distribution questions, we might be getting closer to getting things to work out of the box for non-tricky cases.

                                1. 8

                                  software relying on their host language’s package manager

                                  In general I’m not a fan. They all have problems. Many (most?) of them lack a notion of disconnected operation when they cannot reach their central Internet-connected registry. There is often no complete tracking of all files installed, which makes it difficult to completely remove a package later. Some of the language runtimes make it difficult to use packages installed in non-default directory trees, which is one way you might have hoped to work around the difficulty of subsequent removal. These systems also generally conflate the build machine with the target machine (i.e., the host on which the software will run) which tends to mean you’re not just installing a binary package but needing to build the software in-situ every time you install it.

                                  In practice, I do end up using these tools because there is often no alternative – but they do not bring me joy.

                                  Operating system package managers (dpkg/apt, rpm/yum, pkg_add/pkgin, IPS, etc) also have their problems. In contrast, though, these package managers tend to at least have some tools to manage the set of files that were installed for a particular package and to remove (or even just verify) them later. They also generally offer some first class way to install a set of a packages from archive files obtained via means other than direct access to a central repository.

                                  1. 3

                                    For development I use the “central Internet-connected registry”; for production I use DEB/RPM packages in a repository:

                                    • forces you to limit the number of dependencies you use, otherwise it’s too much work to package them all;
                                    • forces you to choose high-quality dependencies that are easy to package or already packaged;
                                    • makes sure every dependency is buildable from source (depending on language);
                                    • gives you an “offline” copy of the dependencies, protecting against “left-pad” issues;
                                    • runs unit tests of the dependencies during package build, great for QA!;
                                    • has (PGP) signed packages that use the distribution’s tools to verify.

                                    There are probably more benefits that escape me at the moment :)

                          2. 1

                            That feeling of having to go through ports and simply not having 1st-class support for some software seems… rough for desktop usage though

                            What kind of desktop software do you install from these non-OS sources?

                            1. 2

                              Linux is moving more and more towards Flatpak and Snap for (sandboxed) application distribution.

                              1. 2

                                I remember screwing around with Flathub on the command line in Fedora 27, but right now on Fedora 28, if you enable Flatpak in the Gnome Software Center thingy, it’s actually pretty seamless - type “Signal” in the application browser, and a Flatpak install link shows up.

                                With this sort of UX improvement, I’m optimistic. I feel like Fedora is just going to get easier and easier to use.

                          1. 3

                            The problem turns out to be some obscure FUSE mounts that the author had lying around in a broken state, which subsequently broke the kernel namespace system. Meanwhile, I have been running systemd on every computer I’ve owned for many years and have never had a problem with it.

                            Does this not seem a bit melodramatic?

                            1. 9

                              From the twitter thread:

                              Systemd does not of course log any sort of failure message when it gives up on setting up the DynamicUser private namespace; it just goes ahead and silently runs the service in the regular filesystem, even though it knows that is guaranteed to fail.

                              It sounds like the system had an opportunity to point out an anomaly that would guide the operator in the right direction, but instead decided to power through anyways.

                              1. 8

                                A lot like how continuing to run in a degraded state is a plague that affects distributed systems. Everybody thinks it’s a good idea (“some service is surely better than no service”) until it happens to them.

                                1. 3

                                  At $work we prefer degraded mode for critical systems. If they go down we make no money, while if they kind of sludge on we make less but still some money while we firefight whatever went wrong this time.

                                  1. 8

                                    My belief is that inevitably you could be making $100 per day, would notice if you made $0, but are instead making $10 and won’t notice this for six months. So be careful.

                                    1. 4

                                      We have monitoring and alerting around how much money is coming in, that we compare with historical data and predictions. It’s actually a very reliable canary for when things go wrong, and for when they are right again, on the scale of seconds to a few days. But you are right that things getting a little suckier slowly over a long time would only show up as real growth not being in line with predictions.

                                  2. 2

                                    I tend to agree that hard failures are nicer in general (especially to make sure things work), but I’ve also been in scenarios where buggy logging code has caused an entire service to go down, which… well that sucked.

                                    There is a justification for partial service functionality in some cases (especially when uptime is important), but like with many things I think that judgement calls in that are usually so wrong that I prefer hard failures in almost all cases.

                                    1. 1

                                      Running distributed software on snowflake servers is the plague to point out.

                                      1. 1

                                        Everybody thinks it’s a good idea “some service is surely better than no service” until it happens to them.

                                        So if the server is over capacity, kill it and don’t serve anyone?

                                        Router can’t open and forward a port, so cut all traffic?

                                        I guess that sounds a little too hyperbolic.

                                        But there’s a continuum there. At $work, I’ve got a project that tries to keep going even if something is wrong. Honest, I’m not sure I like how all the errors are handled. But then again, the software is supposed to operate rather autonomously after initial configuration. Remote configuration is a part of the service; if something breaks, it’d be really nice if the remote access and logs and all were still reachable. And you certainly don’t want to give up over a problem that may turn out to be temporary or something that could be routed around… reliability is paramount.

                                        1. 2

                                          And you certainly don’t want to give up over a problem that may turn out to be temporary

                                          I think that’s close to the core of the problem. Temporary problems recur, worsen, etc. I’m not saying it’s always wrong to retry, but I think one should have some idea of why the root problem will disappear before retrying. Computers are pretty deterministic. Transient errors indicate incomplete understanding. But people think a try-catch in a loop is “defensive”. :(
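
The “defensive” try-catch-in-a-loop being criticized, next to a version that at least keeps the failure observable (a sketch; both function names are invented):

```javascript
// The "defensive" retry the comment criticizes: it swallows the error,
// hides the root cause, and can spin forever on a deterministic failure.
function naiveRetry(fn) {
  while (true) {
    try { return fn(); } catch (e) { /* ignored: the problem is invisible */ }
  }
}

// A bounded retry: log why each attempt failed, then fail loudly instead
// of silently degrading, so the root cause stays debuggable.
function boundedRetry(fn, attempts = 3) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return fn();
    } catch (e) {
      lastErr = e;
      console.error(`attempt ${i + 1} failed: ${e.message}`);
    }
  }
  throw lastErr; // surface the error rather than power through
}
```

Whether retrying is appropriate at all still depends on having some idea of why the failure might clear, per the point above; the bounded version just refuses to hide the evidence.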

                                    2. 4

                                      So you never had legacy systems (or configurations) to support? I read Chris’ blog regularly, and he works at a university on a heterogeneous network (some Linux, some other Unix systems) that has been running Unix for a long time. I think he started working there before systemd was even created.

                                      1. 3

                                        Why do you say that the FUSE mounts were broken? As far as we can see they were just set up in an uncommon way https://twitter.com/thatcks/status/1027259924835954689

                                        1. 3

                                          It does look brittle that broken FUSE mounts prevent ntpd from running. IMO the most annoying part is the debuggability of the issue.

                                          1. 2

                                            Yes, it seems melodramatic, even to my anti-systemd ears. It’s a documentation and error reporting problem, not a technical problem, IMO. Olivier Lacan gave a great talk last year about good errors and bad errors (https://olivierlacan.com/talks/human-errors/). I think it’s high time we start thinking about how to improve error reporting in software everywhere – and maybe one day human-centric error reporting will be as ubiquitous as unit testing is today.

                                            1. 2

                                              In my view (as the original post’s author) there are two problems in view. That systemd doesn’t report useful errors (or even notice errors) when it encounters internal failures is the lesser issue; the greater issue is that it’s guaranteed to fail to restart some services under certain circumstances due to internal implementation decisions. Fixing systemd to log good errors would not cause timesyncd to be restartable, which is the real goal. It would at least make the overall system more debuggable, though, especially if it provided enough detail.

                                              The optimistic take on ‘add a focus on error reporting’ is that considering how to report errors would also lead to a greater consideration of what errors can actually happen, how likely they are, and perhaps what can be done about them by the program itself. Thinking about errors makes you actively confront them, in much the same way that writing documentation about your program or system can confront you with its awkward bits and get you to do something about them.

                                          1. 3

                                            Every time I see a post about Nim I hope for a Golang competitor that can actually bring something new to the table. But then I look at the library support and community and walk away disappointed. I am still hoping for Nim to take off and attract Python enthusiasts like me to a really fast compiled language.

                                            1. 11

                                              But then I look at the library support and community and walk back disappointed.

                                              It’s very hard to get the same momentum that Go achieved, just by the sheer fact that it is supported and marketed by Google. All I can say is: please consider helping Nim grow its community and library support, if everyone sees a language like Nim and gives up because the community is small then all new mainstream languages will be owned by large corporations like Google and Apple. Do you really want to live in a world like that? :)

                                              1. 3
                                                1. 1

Have tried it; the GC is way too optimistic, so under high load you see memory being wasted. I love the syntax and power of the language, but it falls short when you can’t compile to a single binary (like Golang) and end up with weird cross-compile issues. Nim is far more efficient in terms of memory and GC overhead.

                                                  1. 1

                                                    Cannot compile single binary? What do you mean by that?

                                                    1. 1

Let me rephrase: the binary is not standalone with everything statically linked (libssl and some other dependencies). I had to recompile my binaries on the server to satisfy the dynamically linked libraries at their particular versions.

                                                      1. 5

                                                        I think that’s more a result of Go having the manpower to develop and maintain an SSL library written in Go. As far as I understand, if you were to write an SSL library in 100% Crystal you wouldn’t have this problem.

                                                        By the way, Nim goes a step further. Because it compiles to C you can actually statically embed C libraries in your binary. Neither Go nor Crystal can do this as far as I know and it’s an awesome feature.

                                                        1. 3

                                                          Is there a distinction between “statically embed C libraries in your binary” and “statically link with C libraries”? Go absolutely can statically link with C libraries. IIRC, Go will still want to link with libc on Linux if you’re using cgo, but it’s possible to coerce Go into producing a full static executable—while statically linking with C code—using something like go install -ldflags "-linkmode external -extldflags -static".

                                                          1. 2

                                                            There is a difference. Statically linking with C libraries requires a specially built version of that library: usually in the form of a .a or .lib file.

In my experience, there are many libraries out there which are incredibly difficult to statically link with; this is especially the case on Windows. In most cases it’s difficult to even find a version of the library that is statically linkable.

                                                            What I mean by “statically embed C libraries in your binary” is: you simply compile your program’s C sources together with the C sources of all the libraries you depend on.

                                                            As far as Go is concerned, I was under the impression that when you’re creating a wrapper for a C library in Go, you are effectively dynamically linking with that library. It seems to me that what you propose as a workaround for this is pretty much how you would statically compile a C program, i.e. just a case of specifying the right flags and making sure all the static libs are installed and configured properly.

                                                        2. 2

                                                          I suppose you built with --static?

                                                          1. 2

                                                            You have to jump through quite a few hoops to get dynamic linking in go.

                                                            By default it statically links everything, doesn’t have a libc, etc.

                                                          2. 1

                                                            It’s not uncommon or difficult in go to compile a webapp binary that bakes all assets (templates, images, etc) into the binary along with a webserver, HTTPS implementation (including provisioning its own certs via ACME / letsencrypt), etc.

                                                            1. 1

                                                              only have a passing familiarity with go’s tooling, how do you bake in assets?

                                                              1. 1

                                                                There are different approaches, https://github.com/GeertJohan/go.rice for example supports 3 of them (see “tool usage”)

                                                          3. 1

I think he’s referring to the ability to statically build [1] binaries, as in Golang. I’d note that this is a feature that is not common and is hard to achieve. You can do this with C/C++ (maybe Rust too), but it has limits and is hard to pull off with big libraries. Not having statically built binaries often means you need a clear picture of exactly what you depend on, or good packaging/distribution workflows (fpm/docker/…).

                                                            It’s a super nice feature when distributing software (for example tooling) to the public, so it feels like “here you are your binary, you just have to use it”.

                                                            [1] https://en.wikipedia.org/wiki/Static_build

                                                      2. 1

                                                        The “programming by duct taping 30 pip packages together” method of development is pretty new, and it isn’t the only way to program. Instead, you grow the dependencies you need as you build your app, and contribute them back once they’re mature enough.

                                                        More time consuming, but you have total control.

                                                      1. 2

                                                        The non-syncing of spec code with implementation code really feels like the big barrier to making this usable in general.

One idea I had to tackle this issue in a language like Python would be to allow for executable doc-strings within the code that let you write specs inline, and have those be parsed out (but by default it would use the actual in-code implementation).

That way you could write simplifying specs for certain parts of the code (say, “the result of input will be any string” instead of waiting on stdin when checking), while still avoiding duplication, because most code is straightforward.

                                                        Though to be honest this might be very hard to get right. I feel like it’s a bit like the ORM/Type System issue, where type systems are usually rigid and don’t give much “type-check-time” flexibility, but ORMs are usually defined dynamically (relative to the type system)
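A tiny Python sketch of the “parse specs out of docstrings” part (the `spec:` marker, the function names, and the whole mechanism here are hypothetical; this is just an illustration of the idea):

```python
import inspect

def extract_spec(func):
    """Return the simplifying spec embedded in func's docstring, if any.

    Hypothetical convention: every line after a line reading exactly
    'spec:' is spec text that a checker would use in place of the body.
    """
    doc = inspect.getdoc(func) or ""
    spec_lines = []
    capturing = False
    for line in doc.splitlines():
        if line.strip() == "spec:":
            capturing = True
            continue
        if capturing:
            spec_lines.append(line)
    return "\n".join(spec_lines) or None

def read_input(prompt):
    """Read a line from the user.

    spec:
    result is an arbitrary string (no waiting on stdin when checking)
    """
    return input(prompt)

# A checker would substitute the spec for the real implementation:
print(extract_spec(read_input))
```

Most functions would have no `spec:` block, so the checker would fall back to the real code, which is where the deduplication win would come from.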

                                                        1. 6

                                                          This is why I ended up spending less time with TLA after learning it. However, learning it was an incredibly useful exercise that has dramatically informed the way I build systems. It made me start to ask why I can’t write TLA style invariants and check executions of concurrent and distributed algorithms I build in general purpose languages.

I realized I actually can get similar results on real code if I build systems carefully: schedule multithreaded interleavings at cross-thread communication points; simulate distributed clusters with buggy networks in a single process at accelerated speed, à la discrete event simulation; and, since things that use files are communicating with future instances of themselves, record logs of file operations, arbitrarily truncate them, and ensure invariants still hold after restart.

My main project right now is trying to make the above ideas into nice libraries that let people run their code in more realistic ways before opening pull requests, and integrating those tools into the construction process of the sled database.
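A toy Python version of the interleaving-checking idea (not the parent commenter’s actual libraries, just a sketch of the technique): enumerate every ordering of two threads’ steps that preserves per-thread order, and assert a TLA+-style invariant after every step of every schedule.

```python
def interleavings(a, b):
    """Yield every merge of op lists a and b preserving each list's order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def check(invariant, threads, initial_state):
    """Run every interleaving against a fresh copy of the state,
    checking the invariant after each step."""
    a, b = threads
    for schedule in interleavings(a, b):
        state = dict(initial_state)
        for op in schedule:
            op(state)
            assert invariant(state), f"invariant broken: {state}"

# Toy example: two threads each do a non-atomic read-modify-write on a
# shared counter. "Final count is 2" would fail under some interleavings
# (the classic lost update), so we check a weaker invariant instead.
def make_incr(tmp):
    def read(s): s[tmp] = s["counter"]        # load the shared value
    def write(s): s["counter"] = s[tmp] + 1   # store the incremented value
    return [read, write]

check(lambda s: s["counter"] >= 0,
      (make_incr("t1"), make_incr("t2")),
      {"counter": 0})
print("invariant held across all interleavings")
```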

                                                          1. 3

                                                            An idle thought which can go on my list of side projects to start “one day” (probably right after the bus accident): probably symbolic execution can be used to demonstrate, if not enforce, the synchronisation of TLA+-type models with code. A symbolic executor can show the different cases a program will execute based on its input and the outputs that result; those can be compared with the cases discovered by the model-checking tool.

                                                            Hooray, I’m not the first person to have that idea! You can combine formal methods with symbolic execution and meet in the middle.

                                                            1. 2

                                                              One idea I had to tackle this issue in a language like Python would be to allow for executable doc-strings within the code that could let you write specs inline, and have those be parsed out (but by default it would use the actual in-code implementation)

                                                              While this example was pretty close to the code implementation, TLA+ (and most specification languages) are too flexible to allow easy embedding. Here the processes were actual threads, but they could just as easily be servers, or human agents, or an abstracted day/night cycle. In one spec I wrote, one process represented two separate interacting systems that, at that level of detail, were assumed to have perfect communication.

                                                            1. 12

                                                              A realization I recently had:

Why don’t we decouple how code is displayed from how it’s stored in a file? That is, the editor reads the file, parses its AST, and displays it according to the programmer’s preferences (e.g., elastic tabstops, Elm-like comma-leading lists, newline/no-newline before opening braces, etc.). And prior to save, the editor simply runs it through an uncustomized prettier first.

                                                              There are a million and one ways to view XML data without actually reading/writing pure XML. Why not do that with code as well?
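As a toy sketch of the “canonicalize on save” half of this, Python’s standard ast module (ast.unparse needs Python 3.9+) can play the role of an uncustomized prettier:

```python
import ast

# However the programmer's editor displayed this, the saved form is
# whatever the canonical pretty-printer emits.
messy = """
def  add( a,b ):
        return (a+
                b)
"""

tree = ast.parse(messy)        # split display from storage: parse first...
canonical = ast.unparse(tree)  # ...then pretty-print in a canonical form
print(canonical)
```

The editor’s own rendering (elastic tabstops, brace placement, and so on) would then be a pure view over the tree, never touching the stored form.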

                                                              1. 4

This idea has been floating around the interwebz for a long time. I recall it being stated almost verbatim on Reddit, HN, and probably on /.

                                                                1. 6

                                                                  And once you take it a step further, it’s clear that it shouldn’t be in a text file in the first place. Code just isn’t text. If you store it as a tree or a graph in some sort of database, it becomes possible to interact with it in much more powerful ways (including displaying it any way you like). We’ve been hobbled by equating display representation with storage format.

                                                                  1. 7

                                                                    This talk touches on this issue, along with some related ones and HCI in general: Bret Victor: The Future of Programming

                                                                    1. 2

                                                                      God, I have been trying to recall the name of this talk for ages! Thank you so much, it is a great recommendation

                                                                    2. 5

                                                                      Text is great when (not if) your more complicated tools fail or do something you can’t tolerate and you need to use tools which don’t Respect The Intent of designers who, for whatever reason, don’t respect your intent or workflow. Sometimes, solving a problem means working around a breakage, whether or not that breakage is intentional on someone else’s part.

                                                                      Besides, we just (like, last fifteen or so years) got text to the point where it’s largely compatible. Would be a shame to throw that away in favor of some new AST-database-thing which only exists on a few platforms.

                                                                      1. 1

I’m not sure I get your point about intent. Isn’t the same already true of, say, compilers? There are compiler bugs that we have to work around, there are programs that seem logical to us but the compiler won’t accept, and so on. Still, everybody seems to be mostly happy to file a compiler bug or a feature request, and live with a workaround for the present. Seems like it works well enough in practice.

                                                                        I understand your concern about introducing a new format but it sounds like a case of worse-is-better. Sure, we get a lot of convenience from the ubiquity of text, but it would nevertheless be sad if we were stuck with it for the next two centuries.

                                                                        1. 1

                                                                          With compilers, there are multiple of them for any given language, if the language is important enough, and you can feed the same source into all of them, assuming that source is text.

                                                                          1. 2

                                                                            I’ve never seen anyone casually swap out the compiler for production code. Also, for the longest time, if you wrote C++ for Windows, you pretty much had to use the Microsoft compiler. I’m sure that there are many embedded platforms with a single compiler.

If there’s a bug in the compiler, in most cases you work around it, then patiently wait for a fix from the vendor.

                                                                            So that’s hardly a valid counterpoint.

                                                                            1. 1

                                                                              Re: swapping out compiler for production code: most if not all cross-platform C++ libraries can be compiled on at least llvm, gcc and msvc.

                                                                              1. 1

                                                                                Yes, I’m aware of that, but what does it have to do with anything I said?

                                                                                EDIT: Hey, I went to Canterbury :)

                                                                                1. 1

                                                                                  “I’ve never seen anyone casually swap out the compiler for production code” sounded like you were saying people didn’t tend to compile the same production code on multiple compilers, which of course anyone that compiles on windows and non-windows does. Sorry if I misinterpreted your comment!

                                                                                  My first comment is in response to another Kiwi. Small world. Pretty cool.

                                                                      2. 1

This, this, a thousand times this. Text is a good user interface for code (for now). But it’s a terrible storage and interchange format. Every tool needs its own parser, each one slightly different, to say nothing of the CPU and programmer time we waste going from text<->ast<->text.

                                                                        1. 2

                                                                          Yeah, it’s obviously wasteful and limiting. Why do you think we are still stuck with text? Is it just sheer inertia and incrementalism, or does text really offer advantages that are challenging to recreate with other formats?

                                                                          1. 7

                                                                            The text editor I use can handle any computer language you can throw at it. It doesn’t matter if it’s BASIC, C, BCPL, C++, SQL, Prolog, Fortran 77, Pascal, x86 Assembler, Forth, Lisp, JavaScript, Java, Lua, Make, Hope, Go, Swift, Objective-C, Rexx, Ruby, XSLT, HTML, Perl, TCL, Clojure, 6502 Assembler, 68000 Assembler, COBOL, Coffee, Erlang, Haskell, Ocaml, ML, 6809 Assembler, PostScript, Scala, Brainfuck, or even Whitespace. [1]

                                                                            Meanwhile, the last time I tried an IDE (last year I think) it crashed hard on a simple C program I attempted to load into it. It was valid C code [2]. That just reinforced my notion that we aren’t anywhere close to getting away from text.

                                                                            [1] APL is an issue, but only because I can’t type the character set on my keyboard.

                                                                            [2] But NOT C++, which of course, everybody uses, right?

                                                                            1. 0

                                                                              To your point about text editors working with any language, I think this is like arguing that the only tool required by a carpenter is a single large screwdriver: you can use it as a hammer, as a chisel, as a knife (if sharpened), as a wedge, as a nail puller, and so on. Just apply sufficient effort and ingenuity! Does that sound like an optimal solution?

                                                                              My preference is for powerful specialised tools rather than a single thing that can be kind of sort of applied to a task.

                                                                              Or, to approach from the opposite direction, would you say that a CAD application or Blender are bad tools because they only work with a limited number of formats? If only they also allowed you to edit JPEGs and PDFs, they would be so much better!

                                                                              To your point about IDEs: I think that might even support my argument. Parsing of freeform text is apparently sufficiently hard that we’re still getting issues like the one you saw.

                                                                              1. 9

I use other tools besides the text editor—I use version control, compilers, linkers, debuggers, and a whole litany of Unix tools (grep, sed, awk, sort, etc.). The thing I want to point out is that as long as the source code is in ASCII (or UTF-8), I can edit it. I can study it. I might not be able to compile it (because I lack the INRAC compiler), but I can still view the code. How does one “view” Smalltalk code when one doesn’t have Smalltalk? Or Visual Basic? Last I heard, Microsoft wasn’t giving out the format for Visual Basic programs (and good luck even finding the format for VB from the late 90s).

                                                                                The other issue I have with IDEs (and I will come out and say I have a bias against the things because I’ve never had one that worked for me for any length of time without crashing, and I’ve tried quite a few over 30 years) is that you have one IDE for C++, and one for Java, and one for Pascal, and one for Assembly [1] and one for Lua and one for Python and man … that’s just too many damn environments to deal with [2]. Maybe there are IDEs now that can work with more than one language [3] but again, I’ve yet to find one that works.

I have nothing against specialized tools like AutoCAD or Blender or Photoshop or even Deluxe Paint, as long as there is a way to extract the data when the tool (or the company) is no longer around. Photoshop and Deluxe Paint work with defined formats that other tools can understand. I think Blender works with several formats, but I’m not sure about AutoCAD (never having used it).

So, why hasn’t anyone stored and manipulated ASTs? I keep hearing cries that we should do it, and yet no one has done it … I wonder if it’s harder than you imagine …

                                                                                Edited to add: Also, I’m a language maven, not a tool maven. It sounds like you are a tool maven. That colors our perspectives.

                                                                                [1] Yes, I’ve come across several of those. Never understood the appeal …

                                                                                [2] For work, I have to deal with C, C++, Lua, Make and Perl.

                                                                                [3] Yeah, the last one that claimed C/C++ worked out so well for me.

                                                                                1. 1

                                                                                  For your first concern about the long term accessibility of the code, you’ve already pointed out the solution: a defined open format.

                                                                                  Regarding IDEs: I’m not actually talking about IDEs; I’m talking about an editor that works with something other than text. Debugging, running the code, profiling etc. are different concerns and they can be handled separately (although again, the input would be something other than text). I suppose it would have some aspects of an IDE because you’d be manipulating the whole code base rather than individual files.

                                                                                  Regarding the language maven post: I enjoyed reading it a few years ago (and in practice, I’ve always ended up in the language camp as an early adopter). It was written 14 years ago, and I think the situation is different now. People have come to expect tooling, and it’s much easier to provide it in the form of editor/IDE plugins. Since language creators already have to do a huge amount of work to make programs in their languages executable in some form, I don’t think it would be an obstacle if the price of admission also included dealing with the storage format and representation.

                                                                                  To your point about lack of implementations: don’t Smalltalk and derivatives such as Pharo qualify? I don’t know if they store ASTs but at least they don’t store text. I think they demonstrate that it’s at least technically possible to get away from text, so the lack of mainstream adoption might be caused by non-technical reasons like being in a local maximum in terms of tools.

                                                                                  The problem, as always, is that there is such a huge number of tools already built around text that it’s very difficult to move to something else, even if the post-transition state of affairs would be much better.

                                                                                  1. 1

                                                                                    Text editors are language agnostic.

I’m trying to conceive of an “editor” that works with something other than text. Say an AST. Okay, but in Pascal, you have to declare variables at the top of each scope; you can declare variables anywhere in C++. In Lua, you can just use a variable, no declaration required. LISP, Lua and JavaScript allow anonymous functions; only the latest versions of C++ and Java allow anonymous functions, but they’re restricted in that you can’t create closures, since C++ and Java have no concept of closures. C++ has exceptions, Java has two types of exceptions, C doesn’t; Lua kind of has exceptions but not really. An “AST editor” would have to somehow know what is and isn’t allowed per language, so that if I’m editing C++ and write an anonymous function, I don’t reference variables outside the scope of said function, but it can for Lua.

                                                                                    Okay, so we step away from AST—what other format do you see as being better than text?

                                                                                    1. 1

                                                                                      I don’t think it could be language agnostic - it would defeat the purpose as it wouldn’t be any more powerful than existing editors. However, I think it could offer largely the same UI, for similar languages at least.

                                                                                      1. 1

                                                                                        And that is my problem with it. As stated, I use C, C++ [1], Lua, Make and a bit of Perl. That’s at least what? Three different “editors” (C/C++, Lua/Perl (maybe), Make). No thank you, I’ll stick with a tool that can work with any language.

                                                                                        [1] Sparingly and where we have no choice; no one on my team actually enjoys it.

                                                                                      2. 1

Personally, I’m not saying you should need to give up your editor of choice. Text is a good (enough for now) UI for coding. But it’s a terrible format to build tools on. If the current state of the code lived in some sort of event-based graph database, for example, your changes could trigger not only your incremental compiler but also source analysis (only on what’s new); it could also maintain a semantic changelog for version control and trigger code generation (again, only what’s new).

                                                                                        There’s a million things that are currently “too hard” which would cease to be too hard if we had a live model of the code as various graphs (not just the ast, but call graphs, inheritance graphs, you-name-it) that we could subscribe to, or even write purely-functional consumers that are triggered only on changes.

                                                                              2. 4

                                                                                Inertia, arrogance, worse-is-better; Working systems being trapped behind closed doors at big companies; Hackers taking their language / editor / process on as part of their identity that needs to be defended with religious zeal; The complete destruction of dev tools as a viable business model; Methodologies-of-the-week…. The causes are numerous and varied, and the result is software dev is being hamstrung and we’re all wasting countless hours and dollars doing things computers should be doing for us.

                                                                                1. 2

                                                                                  I think that part of the issue is that we haven’t seen good structured editor support outside of Haskell and some Lisps.

                                                                                  Having a principled foundation for structured editor + a critical mass by having it work for a language like Javascript/Ruby, would go a long way to making this concept more mainstream. After which we could say “provide a grammar for favorite language X and get structured editor support!”. This then becomes “everything is structured at all levels!”

                                                                                  1. 3

                                                                                    I think it’s possible that this only works for a subset of languages.

                                                                                    Structured editing is good in that it operates at a higher level than characters, but ultimately it’s still a text editing tool, isn’t it? For example, I think it should be trivial to pull up a list of (editable) definitions for all the functions in a project that call a given function, or to sort function and type definitions in different ways, or to substitute function calls in a function with the bodies of those functions to a given depth (as opposed to switching between different views to see what those functions do). I don’t think structured editing can help with tasks like that.
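For what it’s worth, a query like “which functions call X” becomes a few lines once you operate on the tree rather than on text. A rough sketch with Python’s stdlib ast module (the source snippet is made up):

```python
import ast
from collections import defaultdict

source = """
def helper():
    pass

def compute():
    helper()

def main():
    compute()
    helper()
"""

tree = ast.parse(source)
callers = defaultdict(set)
# For each function definition, record every plain-name call it makes.
for func in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    for node in ast.walk(func):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            callers[node.func.id].add(func.name)

print(sorted(callers["helper"]))  # every function that calls helper()
```

Getting from this query to an editable, cross-referenced view is of course the hard part, which is the gap structured editors haven’t closed.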

                                                                                    There are also ideas like Luna, have you seen it? I’m not convinced by the visual representation (it’s useful in some situations but I’m not sure it’s generally effective), but the interesting thing is they provide both a textual and a visual representation of the code.

                                                                                2. 1

                                                                                  Python has a standard library module for parsing Python code into an AST and modifying the AST, but I don’t know of any Python tools that actually use it. I’m sure some of them do, though.
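The module in question is ast; a toy rewrite using its NodeTransformer looks like this (the rename is just an example transformation):

```python
import ast

class RenameFoo(ast.NodeTransformer):
    """Rename every identifier 'foo' to 'bar': a toy AST rewrite."""
    def visit_Name(self, node):
        if node.id == "foo":
            node.id = "bar"
        return node

tree = ast.parse("foo = 1\nprint(foo + foo)")
new_tree = RenameFoo().visit(tree)
print(ast.unparse(new_tree))  # ast.unparse needs Python 3.9+
```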

                                                                                3. 1

                                                                                  Smalltalk. The word you’re looking for is Smalltalk. ;)

                                                                                  1. 2

                                                                                    Lisp, in fact. Smalltalk lives in an image, Lisp lives in the real world. ;)

                                                                                    Besides, Lisp already is the AST. Smalltalk has too much sugar, which is a pain in the AST.

                                                                                    1. 1

                                                                                      Possibly, but I’m only talking about a single aspect of it: being able to analyse and manipulate the code in more powerful ways than afforded by plain text. I think that’s equally possible for FP languages.

                                                                                  2. 1

                                                                                    Ultimately I think this is the only tenable solution. I feel I must be in the minority in having an extreme dislike of columnar-style code, and what I call “whitespace cliffs”, where a column dictates a sudden huge increase in whitespace. But I realize how much it comes down to personal aesthetics, so I wish we could all just coexist :)

                                                                                    1. 1

                                                                                      Yeah, I’ve been messing around with similar ideas, see https://nick.zoic.org/art/waste-web-abstract-syntax-tree-editor/ although it’s only vapourware so far because things got busy …

                                                                                      1. 1

                                                                                        Many editors already do this to some extent. They just render 4-space tabs as whatever the user asks for. Everything after the indent, though, is assumed to be spaced appropriately (which seems right, anyway?)

                                                                                        1. 1

                                                                                          You can’t convert to elastic-tabstop style from that, and without heavy language-grammar knowledge you can’t do this for 4-space “tabs” generally.

                                                                                          Every editor ever supports this for traditional indent style, though: http://intellindent.info/seriously/

                                                                                          1. 1

                                                                                            To be clear, you can absolutely render a file that doesn’t have elastic tabstops as if it did. The way a file is rendered has nothing to do with the actual text in the file.

                                                                                            It’s like you’re suggesting that you can’t render a file containing a ton of numbers as a 3D scene in a game engine. That would be just wrong.

                                                                                            Regardless, my point is specifically that this elastic tabstops thing is not necessary and hurts code readability more than it helps.

                                                                                            The pedantry of distinguishing between tabs and tabstops is a silly thing as well. Context gives more than enough information to know which one is being talked about.

                                                                                            It sounds like this concept is creating more problems than it solves, and is causing your editor to solve problems that only exist in the developer’s imagination. It’s not “KISS” at all, quite the opposite.

                                                                                        2. 1

                                                                                          Because presentation isn’t just a function of the AST. Indentation usually is, but alignment can be visually useful for all kinds of reasons.

                                                                                        1. 2

                                                                                          But is Java’s BigDecimal really noticeably slower than Cobol’s? The benchmark in this article compares Java’s floats with Java’s BigDecimals, not Java’s BigDecimals with Cobol’s fixed-point arithmetic. Uniquesoft’s article also didn’t come to a conclusion: they were migrating from mainframes to commodity x86 hardware.

                                                                                          I thought the primary use of Cobol was legacy banking and business data applications, not rocket engines and physics simulations (unlike Fortran), so why does the performance of fixed-point numbers matter so much?

                                                                                          Python (and for that matter Java) do not have fixed point built in and COBOL does. In order to get Python to do fixed point I needed to import the Decimal module.

                                                                                          Does this really matter? Moving features from the core language to the standard library is usually considered a good thing.
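For context, a quick sketch of what the import buys you: binary floats vs `decimal.Decimal`, with `quantize` doing the fixed-point rounding that a COBOL `PIC` clause would declare (the price/tax figures are made up):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Binary floats accumulate representation error on decimal amounts:
print(0.1 + 0.2)  # 0.30000000000000004

# Decimal keeps exact decimal digits and rounds explicitly:
price = Decimal("19.99")
tax = (price * Decimal("0.0825")).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(price + tax)  # 21.64
```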

                                                                                          1. 3

                                                                                            In a related part of the article, it’s mentioned that the systems in question are dealing with millions/billions of ops/second. Any Decimal operation in Python requires at least 2 dict lookups, because you have to pierce the instance attribute dict and the class attribute dict to get to most operations.

                                                                                            That’s not to say you shouldn’t measure things, but I wouldn’t be surprised at order of magnitude differences here.
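In that spirit, a rough sketch of the per-operation gap (absolute numbers vary by machine and Python version; this is not a rigorous benchmark, just an illustration of the relative cost):

```python
# Compare one multiplication on floats vs Decimals, repeated many times.
import timeit

f = timeit.timeit("a * b", setup="a, b = 1.1, 2.2", number=200_000)
d = timeit.timeit(
    "a * b",
    setup="from decimal import Decimal; a, b = Decimal('1.1'), Decimal('2.2')",
    number=200_000,
)
print(f"float: {f:.4f}s  Decimal: {d:.4f}s  ratio: {d / f:.1f}x")
```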

                                                                                            At $WORK, we’ve had to deal with similar sorts of math-related issues (order-dependent calcs and global rounding state make things pretty hard). What we ended up doing was isolating the math so we could work on the surrounding code. So you have doMathStuff(blackBoxData). This cleared the way for refactoring and cleanup around it.

                                                                                            I don’t know the COBOL FFI story, but if I were in charge of handling some legacy COBOL transition I would likely go a similar route. Ultimately the math is important, but a small/isolated part. So replacing the top layer of these applications with more modern languages (and the tooling that comes with them), while avoiding the math stuff, would get us some very nice advantages.

                                                                                            By the end, if you’ve isolated just the math, you could have a bunch of in-process COBOL VMs just running calculations, and your other tooling calling out to it via some pointers or w/e. And by that point you now have a stable foundation to actually replace with some custom C code or the like to replicate fixed point logic. Plus infrastructure to let you make your systems a bit more decentralized.
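The seam described above might look something like this; `do_math_stuff` and `legacy_calc` are hypothetical names, and the body is a stand-in for whatever the isolated COBOL runtime (or later, custom C code) would compute:

```python
# Isolate order-dependent, rounding-sensitive math behind one function
# boundary so the surrounding code can be refactored freely.
from decimal import Decimal, ROUND_HALF_UP

def legacy_calc(amounts: list[Decimal]) -> Decimal:
    # Stand-in for the black-box math. Note the *intermediate* rounding
    # after each addition -- exactly the kind of order-dependent detail
    # that must not leak into the modernized layer.
    total = Decimal("0")
    for a in amounts:
        total = (total + a).quantize(Decimal("0.01"),
                                     rounding=ROUND_HALF_UP)
    return total

def do_math_stuff(black_box_data: list[str]) -> str:
    # The only entry point the modernized top layer ever calls.
    return str(legacy_calc([Decimal(x) for x in black_box_data]))

print(do_math_stuff(["1.005", "2.004"]))  # → 3.01
```

With the callers touching only `do_math_stuff`, the body can later be swapped for an FFI call into an embedded COBOL VM without changing anything upstream.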

                                                                                            I bet the undercurrent is a lot of the COBOL machines are running tasks that are hard to parallelize simply because the tooling isn’t present. Hence the need for speed/one machine to run millions of ops a second.

                                                                                          1. 14

                                                                                            This idea comes up from time to time. It’s an old idea. Here are two rms articles that address it.

                                                                                            https://www.gnu.org/licenses/hessla.html

                                                                                            https://www.gnu.org/philosophy/programs-must-not-limit-freedom-to-run.html

                                                                                            Basically: if you’re evil enough to do evil stuff, violating copyright is something you won’t think is very evil at all. So even without the argument about how impossible it is to define evil, a copyright-based license isn’t going to stop anyone from doing evil.

                                                                                            1. 5

                                                                                              I don’t think this holds up in countries with established rule of law. It’s easy to forget that you can sue the government in court in the US and then the government will stop (at least most of the time). It’s just the overton window has shifted so much that we only think of “evil” in terms of things that don’t happen in this day and age, when terrible things are happening all the time and continue to be enabled by technology.

                                                                                              If there’s anything that unites most people, it’s the fear of having all their assets frozen. And the spectrum of evil stops way before “evil mastermind with 1000 offshore accounts and 20 fake identities”.

                                                                                              1. 4

                                                                                                I agree. Julian Sanchez made this point about the NSA/CIA recently: the bulk of their abuses of power inside the USA are legal, or at least fall into a legal grey area where the right person in the chain of command said that they were legal.

                                                                                                Career government officials tend to have a habit of following most rules of the organizations they inhabit, but may do a lot of shady things that don’t obviously violate those rules.

                                                                                                1. 1

                                                                                                  I think I should have read my own links. rms also argues that such restrictions on use based on copyright are likely unenforceable. I don’t recall ever reading about a case where someone violated a license’s conditions on usage (e.g. using Java in a nuclear reactor) and was thus found to be infringing copyright. Has that happened?

                                                                                                  1. 1

                                                                                                    Also: trying to sue the US government for copyright infringement because they used some software to facilitate torture (for example) doesn’t seem to me like a fruitful approach. Maybe with some optimism something could be done about human rights abuses in the US, but going the copyright infringement path doesn’t seem likely to work.

                                                                                                  2. 2

                                                                                                    Glad to hear it’s been addressed already by gnu. I was thinking along similar lines, like “Eh, I see the problem, but I don’t think giving up freedom zero is the answer.” Of course, I don’t have a good solution either, other than a better more democratic government with well-informed citizens and a functioning justice system.

                                                                                                  1. 2

                                                                                                    I wonder what the shortest path is to get from something like this (a lisp environment) to “be able to run GCC”

                                                                                                    I know in the past Pascal had a sort of mini-language that you could implement in order to get a stage 0 compiler and end up with the full system, but nowadays I bet most of the useful tooling has so much platform-specific gunk that it would be hard to wade through

                                                                                                    1. 4

                                                                                                      I wonder what the shortest path is to get from something like this (a lisp environment) to “be able to run GCC”

                                                                                                      There is a lot involved in doing this. The tcc compiler can build old versions of gcc which can build new versions of gcc. But much of that requires supporting tooling, like a working libc, binutils, m4, makefiles, a shell, configure scripts.

                                                                                                      We can only do so much in an alien lisp based operating system like this, perhaps the ultimate goal would be to build a POSIX OS like sortix in which to build a GNU/linux distro.

                                                                                                      I know in the past Pascal had a sort of mini-language that you could implement in order to get a stage 0 compiler and end up with the full system

                                                                                                      I’d like to track that down! good concept.

                                                                                                      1. 1

                                                                                                        The details are a bit tricky but I believe the Pascal-P system was the core of this idea (also with some virtual machine stuff involved)

                                                                                                      2. 1

                                                                                                        Check out the bootstrapping page for ideas.

                                                                                                      1. 10

                                                                                                        I live in South Korea. After Fukushima, the government decided to phase out nuclear energy. Just two years into this policy, the proportion of coal in the energy mix, in both absolute and relative terms, has skyrocketed. This means there is no hope South Korea will meet its Paris Agreement obligations.

                                                                                                        I do nuclear energy evangelism in my spare time. So far it doesn’t seem effective, which makes me depressed. People seem to care more about irrational fear of nuclear energy than global warming.

                                                                                                        1. 10

                                                                                                          I was a pretty big nuclear energy proponent for a lot of reasons, but seeing a lot of stats on solar in particular in terms of pricing makes me feel like the political uphill battle of defending nuclear doesn’t feel worth it relative to advocating for large scale rollouts of renewables.

                                                                                                          If you can find spaces accepting of nuclear (whatever happened to thorium anyways) that’s great, of course. At this point I no longer try to argue against the 100% renewable positions though. Perhaps with enough willpower they can happen

                                                                                                          1. 2

                                                                                                            I think it’s a bit different in South Korea. South Korea has continued to build nuclear power until recently and there was no cessation at all. South Korean coal consumption increased sharply (more than 10%) in 2017, almost entirely due to nuclear phaseout. I am not especially in favor of nuclear energy, I just think nuclear energy is the most feasible option (yes, more than solar) against coal in South Korea.

                                                                                                        1. 10

                                                                                                          This is the same tired argument in favor of static typing that you see in every blog. The problem is that while the arguments sound convincing on paper, there appears to be a serious lack of empirical evidence to support many of the benefits ascribed to the approach. Empiricism is a critical aspect of the scientific method because it’s the only way to separate ideas that work from those that don’t.

                                                                                                          An empirical approach would be to start by studying real world open source projects written in different languages. Studying many projects helps average out differences in factors such as developer skill, so if particular languages have a measurable impact it should be visible statistically. If we see empirical evidence that projects written in certain types of languages consistently perform better in a particular area, such as reduction in defects, we can then make a hypothesis as to why that is.

                                                                                                          For example, if there were statistical evidence to indicate that using Haskell reduces defects, a hypothesis could be made that the Haskell type system plays a role here. That hypothesis could then be further tested, and that would tell us whether it’s correct or not. This is pretty much the opposite of what happens in discussions about static typing, however, and it’s a case of putting the cart before the horse in my opinion.

                                                                                                          The common rebuttal is that it’s just too hard to make such studies, but I’ve never found that to be convincing myself. If showing the benefits is truly that difficult, that implies that static typing is not a dominant factor. One large scale study of GitHub projects fails to show a significant impact overall, and shows no impact for functional languages. At the end of the day it’s entirely possible that the choice of language in general is eclipsed by factors such as skill of the programmers, development practices, and so on.

                                                                                                          I think it’s important to explore different approaches until such time when we have concrete evidence that one approach is strictly superior to others. Otherwise, we risk repeating the OOP hype when the whole industry jumped on it as the one true way to write software.

                                                                                                          1. 6

                                                                                                            One large scale study of GitHub projects fails to show a significant impact overall, and shows no impact for functional languages.

                                                                                                            That is not the language used by the authors of the paper:

                                                                                                            The data indicates functional languages are better than procedural languages; it suggests that strong typing is better than weak typing; that static typing is better than dynamic; and that managed memory usage is better than un-managed.

                                                                                                            1. 2

                                                                                                              Look at the actual results in the paper as opposed to the language.

                                                                                                            2. 3

                                                                                              Anecdote, but TypeScript exists purely to add static types to an existing language. It has near-universal appeal among those who have tried it, and in my experience an entire class of errors disappeared overnight while having almost no cost at all apart from the one-time transition cost.

                                                                                                              Meanwhile “taking all projects will average things out” is unlikely to work well. Language differences are rarely just about types, and different languages have different open source communities with different skill levels and expectations

                                                                                                              1. 3

                                                                                                As much as I like empiricism and the “there’s not actually that much difference” hypothesis, that article has flaws. In particular, it has sloppy categorization, e.g. classifying Bitcoin as “TypeScript”. Also, some of its conclusions set off my “wait, what” meter, such as Ruby being much safer than Python and TypeScript being the safest language of all.

                                                                                                                1. 3

                                                                                                                  The study has many flaws, and by no means does it provide any definitive answers. I linked it as an example of people trying to approach this problem empirically. My main point is that this work needs to be done before we can meaningfully discuss the impacts of different languages and programming styles. Absent empirical evidence we’re stuck relying on our own anecdotal experiences, and we have to be intellectually honest in that regard.

                                                                                                                2. 2

                                                                                                                  That link doesn’t seem to be working. Is this the same study?: http://web.cs.ucdavis.edu/~filkov/papers/lang_github.pdf

                                                                                                                  I think you make very good points (even though I currently have a preference for static types). I’d love to see more empirical evidence.

                                                                                                                  1. 1

                                                                                                                    Thanks, and that is the same study. It’s far from perfect, but I do think the general idea behind it is on the right track.

                                                                                                                    1. 2

                                                                                                                      I only skimmed the study, but doesn’t it actually show a small positive effect for functional languages? From the study:

                                                                                                                      Result 2: There is a small but significant relationship between language class and defects. Functional languages have a smaller relationship to defects than either procedural or scripting languages.

                                                                                                                      I realise that overall language had a small effect on defect rate, and they noted that it could be due to factors like the kind of people attracted to a particular language, rather than language itself.

                                                                                                                      1. 4

                                                                                                                        The results listed show a small positive effect for imperative languages, and no effect among functional ones. In fact, Clojure and Erlang appear to do better than Haskell and Scala pretty much across the board:

                                                                                                                        lang      bug fixes   lines of code changed
                                                                                                                        Clojure       6,022     163
                                                                                                                        Erlang        8,129   1,970
                                                                                                                        Haskell      10,362     508
                                                                                                                        Scala        12,950     836
                                                                                                                        
                                                                                                                        defective commits model
                                                                                                                        Clojure  −0.29 (0.05)∗∗∗
                                                                                                                        Erlang   −0.00 (0.05)
                                                                                                                        Haskell  −0.23 (0.06)∗∗∗
                                                                                                                        Scala    −0.28 (0.05)∗∗∗
                                                                                                                        
                                                                                                                        memory related errors
                                                                                                                        Scala    −0.41 (0.18)∗      0.73 (0.25)∗∗    −0.16 (0.22)     −0.91 (0.19)∗∗∗
                                                                                                                        Clojure  −1.16 (0.27)∗∗∗    0.10 (0.30)      −0.69 (0.26)∗∗   −0.53 (0.19)∗∗
                                                                                                                        Erlang   −0.53 (0.23)∗      0.76 (0.29)∗∗     0.73 (0.22)∗∗∗   0.65 (0.17)∗∗∗
                                                                                                                        Haskell  −0.22 (0.20)      −0.17 (0.32)     −0.31 (0.26)     −0.38 (0.19)
                                                                                                                        

                                                                                                                        The study further goes to caution against overestimating the impact of the language:

                                                                                                                        One should take care not to overestimate the impact of language on defects. While these relationships are statistically significant, the effects are quite small. In the analysis of deviance table above we see that activity in a project accounts for the majority of explained deviance. Note that all variables are significant, that is, all of the factors above account for some of the variance in the number of defective commits. The next closest predictor, which accounts for less than one percent of the total deviance, is language.

                                                                                                                        This goes back to the original point that it’s premature to single out static typing as the one defining feature of a language.

                                                                                                                1. 1

                                                                                                                  That’s a pretty interesting way to describe dynamic programming. I would probably have wrapped this up somehow so that the caller isn’t having to carry around a reference to the cache (instead having a (function, cache) pair that knows how to operate on things properly), but the image of the function operating directly on the user’s array is fun.
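A minimal sketch of that wrapping in Python: a closure bundles the function with its cache, so callers never see or carry the cache themselves (`functools.lru_cache` in the standard library packages the same idea):

```python
# Bundle a function with its memoization cache in a closure, so the
# caller deals with a plain function rather than a (function, cache) pair.
def memoized(fn):
    cache = {}  # the "(function, cache) pair" lives inside this closure
    def wrapper(n):
        if n not in cache:
            cache[n] = fn(n)
        return cache[n]
    return wrapper

@memoized
def fib(n):
    # Naive recursion would be exponential; memoization makes it linear.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))  # → 2880067194370816120
```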

                                                                                                                  1. 23

                                                                                                                    Basically what the web was before the recent trend of “minimalism”, where links and buttons look like text, all text is light gray on a light gray background, nothing works without JavaScript enabled, etc.

                                                                                                                    If they have to call this “new” trend “brutalism”, why not. I’d call that common sense.

                                                                                                                    1. 6

                                                                                                                      the first computer programs with mainstream success were word processors.

                                                                                                                      The early web was filled with “huge images to do designs that HTML doesn’t support”, and Flash existed for a reason. People have always tried to lay stuff out in different ways

                                                                                                                      The brutalist web might have always existed in a certain subset of the web, but it stopped being the web ever since image tags and tables were a thing.

                                                                                                                      1. 5

                                                                                                                        It’s the only way to sell it. And if re-branding is the price, I’d take it.

                                                                                                                        1. 6

                                                                                                                          Problem is, people will create the same design but build it with 10 layers of CSS and JavaScript that slow it down.

                                                                                                                      1. 4

                                                                                                                        I remember in college a classmate was a big openSUSE advocate, so I worked in that system for a while. It felt very different from the Ubuntu world, and I almost never hear of them in general chatter. Good to see they’re still going strong.

                                                                                                                        1. 3

                                                                                                                          I’ve used openSUSE extensively and think it’s an excellent distribution. It’s also one of the few high quality distributions that still has KDE as a first class citizen rather than an afterthought, with significant testing going into the KDE workspace.

                                                                                                                          In the past, software.opensuse.org combined with their one-click-install tool in YaST made it easy to get modern or uncommon software installed.

                                                                                                                          I think one of the reasons openSUSE doesn’t get featured a lot is because they are the smaller player in the enterprise field (compared to Red Hat) and are eclipsed by Ubuntu in the hobbyist / personal use space.

                                                                                                                          1. 1

                                                                                                                            I have it on good authority from a consultancy gig that it’s big in Germany, especially in the enterprise.

                                                                                                                            I was also told this is, at least in part, because of very long support times for old releases. Which is fine for enterprise, but can lead to interesting situations when upgrades would be in order.

                                                                                                                          2. 2

                                                                                                                            I used openSUSE on a pet server for a while. The Zypper package manager was very convenient in terms of insight into which security updates were needed and whether a reboot would be required before the update. I later changed to CentOS because the hosting only supported that, and it felt like a step backwards. (I have been a longtime Red Hat/Fedora user.)

                                                                                                                            1. 1

                                                                                                                              Personally, I’ve never been able to get into OpenSUSE.

                                                                                                                            1. 1

                                                                                                                              Isn’t Efail an issue because PGP doesn’t consider HTML to be a thing in the first place? Character escaping isn’t exactly a new technology (though of course these things are hard). It just feels like the way PGP works is really hacky and not great.

                                                                                                                              In an alternate universe, PGP emails have to escape HTML characters, and if the escaping isn’t present (e.g. a raw < shows up anywhere in the email) the email isn’t rendered.
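                                                                                                                              As a rough sketch of that escape-then-verify idea (the function and the policy are hypothetical, not how PGP or any real mail client works):

```python
import html

def safe_render(decrypted: str) -> str:
    # Hypothetical policy: refuse to render decrypted content that
    # still contains raw HTML metacharacters, on the assumption that
    # legitimate senders escaped everything before encrypting.
    if "<" in decrypted or ">" in decrypted:
        raise ValueError("raw HTML in decrypted content; refusing to render")
    return decrypted

# The sending side would escape before encrypting:
body = html.escape("exfiltrate via <img src=attacker> if 5 < 6")
print(safe_render(body))  # escaped text passes; a raw "<" would be rejected
```

Under this scheme an Efail-style injected image tag would arrive unescaped and simply never be rendered.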

                                                                                                                              1. 1

                                                                                                                                Is it easy to target the Erlang VM? I’m really surprised at the number of languages targeting this machine, given that it doesn’t seem to have nearly the same execution model as more classical languages (compared to something like the JVM).

                                                                                                                                1. 1

                                                                                                                                  Not so easy, no. However, it’s not so different from the JVM at a conceptual level, as it’s also a bytecode interpreter with JIT-ish features. Erlang has an unusual feature, a per-process heap, specifically designed to enable soft-realtime guarantees. This is (IMO) the best introduction to the BEAM internals: https://happi.github.io/theBeamBook/ and here is a general JVM-to-BEAM comparison, albeit a few years old now: http://ds.cs.ut.ee/courses/course-files/To303nis%20Pool%20.pdf

                                                                                                                                1. 2

                                                                                                                                  As terrible as this is, I bet the company didn’t lose a single sale from it. That’s part of why IoT is so horrible: there is just no reason to build a secure system when the general public won’t care.

                                                                                                                                  1. 1

                                                                                                                                    The devil’s advocate position is that the lock is still roughly as secure as some random Master Lock that kids use on lockers.

                                                                                                                                    Most locks exist more as signposting and to prevent errant access. You definitely don’t want to be using this to protect against motivated attackers… but that was true even without these exploits?

                                                                                                                                    That being said, the random Master Lock at least requires someone to fiddle with it physically to get it open.

                                                                                                                                    1. 2

                                                                                                                                      I think the main difference is that non-IoT locks actually require some effort to unlock. If these IoT locks take over, someone will just make an app that automatically scans the area for vulnerable devices and lets you open them with a button press.

                                                                                                                                      1. 1

                                                                                                                                        Yeah, this is a very real possibility. I’m a strong believer in the gradient of security, but the idea of just walking down the street and being able to unlock all the doors is very scary.

                                                                                                                                        (Also: why does this even need to be on the internet? We made electronic devices before Bluetooth Low Energy; it really feels like we should be able to build a lot of this stuff in an offline way.)

                                                                                                                                      2. 1

                                                                                                                                        2018/06/16: Tapplock took the API down after pressure, because it was exposing personal data covered by the GDPR.

                                                                                                                                        That’s why I actually like the GDPR. I bet that before the law came into force, the vendor would not have reacted at all. Now they face a huge fine and, most of all, are obliged by law to disclose the potential breach of customer data.

                                                                                                                                    1. 1

                                                                                                                                      These UIs look so nice! There’s something about this aesthetic that really comes off as clean.

                                                                                                                                      Maybe the lack of color prevents bad color choices.

                                                                                                                                      1. 2

                                                                                                                                        Reminds me of 80s Mac games - they had a tastefulness and crispness in graphics that other home computers at the time lacked, helped by the monochrome yet high-res constraints.

                                                                                                                                      1. 1

                                                                                                                                        Does anyone know of a good introduction to property based testing?

                                                                                                                                        1. 1

                                                                                                                                          At what level? The lectures by John Hughes, the QuickCheck author, are usually a good basic introduction.

                                                                                                                                          1. 1

                                                                                                                                            http://propertesting.com/ is a nice one, targeted at Erlang.

                                                                                                                                            1. 1

                                                                                                                                              The blog for the Hypothesis Python library has a lot of great articles about how to use this stuff in “enterprise-y” software.

                                                                                                                                              To be honest, it was way more convincing to me than most other articles as to the utility of property-based testing for higher-level applications.
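                                                                                                                                              For anyone who wants the flavor before picking a library: the core idea is to state an invariant and check it against many generated inputs instead of a few hand-picked examples. A minimal library-free sketch (the run-length-encoding functions are invented purely for illustration):

```python
import random

def run_length_encode(s):
    """Toy function under test: RLE as (char, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# The property: decode(encode(s)) == s for ANY string s.
# Instead of hand-picking test cases, generate hundreds of random ones.
random.seed(0)
for _ in range(500):
    s = "".join(random.choice("ab") for _ in range(random.randrange(20)))
    assert run_length_decode(run_length_encode(s)) == s
```

Libraries like QuickCheck, PropEr, and Hypothesis add the important parts this sketch lacks: structured input generators and automatic shrinking of failing inputs to a minimal counterexample.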