1. 1

    Funny coincidence that I would be reading this now, having first heard of LAPACK just the other day; it’s one of those Fortran programs still hanging around.

    1. 1

      Super hyped for WebAssembly. I’m so sick of JS-heavy websites slowing to a crawl on my phone, but as a dev I love the flexibility that having a single-page app and a separate backend gives me.

      1. 1

        I love the flexibility that having a single-page app and a separate backend gives me.

        In a previous life, I did some web development (never as my primary job, but I had to write a web-based frontend to a bigger project because there was no one else to do it). I’m sure I didn’t invent the concept, but the app had a generic JSON-RPC interface and the web app was simply a JSON-RPC client running in the browser.

        Made testing much simpler, and since the app had to be accessible to a lot of different things (not just browsers), the universal RPC interface really helped.
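
        To make the shape of that concrete: a minimal sketch of such a client outside the browser (the endpoint and method name here are made up, not the original app’s):

        package main

        import (
            "bytes"
            "encoding/json"
            "fmt"
            "io"
            "net/http"
        )

        func main() {
            // One JSON-RPC call over HTTP. Any client that can speak JSON,
            // whether a browser, a script or another service, talks to the
            // same backend in exactly the same way.
            payload, _ := json.Marshal(map[string]interface{}{
                "jsonrpc": "2.0",
                "method":  "user.list", // hypothetical method name
                "params":  []interface{}{},
                "id":      1,
            })
            resp, err := http.Post("http://localhost:8080/rpc", "application/json", bytes.NewReader(payload))
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()
            body, _ := io.ReadAll(resp.Body)
            fmt.Println(string(body)) // e.g. {"jsonrpc":"2.0","result":[...],"id":1}
        }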

        (This was long, long ago. I left that job in…2009? 2010? Long before WebAssembly was anything more than a “wouldn’t it be neat if…”.)

        1. 2

          I’m sure I didn’t invent the concept, but the app had a generic JSON-RPC interface and the web app was simply a JSON-RPC client running in the browser.

          Yeah, that’s the idea with pump.io too, but you were 2 years ahead of that. :-)

          Mastodon is pretty much that too – so much so that Pleroma can just implement the API and run the Mastodon frontend as an alternative frontend.

      1. 1

        A bit orthogonal but still related to the overall theme: are the performance/portability benefits of JITs still enough to justify targeting bytecode platforms for server applications? The JVM was originally built for safely running applets delivered over the web, and the CLR seems to exist to allow portability across the wide range of hardware running Windows.

        Are fancy VMs and tiered JITs necessary in a world where we can cross-compile to most platforms we’d want to run on? Languages/runtimes like Go and Haskell have backends that target a wide range of architectures, and there’s no need to get intimately familiar with things like the JVM instruction set and how to write “JIT-friendly code”.

        1. 2

          IBM i (née OS/400) has an interesting solution where software is delivered compiled for a virtual machine and then translated from that bytecode to native code on installation. I would like to see that model expand to other platforms as well.

          1. 2

            OS/400 is just…so different in so many ways. So many really interesting ideas, but it’s very different from just about every other operating system out there.

            I wish there were a way I could run it at home, just as a hobbyist.

        1. 2

          Jef Raskin is not exactly an unsung genius anymore, but I’d still say “undersung”. Something of a tragic hero in the classical sense.

          The Canon Cat is a little rare and expensive when you can find it, but a Swyftcard replica for your vintage Apple II is pretty affordable still. Also the Cat software (written in Forth!) is fully emulated in the MAME suite. If you find this stuff interesting, I’d strongly suggest at least reading Raskin’s book The Humane Interface. Wikipedia’s page on Archy has some interesting tidbits too.

          Apart from some good ideas on human interface design, there is a broader lesson to be learned about the real political and economic reasons why technical projects succeed or fail. Smalltalk, Oberon, Lisp machines, BeOS, NeXT, the Newton… all worth study.

          Lobsters, what are your favorite coulda-been systems? I’d especially love to hear from the old-timers among us.

          1. 1

            Mainstream PowerPC Amiga. :’(

          1. 3

            I love how it claims to be based on “reason” and still resorts to “bs” to get things done. Political commentary as code.

            1. 3

              Reason is a syntax for OCaml. BuckleScript is the compiler that bridges it to JS.

              1. 1

                Yes. I just like the way the names play out.

            1. 6

              So what went wrong?

              Nothing!

              Worse is better. Billions of dollars have gone into the cryptocurrency system and we have a new dotcom boom.

              Most of the money will be spent on the wrong things, but there is orders of magnitude more money going into actually solving the issues displayed by Bitcoin than there would have been had it not escaped the lab. There are orders of magnitude more enthusiasts coming up with ideas around it than there would have been if this stuff had been trapped in academia for a decade.

              1. 5

                We have to accept that mistakes will be made and just hope that none of them are crippling. After all, if developers are cursing our design decisions years from now, that means we succeeded!

                A lovely way of framing it.

                1. 4

                  Reminds me of a Stroustrup quote: “There are only two kinds of programming languages: the ones people complain about and the ones nobody uses.”

                  1. 1

                    And that reminds me of Alan Kay saying the Macintosh (I believe it was) was the first computer worth criticizing.

                1. 1

                  I am always happy to see alternatives to gitflow (which I think is overly complicated for many projects). This is a nice idea, but perhaps it works best with specific types of development. A few thoughts:

                  If two developers are working on separate features that affect the same piece of code

                  if (featureA) {
                    // changes from developer A
                  } else if (featureB) {
                    // changes from developer B
                  } else {
                    // old code
                  }
                  

                  How do you rename or remove a variable as part of refactoring, in a way that makes all four combinations of feature flags still work?

                  I guess it will depend on the types of changes, and on how the developers communicate, whether this is easier than feature branches (where conflict resolution is deterministic at the end, and diff works), or whether it leads to feature-flag spaghetti and time wasted adapting your changes to run alongside another developer’s changes, which might end up never being merged.

                  Also, what if you add a file? Then I guess your build system will need feature flags. What if your build system uses globbing and you remove or rename a file? Some changes can’t both be there and not be there.

                  1. 1

                    I know people are going to feel differently about this, but I lean heavily toward explicit being better than implicit, and toward keeping magic to a minimum, even if that means some redundant work. Redundant work can be verified and semi-automated to keep the explicit things up to date.

                    How do you rename or remove a variable as part of refactoring, in a way that makes all four combinations of feature flags still work?

                    A feature branch just postpones this question until the big-bang merge conflict. Being forced to do the work up front means you talk about it with the other people working in the same region of code.
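
                    Concretely, “doing the work up front” for a rename could look something like this hypothetical sketch (not anything from the article): keep the old name as an alias until both flags, and the code behind them, are gone.

                    package main

                    import (
                        "fmt"
                        "time"
                    )

                    // Hypothetical: developer A renames Timeout to RequestTimeout while
                    // developer B's flag-guarded code still reads the old name. Keeping
                    // both fields in sync means every combination of featureA/featureB
                    // keeps compiling and running until the flags are retired.
                    type Config struct {
                        RequestTimeout time.Duration // new name, used by the code behind featureA
                        Timeout        time.Duration // old name, kept until featureB's code is migrated
                    }

                    func main() {
                        cfg := Config{RequestTimeout: 5 * time.Second}
                        cfg.Timeout = cfg.RequestTimeout // keep the alias in sync during the transition
                        fmt.Println(cfg.Timeout)
                    }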

                    Like you say, the other side of the coin is that the feature branch might never be merged. Early merging optimizes for the happy path. But then again, if you merge your work early and discuss with others, it can’t remain in the twilight state of not merged, not discarded, which may improve coder efficiency.

                    Also, what if you add a file? Then I guess your build system will need feature flags.

                    Just adding a file shouldn’t affect anything unless some other file references it.

                    What if your build system uses globbing

                    Please don’t, especially if you then automatically siphon those files into some extension of the functionality.

                    Merge conflicts are annoying, but clean merges that have semantic conflicts are even worse.

                    Of course plugin systems are super useful – when they are user-accessible and used for deployment. But then the API would be well-defined, restricted and conservative. Probably the plugins would even live in separate repos, and the whole branch-vs-flag point is moot.

                    Testing plugin interactions is probably worth an article series of its own.

                  1. 4

                    First, to call itself a process could [simply] execute /proc/self/exe, which is a in-memory representation of the process.

                    There’s no such representation available as a file. /proc/self/exe is just a symlink to the executable that was used to create the process.

                    Because of that, it’s OK to overwrite the command’s arguments, including os.Args[0]. No harm will be made, as the executable is not read from the disk.

                    You can always start a process with whatever args[0] you like. No harm would be done.
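
                    For anyone curious, this is roughly what that re-exec trick boils down to (a stripped-down sketch of the general idea, assuming Linux; not the Moby code):

                    package main

                    import (
                        "fmt"
                        "os"
                        "os/exec"
                    )

                    func main() {
                        // The child recognises itself purely by the argv[0] we chose for it.
                        if os.Args[0] == "child" {
                            fmt.Println("running as the re-executed child")
                            return
                        }
                        cmd := &exec.Cmd{
                            Path:   "/proc/self/exe",  // re-exec whatever binary we are currently running
                            Args:   []string{"child"}, // argv[0] is arbitrary; the kernel does not care
                            Stdout: os.Stdout,
                            Stderr: os.Stderr,
                        }
                        if err := cmd.Run(); err != nil {
                            fmt.Fprintln(os.Stderr, "re-exec failed:", err)
                            os.Exit(1)
                        }
                    }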

                    1. 4

                      Although /proc/self/exe looks like a symbolic link, it behaves differently if you open it. It’s actually more like a hard link to the original file. You can rename or delete the original file, and still open it via /proc/self/exe.

                      1. -4

                        No harm will be made, as the executable is not read from the disk.

                        the executable is definitely read from the disk

                        Again, this was only possible because we are executing /proc/self/exe instead of loading the executable from disk again.

                        no

                        The kernel already has open file descriptors for all running processes, so the child process will be based on the in-memory representation of the parent.

                        no that’s not how it works, and file descriptors aren’t magic objects that cache all the data in memory

                        The executable could even be removed from the disk and the child would still be executed.

                        that’s because it won’t actually be removed if it’s still used, not because there’s a copy in memory

                        <3 systems engineering blog posts written by people who didn’t take unix101

                        1. 12

                          Instead of telling people they are idiots, please use this opportunity to correct the mistakes they made. It’ll make you feel good, and it won’t make the others feel bad. Let’s prop everyone up, and not just sit there flexing muscles.

                          1. 3

                            Sorry for disappointing you :)

                            I got that (wrongly) from a code comment in Moby (please check my comment above) and didn’t check the facts.

                            1. 2

                              I’m not saying that the OP was correct, I’m just saying that:

                              /proc/self/exe is just a symlink to the executable

                              … is also not completely correct.

                          2. 3

                            Thanks for pointing out my mistakes! I just fixed the text.

                            I made some bad assumptions when I read this comment [1] from Docker and failed to validate it. Sorry.

                            By the way, is it just my bad English, or is that comment actually wrong as well?

                            [1] https://github.com/moby/moby/blob/48c3df015d3118b92032b7bdbf105b5e7617720d/pkg/reexec/command_linux.go#L18

                            1. 1

                              is that comment actually wrong as well?

                              I don’t think it’s strictly correct, but for the purpose of the code in question it is accurate. That is, /proc/self/exe points to the executable file that was used to launch “this” process - even if it has moved or been deleted - and this most likely matches the “in memory” image of the program executable; but I don’t believe that’s guaranteed.

                              If you want to test and make sure, try a program which opens its own executable for writing and trashes the contents, and then execute /proc/self/exe. I’m pretty sure you’ll find it crashes.

                              1. 3

                                but I don’t believe that’s guaranteed.

                                I think it’s guaranteed on local file systems as a consequence of other behavior. I don’t think you can open a file for writing when it’s executing – you should get ETXTBSY when you try to do that. That means that as long as you’re pointing at the original binary, nobody has modified it.
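
                                A quick way to check that claim (a throwaway sketch, assuming Linux and a local filesystem):

                                package main

                                import (
                                    "fmt"
                                    "os"
                                )

                                func main() {
                                    // Try to open our own running binary for writing; on a local
                                    // filesystem this should fail with "text file busy" (ETXTBSY).
                                    exe, err := os.Executable()
                                    if err != nil {
                                        panic(err)
                                    }
                                    if f, err := os.OpenFile(exe, os.O_WRONLY, 0); err != nil {
                                        fmt.Println("open for writing failed:", err)
                                    } else {
                                        f.Close()
                                        fmt.Println("unexpectedly opened", exe, "for writing")
                                    }
                                }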

                                I don’t think that holds on NFS, though.

                                1. 1

                                  If you want to test and make sure, try a program which opens its own executable for writing and trashes the contents, and then execute /proc/self/exe. I’m pretty sure you’ll find it crashes

                                  Actually, scratch that. You won’t be able to write to the executable, since you’ll get ETXTBSY when you try to open it. So, for pretty much all intents and purposes, the comment is correct.

                                  1. 1

                                    Interesting. Thank you for your insights.

                                    In order to satisfy my curiosity, I created this small program [1] that re-executes /proc/self/exe in an infinite loop and prints the result of readlink.

                                    When I run the program and then delete its binary (i.e., the binary that /proc/self/exe points to), the program keeps successfully re-executing itself. The only difference is that /proc/self/exe now points to /my/path/proc (deleted).

                                    [1] https://gist.github.com/bertinatto/5769867b5e838a773b38e57d2fd5ce13
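
                                    The core of it is just a loop like this (an approximate sketch, not the exact code in the gist):

                                    package main

                                    import (
                                        "fmt"
                                        "os"
                                        "syscall"
                                        "time"
                                    )

                                    func main() {
                                        // Show where /proc/self/exe currently points.
                                        target, err := os.Readlink("/proc/self/exe")
                                        if err != nil {
                                            fmt.Fprintln(os.Stderr, "readlink:", err)
                                        } else {
                                            fmt.Println("/proc/self/exe ->", target)
                                        }
                                        time.Sleep(time.Second) // slow things down so the output is readable

                                        // Replace this process with a fresh copy of the same binary,
                                        // which then does the same thing again, forever.
                                        if err := syscall.Exec("/proc/self/exe", os.Args, os.Environ()); err != nil {
                                            fmt.Fprintln(os.Stderr, "exec:", err)
                                            os.Exit(1)
                                        }
                                    }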

                              1. 4

                                Never heard of ATS, and it’s not linked to in the post. Here it is:

                                http://www.ats-lang.org

                                ATS is a statically typed programming language that unifies implementation with formal specification. It is equipped with a highly expressive type system rooted in the framework Applied Type System, which gives the language its name. In particular, both dependent types and linear types are available in ATS.

                                1. 1

                                  I really like this, especially that release isn’t a branch in its own right (except infrequently and temporarily).

                                  Trying to keep the deploy history in the source code repo is an unnecessary hassle; just keep the commit hashes in some deploy tool somewhere instead. Or maybe that lives in the git repo for your infrastructure-as-code.

                                  Maintaining release becomes a hassle in particular when you need to revert to an older candidate, because there are definitely ways to screw that up. Not a huge hassle, but a hassle.

                                  No, wait, I especially like the condemnation of feature branches. But I knew that. :-)

                                  1. 1

                                    Personally I’m looking at Android, and it sounds like Electrum is the ticket right now; it’s the old, tried-and-tested wallet that has added this feature.

                                    Samourai is in early release, and GreenBits is some multisig client/server hybrid thingie that I would need to learn more about.

                                    1. 2

                                      Electrum is the furthest ahead right now. Its Segwit addresses use the new bech32 format, which I don’t think many (any?) other wallets know about yet.

                                      1. 1

                                        Oh goodie, somebody who seems to know something.

                                        Bech32 is something I first heard of when reading this reddit post. Am I understanding correctly that it is something that can be used on the blockchain itself, so it saves bytes in the transaction? What are the security implications?

                                        It seems that if you use them, you would need explicit support in the sender, so I hope the Electrum client offers both a bc* and a 3* address to send to?

                                        1. 2

                                          There’s good info here.

                                          There are three types of wallet addresses:

                                          • Legacy P2PKH addresses, which begin with the number 1
                                            • “Pay to public key hash”
                                            • Supported everywhere.
                                            • Doesn’t allow larger blocks via segwit.
                                          • Older Segwit P2SH type starting with the number 3
                                            • “Pay to script hash”, sort of a hack which proxies through a script.
                                            • Supported everywhere, but it’s only cheaper if making segwit-to-segwit transactions.
                                            • Allows larger blocks but isn’t the most space-efficient.
                                          • Newer Segwit Bech32 type starting with the string bc1
                                            • Can only receive payments from wallets that support Bech32.
                                            • Smallest transaction sizes (and therefore the cheapest fees).

                                          1. 1

                                            I installed Electrum and initialized a SegWit wallet. It only uses bc* addresses. That sounds pretty limited – I’m expecting most places not to understand those. :-(

                                    1. 9

                                      Don’t miss the sister spec:

                                      http://tenver.drone-ver.org

                                      Your product is always version 10.

                                      Who else uses TenVer?

                                      • Microsoft: Windows 10
                                      • Apple: OSX

                                      Literally snorted.

                                      1. 6

                                        This is an interesting post.

                                        However, I’m not sure why IOTA being written in Java instead of C++ is a strike against it.

                                        1. 1

                                          Yeah, that’s just misguided. I would rather it be written in Java than in C++.

                                          And then there are accusations that the article is just written from ignorance and that every allegation is misguided.

                                          https://twitter.com/petertoddbtc/status/942206322023587840 (the replies to it)

                                          But then again, I don’t know if the accusations against the allegations are any better informed, or if they’re just cult members. It’s hard to tell, really.

                                          This is the most authoritative-and-level-headed-sounding response:

                                          https://www.reddit.com/r/Iota/comments/7kdpw8/a_troubling_article_regarding_iota/drdhnym/

                                        1. 3

                                          Scheme is a LISP.

                                          1. 0

                                            yup :)

                                            1. 0

                                              C++ is an ALGOL.

                                            1. 2

                                              I am still a bit confused by the Nix vs. Guix thing. Not that I am against having two similar projects per se, but I don’t know.

                                              1. 11

                                                Guix took parts of the Nix package manager and retooled them to use Scheme instead of the Nix language. This works because the Nix build process is based off a representation of the build instructions (“derivations”) and has nothing to do with the Nix language. Guix is also committed to being fully FOSS, whereas the Nix ecosystem will accept packages for any software we can legally package.

                                                1. 9

                                                  Also there is of course accumulated divergence as people with some particular idea happen to be in the one community and not the other.

                                                  Nix has NixOps and Disnix, but there still is no GuixOps.

                                                  On the other hand I believe the guix service definitions are richer, and guix has more importers from programming-language-specific package managers, but then on the third hand the tooling for Haskell and Node in particular is better in nix.

                                                  Nix supports OSX, FreeBSD and cygwin, guix supports the Hurd.

                                              1. 2

                                                … is anyone really doing that?

                                                Stop teaching other people to do this.

                                                If this actually happens in real life then yes, stop doing that.

                                                If you are deploying from git at all you obviously have some specific commit that is the thing you want to deploy. You don’t want deployment to be something that is almost what you asked for.

                                                1. 3

                                                  I wouldn’t be surprised if a lot of people were under the assumption that pull just downloads and checks out a branch – after all, this is what many people use it for during development.

                                                  1. 2

                                                      People really do do this. If you ever get bored, search for files named .git on Google; you’ll find a reasonable number of sites that not only use Git to deploy, but also accidentally serve the .git directory they deploy from as world-readable.

                                                    1. 1

                                                      Isn’t it nice that you can just git-clone them? 😉

                                                      edit: Ok, git clone seems not enough. Need to wget-mirror the .git directory, then git-checkout.

                                                      Here is a google search to find them.

                                                      1. 1

                                                          I think you need to run git update-server-info on a repo to make it clonable over dumb HTTP.

                                                    2. 1

                                                      Hi, yes.

                                                        I’m still doing that on an unsupported business app somewhere. It hasn’t gotten any updates in months (it’s probably shut down by now, too) but it’s deployed by doing a git pull from the remote and recompiling manually.

                                                      Quite horrible indeed.

                                                    1. 8

                                                        I can’t find anything that the SFLC has published about this, other than the timing of a blog post entitled A New Era for Free Software Non-Profits, wherein Moglen details changes in the IRS 501(c)3 organization determination process. Apparently, the IRS was using the same overstepping-its-bounds process to review FOSS projects’ Form 1023 submissions – the form used to file for 501(c)3 status – as it was using for Tea Party and other politically-right organizations. Projects trying to form their own non-profit entities were getting rejected amid very acerbic questioning. The SFLC seems then to have steered rejected clients towards the SFC and other, similar organizations. Then some IRS directors got fired amid that scandal, and apparently their replacements put in new processes that also made it a lot easier for FOSS project non-profits to get positive determination outcomes.

                                                      One paragraph at the end of that blog post is particularly relevant:

                                                      This arrangement is a clear advantage over the compromises between tax-deductibility and true organizational independence that we had to strike in the era of “condominiums” and “conservancies.” Such organizations will continue to serve good purposes for the software projects whose special conditions require them. But from now on, for the foreseeable future, every free software project that wants to govern itself in a secure, independent, tax-deductible federal charity can do so, while working with other organizations to get the asset management, fiscal administration, tax filing and regulatory compliance services that it needs from fiduciaries who are legally required to put its interests first.

                                                      I speculate that the SFLC may believe that the SFC is no longer good for business and wants to distance itself from its spawn for business reasons. Non-profits are still businesses.

                                                      Why do I think these events are related? The blog post was published on September 21, 2017. According to the USPTO case file, the cancellation petition was filed September 22, 2017. The post linked in this thread leads to SFC’s response.

                                                      It seems very, very strange that the SFLC wouldn’t try to resolve the process first without involving the USPTO. I think there are details to which we the public are not yet privy.

                                                      1. 3

                                                        The SFLC has responded.

                                                        http://softwarefreedom.org/blog/2017/nov/06/conservancy-stmt/

                                                          Some of the wording makes me feel that this is a continuation of Moglen’s displeasure with the SFC suing VMware. He wants to be friends with the Linux Foundation, and the Foundation wants to be friends with VMware, which is a sponsor.

                                                        1. 5

                                                          Ah, excellent. Bruce Perens captures the whole thing succinctly, yet with more details than I remembered or even heard about.

                                                          https://lwn.net/Articles/738109/

                                                          1. 2

                                                            The Software Freedom Conservancy generally recommends that if you need a short form for their name, you say “Conservancy” and not “SFC” since the latter is too easy to confuse with a great many other orgs.

                                                            1. 1

                                                              In this context, I think most people can separate SFC, SFLC and unrelated orgs with similar names. Or I’m just rationalizing being lazy. :-)

                                                              But yeah, that sounds like good advice.

                                                            2. 1

                                                              That’s a frustrating response, basically saying, ~“We tried to talk to them about this but they refused to engage for 2.5 years.” I wonder what the Conservancy’s riposte will be.

                                                              1. 2

                                                              Predictably, it was “[no they didn’t]”. Apparently (according to Conservancy) the Law Center approached them several times to discuss things that Conservancy thought were moot, so they didn’t meet, but the trademark issue wasn’t explicitly mentioned.

                                                                1. 1

                                                                  Communication is hard.

                                                                  And here we are.

                                                          1. 3

                                                            New features include:

                                                            • Installer iso (GuixSD) (previously only USB image)
                                                            • Friendlier and more helpful guix package (informs early about installed size, ambiguities etc.)
                                                            • Networked guix-daemon!

                                                            For info on how guix-daemon --listen (networked daemon) is used, check out https://guix-hpc.bordeaux.inria.fr/blog/2017/11/installing-guix-on-a-cluster/

                                                            1. 3

                                                              Of course, Bitcoin was trading at a significantly higher price of $1000 to DASH’s $11, so any percentage changes will always favour DASH.

                                                              NO! The exchange rate in isolation means NOTHING!

                                                              DASH has a smaller market cap, so an inflow of some absolute number of dollars will affect it more. But comparing exchange rates is completely meaningless.