1. 21

    I detest paying for software except when it occupies certain specialized cases or represents something more akin to a work of art, such as the video game Portal.

    I detest this attitude. He probably also uses an ad blocker and complains about how companies sell his personal information. You can’t be an open source advocate if you detest supporting the engineers that build open source software.

    But only when it’s on sale.

    I’m literally disgusted.

    1. 8

      It’s reasonable to disagree with the quote about paying for software. But how on earth does this defense of the advertising industry come in?

      Certainly it’s possible to be an open source advocate and use an ad blocker and oppose the selling of personal information.

      1. 2

        Certainly. Actually, I would describe myself in that way. But you can’t believe that, and also believe you’re entitled to free-as-in-beer software. Especially the high-quality “just works” software the author describes. It’s a contradiction.

        Alternative revenue streams like advertising exist to power products people won’t pay for. I don’t know many software engineers who want to put advertising in their products; rather, they have to in order to avoid losing money. That’s why I happily pay for quality software like Dash and Working Copy, and donate to open source projects.

        1. 1

          But you can’t believe that, and also believe you’re entitled to free-as-in-beer software.

          I don’t get that sort of vibe from this article. He doesn’t seem to be entitled at all.

      2. 4

        “free as in free beer”!

        1. 1

          I can’t afford to have a different attitude.

          1. 16

            Meh, what’s a factor of 10 when you’re still under a request per second? Does this actually matter?

            I’m curious whether any attempt was made to contact the Bing team before the change & post.

            1. 3

              Seems unnecessarily frequent. If you had multiple search engines checking in this often, it’d add up quickly.

            1. 4

              Wikipedia on ROP: https://en.wikipedia.org/wiki/Return-oriented_programming

              Return-oriented programming (ROP) is a computer security exploit technique that allows an attacker to execute code in the presence of security defenses such as executable space protection and code signing.

              1. 5

                See Vala (programming language) on Wikipedia. It’s a language for GNOME that’s C#-like and compiles to C. I would recommend that the author ask a native speaker to proof-read the book. It’s quite readable, but the lack of articles is a bit distracting (I know this is hard).

                1. 2

                  Also worth noting is that it compiles to C with GObject at the center of its object system. That has its benefits, like quite easy interfacing with dynamic languages, but for me GObject is too crazy. Maybe I’m prejudiced, but it’s like painfully manual C++, although more dynamic. At that point, for me it would be a better idea to just write C++.

                  However, Vala hides all this, so mostly one sees the good parts.

                  1. 1

                    Yes, but I don’t want to throw technical details at readers first, without even writing a simple hello-world program.

                    Yes. Hence I submitted the link over here. Someone who has good experience with Vala can take a look.

                  1. 2

                    Regarding the IDE side of things, it seems there would be some value to having the parser depend on the previous parse. E.g. if the user is editing in one “block”, cut that block out and parse it individually in some way, so unbalanced parentheses or string literals don’t affect the whole file.
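                    A minimal sketch of that block-cutting idea, in Python for concreteness (the chunking heuristic of starting a new chunk at every non-indented line, and the name parse_chunks, are made up for illustration):

```python
import ast

def parse_chunks(src):
    """Parse each top-level block independently, so a syntax error
    in one block doesn't invalidate the parse of the whole file."""
    chunks, cur = [], []
    for line in src.splitlines():
        # Heuristic: a non-indented, non-blank line starts a new block.
        if line and not line[0].isspace() and cur:
            chunks.append("\n".join(cur))
            cur = []
        cur.append(line)
    if cur:
        chunks.append("\n".join(cur))
    results = []
    for chunk in chunks:
        try:
            results.append(ast.parse(chunk))
        except SyntaxError as err:
            results.append(err)  # damage is localized to this chunk
    return results

src = "def f():\n    return 1\n\ndef g(:\n    pass\n\nx = 3\n"
broken = [isinstance(r, SyntaxError) for r in parse_chunks(src)]
print(broken)  # only the middle block fails to parse
```

                    A block containing a multi-line string that starts lines at column zero would confuse this heuristic, of course; a real implementation would want to lean on the previous successful parse, as suggested above.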

                    (Why is this tagged haskell?)

                    1. 1

                      I’ve seen parsers implemented as monoids, where you cut the source at semicolons or matched braces or whatever is your chosen chunk delimiter. Seems to work pretty well.

                      While your suggestion of depending on previous successful parses sounds like a bunch of work, I can see where it would be very powerful. You could find where the previous source is the same and incrementally parse changes and that kind of thing.

                      Also, I think it’s tagged haskell because the blog post uses Haskell syntax? Probably not enough reason for the tag though.

                      1. 5

                        This seems like a step in the right direction, with a more accessible user interface.

                        What does “released” mean, though? How do I upgrade my existing nix install? nix-channel --update ; nix-env -i nix doesn’t get me 2.0.

                        1. 4

                          Did some more digging. I am on nixpkgs-unstable (not on NixOS though). But it seems nixpkgs-unstable didn’t get the 2.0-release of nix yet, either. There’s nixUnstable though, which is at version 2.0pre5968_a6c0b773: https://hydra.nixos.org/build/69873027.

                          Digging further, it’s just not in nixpkgs yet: https://github.com/NixOS/nixpkgs/commits/master/pkgs/tools/package-management/nix. And the relevant PR for switching nixpkgs’s nix to 2.0: https://github.com/NixOS/nixpkgs/pull/34636

                          (And as an aside, this kind of need to dig to figure out the answer to a pretty straightforward question is typical for my nix experience – it always ends up making sense in some way, but you need so much knowledge and have to dig so deep to find out how/what/why. git grep in a nixpkgs clone seems part of the required tool set for a nix user.)

                          1. 1

                            Yeah, I think there have been great strides on documentation over the past year, but there’s a long distance still to go.

                          2. 2

                            I’ll keep talking to myself here. Following a suggestion from the announcement post on news.ycombinator, I got the new nix:

                            $ git clone https://github.com/nixos/nixpkgs
                            $ cd nixpkgs
                            $ git checkout origin/nix-2.0
                            $ nix-env -i $(nix-build --no-out-link . -A nix)
                            

                            Which… doesn’t seem to know about nixpkgs? At least nix search doesn’t find anything. And nix log nixpkgs.hello doesn’t find nixpkgs.

                            1. 1

                              Hey, that was my suggestion, so I perhaps should also mention that nix log nixpkgs.hello depends on you having something like NIX_PATH=nixpkgs=/path/to/your/nixpkgs/checkout.

                            2. 2

                              Nix ≠ NixOS. But in NixOS, Nix 2.0 will be present in the 18.03 release, due out in March this year. But you can also freely upgrade to their “unstable” channel (I forget the proper name, though), which has all these nice things, with cutting-edge versions available.

                            1. 0

                              Yet, this issue is still open? https://github.com/NixOS/nixpkgs/issues/18995

                              1. 2

                                This is a new release of nix the package manager, not nixpkgs the repository of packages. It doesn’t seem like that bug is a bug in the package managing part, or is it?

                                1. 2

                                  I was responding to this question:

                                  have you ever used a Nix as a sole package/environment manager, or NixOS as a Linux distribution (production, desktop…)? Do you have any stories or opinions related to it?

                                  I stopped using nixpkgs and went back to pkgsrc because of that particular issue.

                              1. 4

                                They should have used canonical S-expressions instead of JSON: they are simpler to parse & emit; they are better suited to handling encryption; and they readily handle binary data.

                                It’s a matter of taste, but I also think that they’re a lot more attractive:

                                (request
                                 (using ietf.org/rfc/smap-core ietf.org/rfc/smap-mail)
                                 (method-calls
                                  (method1 ((arg1 arg1data) (arg2 (arg2data))) "#1")
                                  (method2 ((arg1 arg1data)) "#2")
                                  (method3 () "#3")))
                                

                                vs.:

                                {
                                  "using": [ "ietf.org/rfc/jmap-core", "ietf.org/rfc/jmap-mail" ],
                                  "methodCalls": [
                                    ["method1", {"arg1": "arg1data", "arg2": "arg2data"}, "#1"],
                                    ["method2", {"arg1": "arg1data"}, "#2"],
                                    ["method3", {}, "#3"]
                                  ]
                                }
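
                                To back up the “simpler to parse” point: a workable reader for the textual form above fits in about a dozen lines of Python. (A toy sketch: it handles parens, quoted strings and bare atoms only, not the length-prefixed binary tokens of the canonical form.)

```python
import re

def parse_sexp(src):
    """Read one S-expression into nested Python lists of strings."""
    # Tokens: parens, double-quoted strings, bare atoms.
    tokens = re.findall(r'\(|\)|"[^"]*"|[^\s()"]+', src)
    def read(pos):
        if tokens[pos] == '(':
            out, pos = [], pos + 1
            while tokens[pos] != ')':
                node, pos = read(pos)
                out.append(node)
            return out, pos + 1          # skip the ')'
        return tokens[pos].strip('"'), pos + 1
    return read(0)[0]

print(parse_sexp('(method3 () "#3")'))   # ['method3', [], '#3']
```

                                A JSON parser is not hard either, but it additionally needs escape sequences, a number grammar, and the true/false/null literals.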
                                
                                1. 2

                                  Any particular reason not to go for ... (method1 (arg1data arg2data) "#1") ...?

                                  Then, the attractiveness thing sort of gets lost when you go for actual canonical S-expressions with binary data, doesn’t it? No more looking at the raw expressions in a text editor.

                                  And is JSON really that hard to parse?

                                  1. 1

                                    Any particular reason not to go for ... (method1 (arg1data arg2data) "#1") ...?

                                    I was just following the original style directly. Certainly a more common way to write that in a Lisp would be the way you indicated.

                                    Then, the attractiveness thing sort of gets lost when you go for actual canonical S-expressions with binary data, doesn’t it? No more looking at the raw expressions in a text editor.

                                    I dunno, this looks pretty good to me:

                                    (cert
                                         (issuer (hash sha1 |TLCgPLFlGTzgUbcaYLW8kGTEnUk=|))
                                         (subject (hash sha1 |Ve1L/7MqiJcj+LSa/l10fl3tuTQ=|))
                                         …
                                         (not-before "1998-03-01_12:42:17")
                                         (not-after "2012-01-01_00:00:00"))
                                    

                                    The only bits which are binary are the hashes, and the rest of the expression is fine. The whole thing can be edited in a text editor, if necessary.

                                    And is JSON really that hard to parse?

                                    No, not really — but it’s still more complex than S-expressions.

                                1. 10

                                  I, too, first thought that this bar was for site expenses. I think it wouldn’t hurt to make the “Adopt Lobsters Emoji” text visible, at least on desktop, as right now it’s just a number within the progress bar.

                                  As for making it hideable, I don’t really get the purpose of this proposal — the bar takes less space than a single story. In fact, this very thread takes more space on the front page than the element it proposes to collapse, and unlike the bar, this thread doesn’t even give the warm glow.

                                  As much as I hate the obscure UI elements that obstruct and slow down my UX when browsing the different sites (especially as they may pop in and out), I have absolutely zero objection against this tiny bar on the front page here, which is implemented as static HTML/CSS in less than 400 characters. In fact, I do object to getting it bloated with all the logic that the hiding would require.

                                  1. 8

                                    It’s certainly not tiny, and while it’s not that large, it is by far the heaviest element on the front page.

                                    I definitely support, in decreasing order of preference:

                                    • Getting rid of it
                                    • Making it hideable directly (rather than requiring users to block parts of the page)
                                    • Making it smaller and less contrasty to reduce visual weight
                                    1. 4

                                      That’s a good point about it being the visually heaviest element on the page - and for such a light, text-only site, it really stands out. (I made a similar point a while ago about a different feature.) I’ve taken most of the color out of the progress bar and reset it to the default font size so it fits in a little more smoothly.

                                      1. 1

                                        Thank you, it’s much better now.

                                  1. 2

                                    There is not actually anything specific to NixOS in this article; you can follow along fully anywhere that has plain nix installed.

                                    That said, I’m not convinced of baking development tools like hindent and hlint into a per-project nix expression. I’d leave nix to do the building only. Maybe I’m just not disciplined enough, but I’m sure I’d find myself running vim from a non-nix-shell terminal and wondering why the tools are missing.

                                    1. 3

                                      This was a lot of fun, well done on the interactive article. I wish the solver-assisted final version would point out some deduction you could have made when you err; as it is, making a mistake is needlessly frustrating. Edit: turns out the known cells are marked a subtle (to my eyes) red.

                                      It’s also quite easy to get it spinning in seemingly clear situations, such as when you’ve uncovered nothing but a few isolated numbers. (Try it: ask for help right from the start, and spread those over the board. Here it gets very slow starting at 5 or so uncovered squares.) It should not be too hard to modify the AI to only permute the squares that neighbor uncovered ones, and treat all the other squares as interchangeable.
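
                                      That frontier restriction can be sketched in Python (hypothetical function names; this ignores the global mine count, which a real solver would also use): every constraint only mentions covered cells adjacent to a revealed number, so it suffices to enumerate assignments over those.

```python
from itertools import product

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def deduce(revealed, covered):
    """revealed: {cell: adjacent-mine count}; covered: set of covered cells.
    Returns (safe, mined): cells provably safe / provably mined."""
    # Every constraint only involves covered cells next to a number,
    # so enumerate mine assignments over this frontier only.
    frontier = sorted({c for r in revealed for c in neighbors(r) if c in covered})
    consistent = []
    for bits in product((0, 1), repeat=len(frontier)):
        mines = {c for c, b in zip(frontier, bits) if b}
        if all(len(neighbors(r) & mines) == n for r, n in revealed.items()):
            consistent.append(mines)
    safe = {c for c in frontier if all(c not in m for m in consistent)}
    mined = {c for c in frontier if all(c in m for m in consistent)}
    return safe, mined

safe, mined = deduce({(0, 0): 3}, {(0, 1), (1, 0), (1, 1)})
print(sorted(mined))  # all three neighbours are forced mines
```

                                      Now the exponential enumeration is over the frontier only, which stays small in exactly the isolated-numbers situations described above.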

                                      I’d dispute the claim that this is more fun than classic Minesweeper, though! Nothing wrong with a bit of twitch. And if it’s the puzzling you’re going for, it will be hard to beat a good hand-crafted minesweeper puzzle, such as https://www.gmpuzzles.com/blog/2017/12/minesweeper-john-bulten/ or (shameless plug) https://maybepuzzles.files.wordpress.com/2016/05/mines.png.

                                      1. 4

                                        Hand-crafting doesn’t scale! When I wanted minesweeper puzzles (but I was OK with small ones), I implemented a brute-force solver and a very primitive pattern-based solver, and then ran them in a loop: generate a field; if nothing can be opened or marked, open a random empty cell next to the already-opened ones (if there are any); if pattern-matching allows doing something, do it; else let the human try. That actually produced quite interesting (small) puzzles.

                                      1. 4

                                        Nice work, some thoughts:

                                        • Print line number where assertion failed
                                              • Way to compare doubles, possibly with an optional precision
                                        • Way to compare blocks of memory
                                        • Consider renaming to snow.h
                                        1. 3

                                          I really would have liked to print the line number where the assertion fails, but I’m not sure if that’s possible. Because of the use of macros, everything ends up on the same line after the preprocessor, so __LINE__ will be the same for everything. If you know of a way to fix that, I’d love to hear it. (The "in example.c:files" message was originally supposed to be "in example.c:<line number>")

                                                More kinds of asserts are a good idea, and so is renaming the header - the thing was under the temporary name “testfw” until right before I made this post.

                                          1. 2

                                            Looks neat! I feel that the line number of the end-of-block would still be useful, but don’t quite see how to word that without seeming incorrect.

                                            1. 2

                                                    It’s not just the end of the it block the error occurs in; it’s the end of the entire invocation of the describe macro. In the example.c file, for example, __LINE__ inside any of those it blocks will, as of the linked commit, be 62.

                                        1. 4

                                          At a glance it seems okay, but I guarantee those colour choices will look like crap on a light background. For most (all?) testing frameworks I’ve used that output in colour, I always have to find the “no colour” option or else I can’t read it.

                                          1. 2

                                                  I suppose I sort of consider it the user’s responsibility to have configured a color scheme where most colors are readable. However, it would be both easy and a good idea to make the color scheme configurable (at least from the source code), and I should probably add an option to output without colors (and enable that option by default when the output is not a TTY).

                                            1. 3

                                              I use solarized light, and nearly everyone these days uses some form of dark colour scheme. The output from the Catch2 testing framework, for example, is mostly unreadable with the colour choices. In other cases, I’ve run across similar problems.

                                              If you’re going to offer colour output, I think you need to have an option to turn it off. (And I see that you’ve added #ifndefs around them.) If/when this ever gets a main function to manage test suites (most serious ones do), don’t forget the --color=no option.

                                              1. 2

                                                I added support for theming first because that was very easy to add.

                                                      I just pushed a commit to add support for --no-color (which also disables color when stdout is not a TTY and such): https://github.com/mortie/snow/commit/c41d869c613a3a587279c6f833f74c609cb3bbf5

                                                The commit after that adds support for the NO_COLOR environment variable mentioned by @mulander.

                                              2. 3

                                                @jcs created http://no-color.org/ to propagate a consistent option to disable colors.

                                                1. 2

                                                  Looks like I get to be the first software to support NO_COLOR on that list :)

                                              3. 1

                                                      I’ve always wanted a terminal which would automatically correct colors based on contrast, or at least use a separate color scheme for each default background color.

                                                      It should not be that hard; maybe I could add a PoC using suckless’s st to my overly long TODO list…

                                                1. 1

                                                  It’s actually quite readable in black on white. Though I agree with the general sentiment, and it’s probably quite a bit worse on a yellowish background.

                                                1. 2

                                                  This game is great. The computer destroyed me though (predictably since it will play perfectly every time through a brute force search). I imagine it would be much better to play against another (less than perfect) human.

                                                  1. 3

                                                    I wrote a multiplayer version of Quinto about 6 years ago, with a couple of rewrites since. An online demo is available at http://quinto-demo.jeremyevans.net/ if you can find another person to play with. Source code is at https://github.com/jeremyevans/quinto if you want to run your own server.

                                                    1. 2

                                                        That’s amazing! Small world. I’d love to see your CoffeeScript source code, but couldn’t find it in the repo - is it intentionally kept secret, or is that just an accident? Either way, great game choice, and awesome project!

                                                      1. 2

                                                        The code was originally written in CoffeeScript+Node. The server was rewritten in Go, and then later rewritten in Ruby. At some point, I stopped using CoffeeScript on the front end and just started editing the resulting Javascript file directly. All of the information is in the repository if you look in the history: https://github.com/jeremyevans/quinto/tree/7ad48e43f76c1a9a847d5a677a8f11c69c9fa5bc

                                                    2. 2

                                                      There’s potential for beating a computer that plays the highest scoring move each time. You can play to avoid setting up long parallel plays, and it may be worth saving 5s and 0s since those can extend words of length greater than 1.

                                                    1. 1

                                                      The README at https://github.com/pwdless/cierge seems a better introduction, including details on how to deploy with docker; it runs on ASP.NET Core.

                                                      No mention of whether this is used in production anywhere, unfortunately.

                                                      1. 6

                                                        Fascinating read, including the RAM heating aside!

                                                        I’m curious about the implications of that racy ORQ stack probe though. If that’s actually not a no-op due to reading and writing back memory, isn’t that still a likely cause of even more obscure bugs? Say for a concurrent program sharing its stack space between multiple threads. Could the GCC probe be done in a safer (or more obviously safe) way?

                                                        EDIT: The LKML thread goes into the details a bit further: https://lkml.org/lkml/2017/11/10/188

                                                        1. 3

                                                          It shouldn’t cause any bugs: the mitigation works by writing to each page that’s beyond your current end of stack, i.e. uninitialized memory, up to the amount of stack your function needs. It’s trying to hit the end-of-stack guard page. Lots of good details in this stack clash exploit write-up.

                                                          1. 2

                                                            Say for a concurrent program sharing its stack space between multiple threads

                                                            If I understand right, it’s okay because it’s probing beyond the end of the stack. I’m not allowed to use a region of my stack beyond the current stack frame for anything (on this thread or any other) without invoking UB. With the stack protection scheme in use there, it must be mandatory to have guard pages at the ends of stacks, so the end of a stack is never close enough to another data structure to be in danger.

                                                          1. 2

                                                            Why is this a surprise? Function calls will always have slight overhead, because of the indirection of a jump. That’s why inlining is a thing.

                                                            1. 10

                                                              The surprising thing is I would expect {} to be syntactic sugar for dict().

                                                              1. 3

                                                                Really? Since {} is recognized by the parser, I’d expect to generate the opcode directly as part of the bytecode compilation pass.

                                                                Frankly, I’m surprised that dict() doesn’t compile to an opcode, since it’s easy to inline. I guess doing that would take away the ability to rebind what dict() does in the local scope (but I don’t know why anyone would care besides that).
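
                                                                The difference shows up directly in the bytecode via the dis module (CPython; exact opcode names vary by version, e.g. CALL_FUNCTION became CALL in 3.11):

```python
import dis

# The literal compiles straight to a single BUILD_MAP opcode...
literal_ops = [i.opname for i in dis.get_instructions(lambda: {})]
# ...while dict() must look up the global name and make a call.
call_ops = [i.opname for i in dis.get_instructions(lambda: dict())]

print("BUILD_MAP" in literal_ops)   # True
print("LOAD_GLOBAL" in call_ops)    # True
```

                                                                Which also explains why rebinding dict (as the next comment demonstrates) affects dict() but can never affect {}.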

                                                                1. 7

                                                                  You can even rebind it globally.

                                                                  $ python3
                                                                  >>> dict({1: 2})
                                                                  {1: 2}
                                                                  >>> import builtins
                                                                  >>> builtins.dict = list
                                                                  >>> dict({1: 2})
                                                                  [1]
                                                                  

                                                                  EDIT: use builtins instead of __builtins__, compare https://stackoverflow.com/questions/11181519/python-whats-the-difference-between-builtin-and-builtins

                                                            1. 1

                                                              This is not well argued. I tripped over the claim that all four “big problems” are enabled by the unlimitedly powerful JavaScript VM, while that point is hardly relevant to anything but “cryptojacking”. Also, I’m missing any “jump the shark” moment. The article does summarize the rotten state of advertising nicely, though.

                                                              1. 6

                                                                The malvertising problem also largely comes from having an unlimitedly powerful VM (it doesn’t have to be JavaScript; ActionScript and Java were historically just as bad *). Having a VM available makes exploiting browser bugs to get drive-by software installation far easier.

                                                                • The APIs provided to that VM represent a colossal attack surface.
                                                                • Programs running on the VM can do stuff like making and freeing big allocations in specific patterns to massage the heap layout, or run timing attacks to discern the address of some data or code in the browser process.
                                                                • The VM itself has bugs. It does a lot and the optimisations are really complicated and hard to get right. You see CVEs sometimes like “a buggy optimisation caused an ArrayBuffer and a function pointer to occupy the same space, which can be escalated into remote code execution”.

                                                                There are still bugs sometimes in things like parsers for complex formats like videos which are exploitable without making use of the VM, but fewer of them. It’s harder to write exploits without the VM anyway because your most powerful tool for setting up the process internals the way your exploit code wants them is gone.

                                                                Browsers are WAY harder to RCE with no JavaScript.

                                                                (* if not worse, but I suspect mainly because the implementations were really bad rather than because those PLs are fundamentally worse than JavaScript in some way.)