1. 3

    I’m reminded of various applications such as Calca, Soulver, Numi, NaSC, Qalculate, etc. that offer similar functionality. However, most of them tend to use “standard” mathematical notation, precedence, and associativity, in ASCII text. Other software, such as Mathematica, lets you write and edit visual equations and then perform computations with them.

    1. 7

      Worth noting that s-expressions avoid a lot of legibility problems discussed in the article. If we look at the first example under the “providing immediate feedback” section where traditional notation looks like:

      50.04 + 34.57 + 43.22 / 3
      

      this would be expressed as:

      (+ 50.04 34.57 (/ 43.22 3))
      

      which would be hard to confuse with:

      (/ (+ 50.04 34.57 43.22) 3)
      

      A lot of people seem to have the impression that s-expressions are harder to read than traditional syntax, but I find the opposite to be the case. With s-expressions you have simple and predictable rules that remove a lot of mental overhead around figuring out what the code is doing.
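      As a sketch of how little machinery this takes, here’s a minimal s-expression calculator in Python (the names and token handling are my own invention, not taken from any particular Lisp):

```python
import operator

# binary building blocks; variadic application is handled in evaluate()
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def atom(tok):
    """A token is either a number or an operator symbol."""
    try:
        return float(tok)
    except ValueError:
        return tok

def parse(tokens):
    """Read one expression (number or parenthesised list) off the front."""
    tok = tokens.pop(0)
    if tok == '(':
        expr = []
        while tokens[0] != ')':
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the ')'
        return expr
    return atom(tok)

def evaluate(expr):
    if not isinstance(expr, list):
        return expr
    op, *args = expr
    result = evaluate(args[0])
    for a in args[1:]:  # fold the operator over the remaining arguments
        result = OPS[op](result, evaluate(a))
    return result

def calc(source):
    tokens = source.replace('(', ' ( ').replace(')', ' ) ').split()
    return evaluate(parse(tokens))
```

      The two expressions above are impossible to confuse: calc("(+ 50.04 34.57 (/ 43.22 3))") divides only 43.22 by 3, while calc("(/ (+ 50.04 34.57 43.22) 3)") divides the whole sum.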

      1. 2

        Similarly just having the same precedence and associativity for everything would give you an easy-to-predict and easy-to-read syntax. This way you gain terseness, but you have to get used to the associativity of whatever mechanism you’re using, whereas s-expressions (or *shudder* XML, etc) are more portable, but require you to explicitly state the tree with more characters.

        For example, right associative:

        50.04 + 34.57 + 43.22 / 3
        

        And for the sum of everything over three, it would be:

        (50.04 + 34.57 + 43.22) / 3
        

        This is the style that APL/J/K and various languages inspired by them tend to use (they also add different precedence for certain operations that take another operation as one of their inputs, such as fold). Many people use such languages as an enhanced calculator (there are plotting utilities made for them, etc). For example, in K, where division is % and assignment is ::

        force: (6.67e-11*mymass*collidingmass)%radius*radius
        yearlybill: 12*rent+electric+internet
        

        Or with functions, where / is fold:

        force:{[m1;m2;radius](6.67e-11*m1*m2)%radius*radius}
        yearlybill:{[monthlyutilities]12*+/monthlyutilities}
        
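        For illustration, here’s a toy Python sketch of that uniform-precedence, right-to-left evaluation (the flat token list is my simplification; real K also parses juxtaposition, monadic verbs, and so on):

```python
import operator

# K spells division %; every operator has the same precedence
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '%': operator.truediv}

def k_eval(tokens):
    """Evaluate a flat [value, op, value, op, value, ...] list,
    associating to the right, the way the APL/K family does."""
    tokens = list(tokens)
    result = tokens.pop()  # start from the rightmost value
    while tokens:
        op = tokens.pop()
        result = OPS[op](tokens.pop(), result)
    return result
```

        So k_eval([50.04, '+', 34.57, '+', 43.22, '%', 3]) divides first and then adds, and k_eval([12, '*', 800, '+', 60, '+', 40]) computes 12*(800+60+40), matching the yearlybill example.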
        1. 1

          Then you get the situation that 1 * 2 + 3 and 3 + 1 * 2 mean different things, which is horrible, because people will always assume that they don’t.

          I don’t know why people have such a problem with a + b + c / 3 meaning a + b + (c / 3). It’s just something you have to get used to; it’s not really that difficult, and there are much bigger problems that need solving. But if it’s really such a big deal, just make it a function: \frac{a + b + c}{3} in LaTeX is good enough for mathematicians, so frac(a + b + c, 3) should be good enough for programmers.

          1. 1

            Then you get the situation that 1 * 2 + 3 and 3 + 1 * 2 mean different things, which is horrible, because people will always assume that they don’t.

            I don’t know why people have such a problem with 1 * 2 + 3 and 3 + 1 * 2 meaning different things. It’s just something you have to get used to when using a different language, it’s not really that difficult and there are much bigger problems that need solving.

            1. 2

              The universal rules of mathematical expressions create a strong precedent. People expect them to hold. They get confused when they don’t. Even if they are arbitrary.

              I’m not aware of any language anywhere in all of programming or mathematics that uses different rules and has sustained any kind of popularity. Seems like a hard requirement to ever be successful in my experience.

              1. 1

                They aren’t “universal”. See my other comment. “Sustained any kind of popularity” is a vacuous statement: Forth is used extensively in embedded applications. Your calculator evaluates left to right, and yet you don’t struggle to translate from PEMDAS or whatever system you use.

                1. 2

                  They are absolutely universal. All mathematicians agree on the order of operations here.

                  1. 2

                    Funny, because every mathematician I’ve talked to, and listened to, about order ambiguity agrees with me and says you should put parentheses to disambiguate.

                    The reality is that because it is cultural, it doesn’t matter whether you have a solution to the problem if not everyone is using it. In my opinion, abandoning order of operations is much simpler: the order is arbitrary, needlessly convoluted, and doesn’t allow for the expansion of operators. You can make things abundantly clear by using Polish notation.

                    - / 2x 3y 1

                    Before you throw your arms up in frustration yes there are proofs done in this format, and they’re great.

                    1. 0

                      because it is cultural

                      Yeah, but it isn’t cultural. It’s universal, as I’ve explained.

                      1. 1

                        I suppose, if it is universal, then there are severe pedagogical deficiencies, which doesn’t surprise me terribly. It still would have been completely avoided with a simpler and clearer precedence system. It took me a while to realize that you were talking strictly about mathematicians, whereas I was talking about all people. Apologies for my poor communication.

              2. 1

                “Order of operations” has been an arbitrary curse on mathematics since its creation; different cultures don’t actually agree, and in addition it restricts the creation of new operators. I’m not particularly invested in left-to-right or right-to-left, but either would be much simpler than the random format we have now.

                1. 2

                  Cultures that don’t use ÷ and × often don’t write sentences left-to-right and pages top-to-bottom. They might not even use Arabic numerals.

                  I don’t see how it restricts the creation of new operators. Mathematicians seem to have no problem introducing new operators: ∧, ∨, →, ↔, dots, existing operators in circles and all sorts of silly new operators are used all over algebra without any real issue. If it’s not obvious from context, you put brackets in.

                  1. 1

                    What precedence does modulus have? Is it the same as division, or should it be done first, or last? If we had an order of precedence that could accommodate new operators, this question wouldn’t need to be asked and I wouldn’t have to use parentheses, which, let’s be honest, are a hack.

                    1. 1

                      Modulus isn’t a standard mathematical operator. But if you defined it, you could just say what its precedence is.

              3. 1

                wait are you using PEMDAS or BODMAS?

                1. 1

                  Same thing. Brackets = parentheses; multiplication and division are done at the same time, so their order is whatever sounds better when reading out the abbreviation. What synonym of exponent does ‘O’ stand for?

                  1. 1

                    Multiplication and division are not done at the same time. Orders, I believe. http://www.math.harvard.edu/~knill/pedagogy/ambiguity/

                    1. 1

                      Multiplication and division are always done at the same time (with left-associativity: a÷b÷c = (a÷b)÷c) in mathematics, and this carries over into programming languages that use * and / to emulate × and ÷.

                      2x/3y-1 is not well-defined notation. It’s not mathematics, because mathematics doesn’t use a slash in the middle of some linear text for division (it uses a horizontal line or ÷ depending on the context, although really depending on the level, because I haven’t seen anyone use ÷ since primary school), and it’s not any programming language I’m aware of either. Randomly writing down some text then claiming it’s ambiguous is pretty silly.

                      2 × x ÷ 3 × y - 1 is completely unambiguous, on the other hand: (((2 × x) ÷ 3) × y) - 1. Try putting it into google, or asking someone what 2 × 9 ÷ 3 × 2 - 1 is. Their answer is 11.

                      Mathematicians almost never use ÷ anyway, we write (2 x) / (3 y) where the line is horizontal (not possible on this platform as far as I can tell). But the same rule applies to addition and subtraction: 2 + x - 3 + y - 1 is universally agreed to be (((2 + x) - 3) + y) - 1.

                      Programming languages usually approximate ÷ and × with / and * for the sake of ASCII, so the same rules apply as with those operators. I’m not sure I know of any programming language where you can multiply variables by juxtaposition.
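                      Python is one concrete example: * and / share a single precedence level and associate left, and so do + and -, so the expressions above evaluate exactly as described:

```python
# * and / have equal precedence and left associativity
assert 2 * 9 / 3 * 2 - 1 == 11  # (((2*9)/3)*2) - 1
# the same rule applies to + and -
assert 2 + 9 - 3 + 2 - 1 == (((2 + 9) - 3) + 2) - 1
```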

                      I once saw a proposal that it should be based on whitespace: 1+x * 3+y would be (1 + x) * (3 + y), while 1 + x*3 + y would be 1 + x * 3 + y. I thought it was quite a cute proposal, if perhaps prone to error.
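                      That proposal is easy to prototype. Here’s a toy Python sketch (numbers only, no error handling; the splitting rules are my guess at the idea) where spaced operators bind looser than unspaced ones:

```python
import operator
import re

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def tight(chunk):
    """Evaluate an unspaced chunk like '2*3' left to right."""
    toks = re.split(r'([+\-*/])', chunk)
    result = float(toks[0])
    for op, val in zip(toks[1::2], toks[2::2]):
        result = OPS[op](result, float(val))
    return result

def ws_eval(expr):
    """Spaced operators bind looser than unspaced ones."""
    parts = re.split(r'\s([+\-*/])\s', expr)  # split on spaced ops first
    result = tight(parts[0])
    for op, chunk in zip(parts[1::2], parts[2::2]):
        result = OPS[op](result, tight(chunk))
    return result
```

                      With this, ws_eval("1+2 * 3+4") is (1+2)*(3+4) = 21, while ws_eval("1 + 2*3 + 4") is 1 + (2*3) + 4 = 11 — as noted, cute but clearly prone to error.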

                      1. 2

                        Americans use a slash in the middle of linear text to mean division. You clearly didn’t even read the article. Just because you can do multiplication and division from left to right doesn’t mean that’s what people do.

                        1. 1

                          Americans use a slash in the middle of linear text to mean division.

                          Don’t think so.

                          You clearly didn’t even read the article.

                          The article has a bunch of monospace ASCII.

                          Just because you can do multiplication and division from left to right doesn’t mean that’s what people do.

                          It’s what literally everybody in the entire world does.

          1. 15

            I’ve become more and more disillusioned with NixOS over the past couple of months. Packaging things that aren’t available, or even updating existing packages, has so many little undocumented gotchas that (I guess) they assume you’ll figure out from reading GitHub issues or random blog posts. It has actually stopped me working on a few different projects because it’s not worth figuring out how to package something.

            However, I don’t think I can go back to a traditional distro after tasting the stability and convenience of something like NixOS. Has anyone here tried both NixOS and GuixSD, or perhaps switched from one to the other?

            Guix seems so much better documented, from the brief read-through I’ve given it after seeing this. The docs just have so much detail.

            Also, I’d much rather learn a real language like Scheme for making packages than the rather incomprehensible (at least to me) language that Nix invented.

            What are the downsides of Guix that I just haven’t seen yet?

            1. 9

              Guix has fewer packages, because they have a smaller community. Being a GNU project, they attempt to limit the amount of non-free, or license-incompatible, software as much as possible: using linux-libre, nearly no potential ZFS support, no Intel microcode, etc. If your hardware depends on a binary blob, you might have to jump through several hoops to get it working. As of 2018-07-06, they don’t have LVM support.

              That said, guix seems far better thought out than nix. It does not rely on a particular init ecosystem (cough, systemd, cough). It has more features available without installing additional packages, for example: guix import instead of the myriad of pypi2nix, nix-generate-from-cpan, etc packages that are separately written; guix environment makes creating an isolated container as easy as its normal environment isolation; etc. And guix is most certainly better documented.

              If you’re comfortable packaging software yourself (and don’t mind doing so), some of these problems could be fixed. You can keep (or contribute to) a non-free guix repository (such as these, but those do not seem to be well maintained, nor will they probably be approved of). One could also use guix import to import from a local copy of nixpkgs (though such an import is imperfect, and might require manual maintenance), or run guix atop NixOS.

              Unfortunately, I needed a system that works with nearly-minimal hassle on my hardware, with my software, and that is what NixOS gave me. The nix language is quaint, and the reliance on bash and systemd rather annoying, but personally I can ignore that and use a working computer with a relatively nice environment management system.

              1. 2

                It does not rely on a particular init ecosystem

                You are referring to Guix, the package manager, here, right? Because, as far as I understand, GuixSD, the Linux distribution, does depend on https://www.gnu.org/software/shepherd/?

                1. 3

                  I was referring to the fact that neither Guix nor GuixSD rely on systemd. But you are correct, as best as I can tell GuixSD seems to rely on Shepherd.

                  Though maybe not all services rely on it? Some of them don’t seem to mention shepherd at all, but I can’t tell whether that means anything, because I’m not well versed in Guix scheme.

                  1. 1

                    https://github.com/guix-mirror/guix/blob/master/gnu/services/ssh.scm

                    Here’s one example that clearly refers to shepherd. Is there any reason to believe that shepherd is better than systemd?

                    1. 6

                      Three things, maybe:

                      • Shepherd doesn’t try to be more than an init system. Contrast with logind, which GNOME depends on and which is tied to systemd; elogind had to be forked and extracted from systemd, because otherwise GNOME would not work without systemd. I don’t know of any end-user applications that require shepherd to be the init system in any way that doesn’t resemble init-system / daemon-management usage.
                      • shepherd is also written in scheme, which means that Guix expressions can easily generate code per the user’s configuration for the shepherd file since you’re just going from scheme to scheme.
                      • I can’t remember if systemd can do this or not, but you can also run shepherd as a user to manage your user’s daemons (rather than the system-wide daemons). Convenient!
                      1. 1

                        I can’t remember if systemd can do this or not, but you can also run shepherd as a user to manage your user’s daemons

                        Yes, systemd can do that.

                        1. 1

                          I can’t remember if systemd can do this or not, but you can also run shepherd as a user to manage your user’s daemons

                          Systemd does have support for user services, without needing to start another daemon as your user.

                          1. 1

                            I should clarify that I meant being able to run one or more shepherd as a user being a feature :)

                        2. 5

                          Shepherd isn’t an ecosystem of things that come bundled together? It isn’t Linux specific? It doesn’t (yet) slowly overtake various other components of your system, such as udev? There are definitely reasons that I still believe that Shepherd is better than systemd.

                          However, nothing’s perfect. Upon further examination of the documentation, it does seem that you are correct regarding Guix’s dependence on Shepherd: namely, all services do currently depend on it.

                    2. 2

                      Thanks for that Guix-on-NixOS link. I actually installed GuixSD in a VM at work today and noticed there were quite a few packages missing that I would like to have, so that seems like a good way to get started making some new packages before I go all in on the OS.

                      1. 1

                        What is the status of Java, especially Maven dependencies of a project (which doesn’t seem to be solved in Nix yet)?

                    1. 14

                      I don’t buy it because the real protocol is what you read and write from the file, not that you can read and write files. And if the “file” is a directory, what do the filenames you read and write from/to it mean?

                      So is there really any difference between open(read("/net/clone")) and net_clone();? The author seems to say the former is more loosely coupled than the latter because the only methods are open and read on the noun that is the file…. but really, you are stating exactly the same thing as the “verb” approach (if anything, I’d argue it is more loosely typed than loosely coupled). If a new version wants to add a new operation, what’s the difference between making it a new file that returns some random data you must write code to interpret, and a new method that returns some data you must write code to use?

                      1. 24

                        So is there really any difference between open(read(”/net/clone”)) and net_clone();

                        Yes: The fact that you can write tools that know nothing about the /net protocol, and still do useful things. And the fact that these files live in a uniform, customizable namespace. You can use “/net/tcp/clone”, but you can also use “/net.home/tcp/clone”, which may very well be a completely different machine’s network stack. You can bind your own virtual network stack over /net, and have your tests run against it without sending any real network traffic. Or you can write your own network stack that handles roaming and reconnecting transparently, mount it over /net, and leave your programs none the wiser. This can be done without any special support in the kernel, because it’s all just files behind a file server.

                        The difference is that there are a huge number of tools you can write that do useful things with /net/clone that know nothing about what gets written to the /net/tcp/* files. And tools that weren’t intended to manipulate /net can still be used with it.

                        The way that rcpu (essentially, the Plan 9 equivalent of VNC/remote desktop/ssh) works is built around this. It is implemented as a 90-line shell script. It exports devices from your local machine, mounts them remotely, juggles around the namespace a bit, and suddenly all the programs that speak the devdraw protocol are drawing to your local screen instead of the remote machine’s devices.

                        1. 5

                          You argue better than I can, but I’ll add that the shell is a human-interactive environment, while C APIs are not. Having a layer that is human-interactive is neat for debugging and system inspection, though this is a somewhat weaker argument once you get Python bindings or some equivalent.

                          1. 1

                            I was reminded of this equivalent.

                          2. 1

                            But in OOP you can provide a “FileReader” or “DataProvider”, or just a FilePath that abstracts either where the file is or what you are reading from too. The simplest would be the net_clone function above just taking a char* file_path, but in an OOP language the char* or how we read from whatever the char* is can be abstracted too.

                            1. 2

                              Yes, but how do you swap it out from outside your code? The file system interface allows you to effectively do (to use some OOP jargon) dependency injection from outside of your program, without teaching any of your tools about what you’re injecting or how you need to wire it up. It’s all just names in a namespace.
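                              A loose Python analogue, with a symlink playing the role of Plan 9’s bind, and every name here invented for the example: the program only ever opens one fixed path, and what sits behind that path is swapped from outside it:

```python
import os

def read_status():
    # the program hard-codes only a name in its namespace
    with open('net/status') as f:
        return f.read().strip()

# two interchangeable "stacks", each just a directory of files
os.makedirs('realnet', exist_ok=True)
with open('realnet/status', 'w') as f:
    f.write('up')
os.makedirs('testnet', exist_ok=True)
with open('testnet/status', 'w') as f:
    f.write('mock')

# "bind" the test stack over the name, from outside the program
if os.path.lexists('net'):
    os.remove('net')
os.symlink('testnet', 'net')
```

                              After the bind, read_status() reports the mock stack’s answer, and the program itself was never taught anything about injection.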

                              1. 0

                                without teaching any of your tools about what you’re injecting or how you need to wire it up

                                LD_PRELOAD, JVM ClassPath…

                          3. 6

                            So is there really any difference between open(read(”/net/clone”)) and net_clone();?

                            Yes, there is. ”/net/clone” is data, while net_clone() is code.

                            1. 4

                              I don’t buy it because the real protocol is what you read and write from the file, not that you can read and write files

                              Yes - but the read()/write() layer allows you to do useful things without understanding that higher-level protocol.

                              It’s a similar situation to text-versus-binary file formats. Take some golang code for example. A file ‘foo.go’ has meaning at different levels of abstraction:

                              1. golang code requiring 1.10 compiler or higher (uses shifted index expression https://golang.org/doc/go1.10#language)
                              2. golang code
                              3. utf-8 encoded file
                              4. file

                              You can interact with ‘foo.go’ at any of these levels of abstraction. To compile it, you need to understand (1). To syntax-highlight it you only need (2). To do unicode-aware search and replace, you need only (3). To count the bytes, or move/delete/rename the file you only need (4).

                              The simpler interfaces don’t allow you to do all the things that the richer interfaces do, but having them there is really useful. A user doesn’t need to learn a new tool to rename the file, for example.
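                              A quick Python sketch of those levels (‘foo.go’ is created here just so the example is self-contained):

```python
# levels 1 and 2 (parsing or compiling the Go) would need a Go
# toolchain; the point is how far the simpler interfaces get you
with open('foo.go', 'w', encoding='utf-8') as f:
    f.write('package main\n')

raw = open('foo.go', 'rb').read()   # level 4: an opaque file of bytes
n_bytes = len(raw)                  # counting bytes needs no decoding
text = raw.decode('utf-8')          # level 3: utf-8 text
words = text.split()                # text-aware processing
```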

                              If you compare that to an IDE, it could perhaps store all the code in a database and expose operations on the code as high-level operations in the UI. This would allow various clever optimisations (e.g. all caller/callee relationships could be maintained and refactoring could be enhanced).

                              However, if the IDE developer failed to support regular expressions in the search and replace, you’re sunk. And if the IDE developer didn’t like command line tools, you’re sunk.

                              (Edit: this isn’t just one example. Similar affordances exist elsewhere. Text-based internet protocols can be debugged with ‘nc’ or ‘telnet’ in a pinch. HTTP proxies can assume that GET is idempotent and that various caching headers have their standard meanings, without understanding your JSON or XML payload at all.)

                            1. 6

                              The internet search experience suffered a setback when the major browsers abandoned the separate search box for the combined address/search box. Only Firefox retains this feature, where your default search engine is the first choice in a list.

                              In the days before AltaVista became better than Yahoo, and then Google crushed all other search options, there were meta-search engines that combined, filtered, and formatted results from several search engines of your choice. IIRC Magellan was one of these. I’ve toyed with the idea of reviving this idea for my own use. Google and Bing are pretty similar, but not perfectly similar, and provide different results depending on whether you are signed in or anonymous. DDG usually provides different enough results to be important. There’s a lot of room for innovation in meta-search.

                              Finally there are still all sorts of specialized search options. In this category I would start with Amazon and Wikipedia. There are also sites like noodle.com, specializing in education related searches.

                              1. 5

                                DuckDuckGo is my go-to search.

                                It is simple and doesn’t have the Google bloat, and those smart searches, like generating an MD5 hash in a search query or doing number-system conversions, are pretty cool.

                                1. 2

                                  DuckDuckGo owns; it’s my configured default search on all devices. When I need something specific from Google, I use the bang feature for Google, !g.

                                  1. 2

                                    I never knew that bang was available, my word. Is there a !b for Bing too? (Update: there is, wow)

                                  2. 0

                                    So essentially DDG has a great interface and is actually way more useful.

                                    1. 4

                                      Let’s be honest, though: the results are not as good as Google for many/most queries.

                                      1. 3

                                        I don’t know. I switched to DDG at home and I’ve always been able to find what I’m looking for. I still use Google at work so I’m able to compare and contrast. About the only place where Google is better (in my opinion) is in image search, and that may be due to how Google displays them vs. DDG.

                                        1. 4

                                          Here’s a concrete example. Let’s say I’m trying to remember the name of the project that integrates Rust with Elixir NIFs.

                                          First result for me for the query “elixir rust” on Google is the project in question: https://github.com/hansihe/rustler

                                          After scrolling through three pages of DDG results, that project doesn’t seem to be listed or referenced at all, and there are several Japanese and Chinese-language results despite the fact that I have my location set to “United States”. I will forgive all the results about guitar strings since DDG doesn’t have tracking data to determine that I’m probably not interested in those (although the usage of the word “rust” in those results is in the term “anti-rust” which seems like a bad result for my query).

                                          That query is admittedly obtuse, but that’s what I’ve become accustomed to using with Google. These results feel generally characteristic of my experience using DDG. I end up using the !g command a lot rather than trying to figure out how to reframe my query in a way that DDG will understand.

                                          1. 2

                                            I think you did that wrong. You were specifically interested in NIF but left that key word off. Even Lobsters search engine, which is often really off for me, gets to Rustler in the first search when I use these: elixir rust nif. Typing it into DDG like this gives me Rustler at Page 1, Result 2.

                                             Just remember these high-volume, low-cost engines are pretty dumb when not backed by a company the size of Google or Microsoft. You gotta tell them the words most likely to appear together. “NIF” was critical in that search. Also, remember that you can use quotes around a word if you know for sure it will appear, and a minus in front of one to eliminate bogus results. Put “site:” in front if you’re pretty sure which place or places you might have seen it. Another trick is thinking of other ways to say something that authors might use. These tricks from 1990s-early-2000s searching get me the easy finds I submit here.

                                            1. 0

                                               I disagree that “NIF” was essential to that query. There are a fair number of articles and forum posts on Google about the Rustler library. It’s one of the primary contexts in which those two languages would be discussed together. DDG has only one of those results as far as I can see. Why? Even if I wasn’t looking for Rustler specifically, I should see discussions of how those two languages can be integrated if I search for them together.

                                              1. 2

                                                There are a fair number of pages where Elixir and Rust will show up without Rustler, too. Especially all the posts about new languages. NIF is definitely a keyword because you’re wanting a NIF library specifically instead of a page about Rust and Elixir without NIF. It’s a credit to Google’s algorithms that it can make the extra connection to Rustler pushing it on the top.

                                                That doesn’t mean I expect it or any other search engine to be that smart. So, I still put every key word in to get consistently accurate results. Out of curiosity, I ran your keywords to see what it produces. The results on the top suck. DuckDuckGo is usually way better than that in my daily use. However, instead of three pages in, DuckDuckGo has Rustler on page 1, result 6. Takes about 1 second after hitting enter to get to it. Maybe your search was bad luck or something.

                                            2. 1

                                              I did exactly that search and found it at the 5th position.

                                              While “elixir rust github” put it at 1st position. Maybe you have some filters? I have it set to “All Regions”.

                                          2. 2

                                            Google has so many repeated results for me that I feel they have worse quality for most of my queries than ddg or startpage. Maybe I’ve done something wrong and gotten myself into a weird bubble, but these days I find myself using Google less and less.

                                            1. 1

                                              Guess so. I have been using it at uni for a long time, though, and gotten at least what I needed.

                                              But I admit that Google has more in its indexes.

                                        2. 5

                                          Searx is a fairly nice meta search engine.

                                          1. 4

                                            Finally there are still all sorts of specialized search options. In this category I would start with Amazon and Wikipedia.

                                            DuckDuckGo has a feature called “bangs” that lets you access them. Overview here. Even if not using DDG, their list might be a nice reference for what to include in a new search engine.

                                            1. 1

                                              the URL bar itself now performs a search when you put something that’s not a URL in it

                                              1. 1

                                                I thought that was clear. What I like about the old-style dedicated search box is that it is so easy to switch between search engines.

                                                1. 3

                                                  I believe that you can use multiple search engines in an omnibar by assigning each search engine a keyword, and typing that keyword (and then space) before your search.

                                                  1. 1

                                                    Or if you use DuckDuckGo, you can use !bangs to pivot to another search engine or something else.

                                                  2. 2

                                                    With keyword searching (a feature I first used in Opera, and which is definitely present in Firefox; I can’t speak to any other browsers), it’s “so easy” to switch between search engines—in fact, far easier than with a separate search box. I type “g nephropidae” to search Google, or “w nephropidae” for Wikipedia, “i nephropidae” for image search, or even “deb nephropidae” for Debian package search (there’s no results for that one).

                                                    1. 2

This is not at all obvious from the user experience. Without visual cues, much of the available functionality is effectively hidden. You must have either researched this yourself, been told about it, or stumbled upon it some other way. It also effectively requires you to memorize CLI-like commands, the exact opposite of what GUIs purport to offer. And adding new search engines? That’s non-obvious too.

                                                      1. 1

I use YubNub to get a large library of such keywords that is the same on every device.

                                                1. 3

For those unaware, https://startpage.com is excellent and has a great privacy policy. I actually prefer it to DuckDuckGo these days because I feel its default search results are of higher quality.

                                                  Reminds me a lot of the Google from 10-15 years ago.

                                                  1. 2

The default search quality is probably higher because they sometimes act as a Google proxy (offering privacy by sitting between you and Google).

                                                  1. 2

                                                    It took me a few moments to realise/remember that this is a chapter from Surely You’re Joking, Mr. Feynman! (despite the fact that it says so in the paragraphs near the top, buried inside somewhere).

                                                    1. 9

                                                      I am vaguely reminded of The Birth & Death of JavaScript.

                                                      1. 1

I found the question pretty interesting, but the discussion of the (hidden) solution too long; everything after point 3 seems to be a rehash of point 3.

State a timestamp in the future (distant enough for the email to reach everyone before then) and select a random public event at that time (or the first one to occur right after it).

I think this was suggested many times on Reddit, with a Bitcoin block hash as the public event.

Example: if it’s 9:10 now, write an email saying “take the Bitcoin block hash that appears after 9:25 and apply <transformation to get it to a random number between 1 and 20>” and send it now. Wait. Look at the hash.

Edit: actually, a lot of solutions of this form were suggested on Reddit, like solar activity or stock prices.
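The unspecified transformation in the example can be any deterministic map from the public value to 1–20; one common sketch is to hash the agreed-upon string and reduce it modulo 20. Here is a minimal Python version (the function name `draw_from_public_value` is made up for illustration):

```python
import hashlib

def draw_from_public_value(public_value: str, n: int = 20) -> int:
    """Deterministically map an agreed-upon public string
    (e.g. a future Bitcoin block hash) to a number in 1..n."""
    digest = hashlib.sha256(public_value.encode()).digest()
    return int.from_bytes(digest, "big") % n + 1
```

Because the hash is deterministic, everyone who receives the email gets the same number once the public value is known, yet nobody could have predicted it when the email was sent.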

                                                        1. 3

                                                          Instead of Bitcoin, solar activity, or stock prices, there also exists the NIST Randomness Beacon.

                                                        1. 26

I’ve talked to some people in and close to this industry, and it feels like we’re a good 15 years away from autonomous vehicles. The other major issue we’re not addressing is that these cars cannot be closed source like they are now. At a minimum, the industry needs to share with each other and use the same software or the same algorithms. We can’t enter a world where Audi claims in adverts that its autonomous software is better than Nissan’s.

                                                          People need to realize they won’t be able to own these cars or modify them in any way if they ever do come to market. The safety risks would be too great. If the cars are all on the same network, one security failure could mean a hacker could kill thousands of people at once.

I really think the current spending on this is a huge waste of money, especially in America, where tax money given to companies to subsidize research could instead be used to get back the train system we lost and move cities back inward, like they were in the early 1900s. I’ve written about this before:

                                                          http://penguindreams.org/blog/self-driving-cars-will-not-solve-the-transportation-problem/

                                                          1. 20

                                                            If the cars are all on the same network

                                                            Any company that is connecting these cars to the Internet is being criminally negligent.

                                                            I say that as an infosec person who worked on self-driving cars.

                                                            1. 3
                                                              1. 2

                                                                They have to be able to communicate though to tell other cars where they intend to go or if there is danger ahead.

                                                                1. 7

                                                                  It’s called blinkers and hazard lights.

                                                                  1. 9

                                                                    That’s just networking with a lot of noise in the signal.

                                                                    1. 7

                                                                      Networking that doesn’t represent a national security threat, and nothing that a self-driving car shouldn’t already be designed to handle.

                                                                      1. 3

                                                                        What happens when someone discovers a set of blinker indications that can cause the car software to malfunction?

                                                                    2. 1

                                                                      Serious question (given that you’ve worked on self-driving cars): is computer vision advanced enough today to be able to reliably and consistently detect the difference between blinkers and hazards for all car models on the roads today?

                                                                      1. 2

                                                                        As often is the case, some teams will definitely be able to do it, and some teams won’t.

                                                                        Cities and States should use it as part of a benchmark to determine which self-driving cars are allowed on the road, in exactly the same way that humans must pass a test before they’re allowed a drivers license.

                                                                        The test for self-driving cars should be harder than the test for humans, not easier.

                                                                    3. 2

They could use an entirely separate cell network that isn’t connected to the Internet. All Internet-enabled devices, like the center console, could use the standard cell network, with a read-only bus between the two for sensor data like speed, oil pressure, etc.

                                                                  2. 11

                                                                    The other major issue we’re not addressing is that these cars cannot be closed source like they are now.

                                                                    I strongly agree with this. I believe autonomous vehicles are the most important advancement in automotive safety since the seatbelt. Can you imagine if Volvo had kept a patent on the seatbelt?

                                                                    The autonomous vehicle business shouldn’t be about whose car drives the best, it should be about who makes the better vehicles. Can you imagine the ads otherwise? “Our vehicles kill 10% fewer people than our competitors!” Ew.

                                                                    1. 2

                                                                      I don’t buy your initial claims.

When you said “we’re 15 years away from autonomous vehicles”, what did you mean exactly? That it’ll be at least 15 years before the general public can ride in them? Waymo claims this will happen in Phoenix this year: https://amp.azcentral.com/amp/1078466001 That the majority of vehicles on US roads will be autonomous? Yeah, that’ll definitely take over 15 years!

We can have a common/standard set of rigorous tests that all companies need to pass, but we don’t need them all to literally use the same exact code. We don’t do that for aeroplanes or elevators either. And the vanguard of autonomous vehicles is made up of large corporations that aren’t being funded by tax dollars.

                                                                      That said, I agree that it would be better to have more streetcars and other light rail in urban areas.

                                                                      1. 6

                                                                        It will be at least 15 years before fully autonomous vehicles are available for sale or unrestricted lease to the general public. (In fact, my estimate is more like twice that.) Phoenix is about the most optimal situation imaginable for an autonomous vehicle that’s not literally a closed test track. Those vehicles will be nowhere near equipped to deal with road conditions in, for example, a northeastern US winter, which is a prerequisite to public adoption, as opposed to tests which happen to involve the public.

                                                                        Also, it’s a safe bet this crash will push back everyone’s timelines even further.

                                                                        1. 1

I think you are correct about sales to the public, but a managed fleet that the public can use on demand in the southern half of the country and on the west coast seems like it could happen within 15 years.

                                                                    1. 6

                                                                      Personally, I would not be opposed to such a tag, and would not mind seeing interesting, physics-related links here. However, I do think it would be a rather severely off-topic tag at lobste.rs, and perhaps this is a reason to not create it.

                                                                      1. 1

                                                                        “When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. And since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.”

                                                                        Simultaneously a straw man and a false dichotomy. Not written by someone who understands logic?

                                                                        1. 2

                                                                          The author is Leslie Lamport, who won the 2013 Turing Award for his work on distributed algorithms.

                                                                          1. 1

                                                                            I’m aware of that. My question is rhetorical.

                                                                            1. 1

What he may have meant is that programmers using the biological approach, with things like information hiding, guard functions, and testing, built complex programs that usually work as intended. That’s without knowing anything about formal logic or mathematical methods. Writers covering things like LISP used to compare it to biological approaches, arguing it was more adaptable, whereas the formalized stuff failed due to rigidity and slow movement. Just reading Leslie’s remark, someone might assume all biologically-inspired approaches were barely comprehensible or outright failures, whereas the formal or logical methods consistently outperformed them. In fact, most of the latter failed.

I still enjoyed reading it despite that inaccuracy. Leslie’s mind is interesting to watch in action, with its down-to-earth style. This reminded me of a computer scientist who thought like a biologist to overcome limitations CompSci folks were facing. That led him to do everything from inventing massively-parallel processing to using evolution to try to outperform human designers. He always claimed biology was better. A lot of the better write-ups are paywalled or disappearing with the Old Web, but I can try to dig some out this week if you’re interested.

                                                                              1. 2

Please do dig it up; I’m quite intrigued to see where their solutions worked well, and where they didn’t.

                                                                                1. 2

                                                                                  With the way ML/AI is going, it’s quite possible many future systems could be much closer to biology than human design. An AI system-design software will just do whatever works as long as its optimization function says it’s good.

                                                                                  1. 2

                                                                                    I am in no way questioning Lamport’s brilliance nor contributions in general. However most people, brilliant or otherwise, have blind spots. I believe he’s betrayed some of his here, and that in itself is interesting and worth reading.

                                                                              1. 3

                                                                                I love this concept.

One challenge is that one can always just read all the pages. It would be great flavor if this shipped with, like, 20 times as many pages: random letters, accounting statements, and so on. That way you also get to experience the whole “sifting through a lot of stuff” thing, and perhaps land on an interesting bit somewhere. Maybe even have this actually ship as several distinct sets of books.

                                                                                EDIT: You might want to check out Her Story for some interesting ideas in there as well. A bit harder to execute upon on paper, but is a very interesting mechanism for non-linear storytelling.

                                                                                1. 2

I like the idea of adding cruft to confuse the reader. Another option would be to hide different chapters at different physical locations, with references being instructions for how to get to the next chapter. But then it ceases to be a book, of course.

EDIT: I’ve added a comment to the README along the lines of your comment. I hope you don’t mind. One modification, though: adding 20x more content isn’t feasible for a printed book. So, instead, the pulp should look like legitimate content that wastes the reader’s time with nonsensical puzzles etc.

                                                                                  1. 1

                                                                                    It would be great flavor if this shipped with like… 20 times as many pages.

                                                                                    It’s an example of security through obscurity!

                                                                                  1. 7

Have a peer-to-peer web, but keep the concept of servers in addition to it. It’s no better a solution than what currently exists, but at least it offers potential redundancy (fueled by popularity, though perhaps other metrics could be devised, as was hinted at toward the end of the article) instead of the current state of affairs, where information has a tendency to vanish.

                                                                                    Additionally, I’d like to point out that things such as paywalls or pizza delivery would still require servers that are not data-addressable, because they are services and not information that can be copied and distributed (at least, not without a clever reworking of how the service works). Servers are here to stay for the time being, and a peer-to-peer web most likely won’t change that dramatically.

                                                                                    1. 5

BitTorrent has web seeds (download from HTTP servers in addition to peers). They’re quite popular, used by many OS distributions, Archive.org, media.ccc.de, and Amazon S3.

                                                                                      1. 1

                                                                                        By “keep the concept of servers”, do you mean that whoever publishes to the network should also self-host their data and treat the redundancy as a bonus? This way of looking at it has also occurred to me. You could describe this as a more robust form of client-side caching, where all static assets are saved locally and the validity of the cache is communicated through the network protocol. Thinking of the p2p web as a bandwidth optimization makes sense to me in exactly the way you say “it’s no better a solution than currently exists, but…” we get some nice additional properties at scale. But this point of view falls short of the ambitions the decentralized web movement has for these network architectures.

                                                                                      1. 7

                                                                                        Alfred, iTerm

I can never get a concrete reason to use these over Spotlight/Terminal.app. There used to be a significant difference, but today I can’t think of a compelling reason.

                                                                                        Edit: Ditto for flu.x

                                                                                        1. 4

                                                                                          Personally, I couldn’t let go of having shortcuts to switch to the nth tab. Thus, iTerm beat Terminal for me.

                                                                                          1. 3

I use Alfred primarily for various workflows that I have set up. That’s not something that can be replicated with Spotlight.

                                                                                            https://www.alfredapp.com/workflows/

                                                                                            I have a few smaller ones that I’ve designed myself.

                                                                                            I use the Github Repos Workflow constantly: http://www.packal.org/workflow/github-repos

                                                                                            1. 2

                                                                                              I happily used Spotlight for years. Then, a couple OSX updates back, it stopped properly indexing applications. I never was able to fully figure out what the problem was, as there was seemingly no pattern to which applications would be excluded. At one point it stopped including Chrome in the index, and that was the straw that broke the camel’s back for me. (More specifically, I believe it still included them in the index based on testing the command line interface, but Spotlight simply stopped showing them.)

                                                                                              I switched to Alfred, and it immediately worked “perfectly” - which is to say it performed identically to how Spotlight did before the updates. It’s been a few months now, and I have no complaints with Alfred, it does everything Spotlight did, and is much faster.

                                                                                              1. 1

                                                                                                Weird! In your position I think I would have done the same thing.

                                                                                                1. 1

                                                                                                  I have the same problem and switched to Alfred for the same reason.

                                                                                                2. 1

                                                                                                  iTerm is waaaaaaaay ahead of Terminal.app.

                                                                                                  1. 6

                                                                                                    I keep getting replies like this, but still no concrete reason.

                                                                                                    1. 8

                                                                                                      I think it’s because there aren’t great reasons anymore. Yes, you’ve got some tmux integration and similar I guess, but e.g. tmux support requires (or at least used to require) custom-built versions of tmux that kept it from being as useful in practice as you might think. Meanwhile, Terminal itself has added tons of features that used to be iTerm-only and added some of its own (e.g. window groups), and while there’s some comments below that iTerm has smoother scroll, I have noticed that using Terminal can actually speed up programs I run if I’ve got them dumping directly to stdout (because it can get stuff on the screen faster).

                                                                                                      I used iTerm for many years, but I’m also back to Terminal. Ditto for Alfred, similar reasons.

                                                                                                      1. 6

                                                                                                        Terminal.app has added

• Mouse reporting
• Ligatures (still in beta for iTerm)
• Vertical and horizontal character spacing
• Key macros
• Tabs
• Window groups
• Custom entry commands
• Stdout search

                                                                                                        The difference between iTerm and Terminal.app is becoming more superficial. At this point the largest difference is the degree of customization, and people who care about this seem to be more evangelical about it.

That being said, I still use iTerm for two reasons.

1. Hotkey, Quake-like drop-down terminal window.
2. It’s what I’ve been using.
                                                                                                        1. 1

The only things missing from Terminal.app are:

                                                                                                          • True Color support
                                                                                                          • Hotkey dropdown
                                                                                                      2. 7

Smoother scroll, true-color support, greater tmux integration, splits.

On the other hand, I think Terminal.app has the edge with better font rendering and slightly smoother performance (the latest beta of iTerm2 is much better in that regard, but Terminal.app still has the edge on that front; it’s locked at 30fps, though, so it’s not that much better in the end).

                                                                                                        1. 5

Btw, I’m still using Terminal.app because I found it much more stable, and I’ve stopped using tmux for terminal splitting and tiling. Now I use tmux mostly for attaching/detaching and for security reasons, since tmux increases input latency, which I cannot stand!

And most important of all, I didn’t want to become addicted/attached to my personal dev environment. I have been through customization hell with Emacs and Vim; now I am back to really minimal 200-LoC configs in both, using mostly stock stuff on macOS and some universal UNIX programs. I have around 10 applications installed on my macOS machine; the rest is stock Apple stuff, and it works really well!

                                                                                                          1. 2

                                                                                                            What phl said :-) also, better splitting. Better full screen mode.

                                                                                                          2. 1

                                                                                                            I recently tried switching back to Terminal.app, but couldn’t get the colour schemes to show correctly. Terminal does something to the colours to add more contrast, and I couldn’t figure out how to fix it. https://apple.stackexchange.com/questions/29487/is-it-possible-to-disable-terminals-automatic-tweaking-of-colors-in-lion

                                                                                                        2. 1

                                                                                                          To be fair to flu.x, that’s a relatively recent addition, and still allows a lot more control (at least on macOS) over the timing, degree, and transition curve to red-shifted light. The rest, I’m with you.

                                                                                                          1. 2

                                                                                                            To be even fairer, it’s “f.lux”, not “flu.x” ;)

                                                                                                        1. 3

                                                                                                          One would simply have to type ‘Cartesian closed categories and the price of eggs’, etc – which is a lot more intuitive than typing \title{Cartesian closed categories and the price of eggs} (not to mention \documentclass{article}), as in LaTeX one must.

                                                                                                          Each of these has an equivalent action that needs to be taken. Where in LaTeX you write \title{}, in a word processor you click the button to make a title (that may or may not be labeled nicely by default). \documentclass{} rather resembles the “Choose a Template” page that appears in many popular word processors. Granted, it’s writing a command as opposed to clicking a button, which seems to be the major usability difference here.
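For concreteness, the commands being compared amount to only a few lines in a minimal LaTeX document (a sketch; the title is just the article’s running example, and the author name is a placeholder):

```latex
\documentclass{article} % roughly "Choose a Template" in a word processor
\title{Cartesian closed categories and the price of eggs}
\author{A. N. Author}   % placeholder name
\begin{document}
\maketitle              % roughly clicking the "make a title" button
The body of the document goes here.
\end{document}
```

Whether typing these commands or clicking the equivalent buttons is easier is exactly the usability question at issue.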

                                                                                                          When it comes to stopping people from creating documents in purple 28pt Comic Sans, teaching them all to use LaTeX is a lot less efficient than stating that you will refuse to read anything that doesn’t match the style guide. (Teaching them to use word processors properly might also help.)

                                                                                                          Yes, stating a specification and letting people use whatever tools they prefer is easier than teaching people to use any particular tool properly.

                                                                                                          LaTeX does less to prevent authors from getting on with writing documents than TeX does. But if neither of the two existed, and you had to come up with something, right now, in 2016 – would it really be a markup language?

                                                                                                          There is a good chance that the answer would be yes. The majority of programming languages tend to stick to plain text documents, due to a variety of reasons (some of these ideas apply). A document formatting (and typesetting) language would not be too different, which is what TeX is.

                                                                                                          If perhaps the answer were no, then it would be a standardized data format (those also have a tendency to be plain text, or eventually be encoded in plain text, by the way), which would allow you to use whatever user interface your heart desires (and your mind and hands are able to create). This is also what most word processors do, except they don’t want to agree on a format, and each makes their own.

                                                                                                          The MIT Research Science Institute argument isn’t much better.

                                                                                                          I completely agree here. Those three MIT RSI arguments seem like bullshit to me. As the author said quite nicely: “Comparing good use of LaTeX with poor use of word processors is unfair”.

                                                                                                          But that isn’t how most people want to write. Most of us would rather have a piece of paper or a screen that is covered in the words and punctuation marks that are actually going to appear in the finished piece than a screen covered in markup like \parencite[see][]{bennett_2002}.

                                                                                                          This seems to be what the author is most annoyed about. They’re not writing about why LaTeX is bad for writing prose, as much as they are arguing as to why WYSIWYG or WYSIWYM is better for writing prose than markup. To which I can heartily say, there are WYSIWYG and WYSIWYM editors! See this table sorted by “Editing style”, and perhaps also take a look at this.

                                                                                                          LaTeX was invented so that nobody would have to write prose in TeX, which is too hard for ordinary mortals. The above were created so that nobody would have to write prose in LaTeX – which is not too hard for ordinary mortals, but still a fairly bad idea.

                                                                                                          LaTeX is very easy to write ordinary prose in, but it is not the most beautiful way to represent it. I agree with the author that markdown, or a word processor, may be easier to learn to use. However, I disagree that LaTeX is worse at letting me focus on my content than a word processor: the same metadata is required for both, and both require learning how to use them properly; the only thing that changes is the user interface.

                                                                                                          And I would like to finish by saying that TeX is not too hard for ordinary mortals, because even though typesetting is the first priority, writing content is the second.

                                                                                                          1. 3

                                                                                                            It seems vaguely similar to ls++.

                                                                                                            1. 1

                                                                                                              Hy is a fun and heretical little language, betraying both Python ideals and LISP ideals. :-)

                                                                                                              And I think it’s really cool how it seamlessly integrates itself into the import mechanism.
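
                                                                                                              As a minimal sketch of that integration (assuming Hy is installed; `greet.hy` is a hypothetical module name):

                                                                                                              ```hy
                                                                                                              ;; greet.hy -- an ordinary Hy source file
                                                                                                              (defn hello [name]
                                                                                                                (+ "Hello, " name "!"))
                                                                                                              ```

                                                                                                              On the Python side, `import hy` installs the import hook, after which `import greet` and `greet.hello("world")` work like any other Python module.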

                                                                                                              I’m curious: Why share it now? Is something bubbling within the Hy community, has it just never been posted before and now is as good a time as ever, or did you just learn about it?

                                                                                                              1. 2

                                                                                                                It has been posted before. :)

                                                                                                                1. 1

                                                                                                                  I only just found out about it, via a post on HN.

                                                                                                                  1. 1

                                                                                                                    Oh, cool, will have to check out what triggered HN then.

                                                                                                                1. 3

                                                                                                                  Hmm… Curta calculators in decent condition are going for about $1300 on eBay.

                                                                                                              If somebody were to start a crowd-funding campaign to make a replica (perhaps as an assemble-it-yourself kit), I’d surely back that. Until then I’ll have to be satisfied with my slide rules and abaci.

                                                                                                                    1. 2

                                                                                                                      You can find them much cheaper than that at flea markets.

                                                                                                                    1. 5

                                                                                                                      I am reminded of APL/J/K and similar languages. Being primarily array-oriented, they make this a very natural way of thinking.

                                                                                                                      Similarly, I believe numpy prefers to operate on whole arrays at a time, and it lets you vectorize functions.
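
                                                                                                                      A tiny illustrative sketch of that whole-array style, using plain Python lists so it doesn’t depend on numpy being installed (in numpy itself, `xs + ys` and `xs * k` do this directly on arrays):

                                                                                                                      ```python
                                                                                                                      # Whole-array ("vectorized") arithmetic in the APL/numpy spirit,
                                                                                                                      # sketched with plain Python lists and the standard library only.

                                                                                                                      def add(xs, ys):
                                                                                                                          """Elementwise sum of two equal-length sequences."""
                                                                                                                          return [x + y for x, y in zip(xs, ys)]

                                                                                                                      def scale(xs, k):
                                                                                                                          """Multiply every element by the scalar k."""
                                                                                                                          return [x * k for x in xs]

                                                                                                                      prices = [50.04, 34.57, 43.22]
                                                                                                                      print(add(prices, [1, 1, 1]))  # one operation applied across the whole array
                                                                                                                      print(scale(prices, 2))
                                                                                                                      ```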