1. 7

    1900: people going around on horses, public lighting using gas.

    1960: cars, jet and nuclear powered airplanes, satellites, semiconductors, computers with LISP and COBOL compilers, antibiotics, fiber optics, nuclear fusion experiments (tokamak)

    2020 - another 60 years, and what do we really have to show?

    1. 11

      Compared to “commonplace” things like cars and antibiotics? Internet, GPS, maglevs, a vast array of surgical techniques, the absence of smallpox…

      Compared to “works but government and academia only” things like satellites and compilers? Hololens, quantum computers, drones, railguns, graphene, carbon nanotubes, metamaterials…

      Compared to “wildly experimental and probably won’t ever happen” things like tokamak and nuclear airplanes? Probably a lot of classified shit. Antimatter experiments at LHC. Arguably a lot of work with AI.

      1. 4

        Maglevs were invented in the 1950s and first operated in the 1970s. I also don’t have anything made from graphene, or know anyone who knows anyone owning a graphene artefact.

        More importantly, none of that is imagination shattering from 1960s point of view. We do not have things mid-century people couldn’t come up with.

        1. 1

          More importantly, none of that is imagination shattering from 1960s point of view. We do not have things mid-century people couldn’t come up with.

          Antibiotics, heavier-than-air flight, cars, and computers (if you count Jacquard Looms) were all demonstrated before the 1900s. They weren’t imagination shattering from an 1890s point of view.

          Even the internet isn’t imagination shattering from an 1890s point of view.

          1. 3

            Antibiotics, heavier-than-air flight, and a programmable computer were not demonstrated before the 1900s.

            1. 3
              • We first observed that bacteria didn’t grow in the presence of mold in the 1870s.
              • The first manned, powered heavier-than-air flight was in 1890.
              • The Jacquard Loom had programmable patterns as early as 1804, and the first programmable reading of data was the US 1890 Census.

              Do any of these look close to our modern conceptions of these things? Not really. But it shows that the evolution from the first demonstration of an idea to widespread use of a polished version takes time.

              1. 3

                There’s a huge difference between observation of mold and a concept of antibiotics, no matter how trivial that sounds with hindsight.

                The “uncontrolled hop” does not qualify as a flight, except in the most trivial sense.

                The loom is not a computer, but I’d love to see a fizzbuzz with Jacquard patterns to prove me wrong.

                1. 2

                  It still means that all of the “imagination shattering” stuff in the 1960s had precedents more than half a century old. We do not have things mid-century people could not have come up with. They did not have things 1800s people could not have come up with, so we shouldn’t be thinking that our era is particularly barren.

      2. 4

        I think it is reasonable to say that the reworking of daily life has slowed.

        The stove, the refrigerator and the car changed the routine of life tremendously.

        The computer might be more impressive by any number of measures but it didn’t rework daily life so much as add another layer on top of ordinary life. We still must cook meals and drive around.

        The linear extension of the car and the stove would be the auto-chef and the flying/auto-driving car.

        Both things are still further off than is sometimes claimed by the press, but they seem a bit closer than in 2012. However, the automation offered by externally available power, which began in the 1800s, has definitely reached a point of diminishing returns.

        We may experience further progress through computers, AI and such. But this seems to be hampered by a “complexity barrier” - an equivalent amount of daily-life automation, of the kind various technologies offered earlier through power, now requires systems that are much more computationally complex. Folding towels really does turn out to be the hard part of washing, etc., and even with vast advances in computational ability, we may still be at diminishing returns.

        1. 2

          There have been significant advances since then (for instance, in medical treatments like cancer therapies and surgery—life expectancy in the US has risen from 70 to 79 since 1960), but nothing revolutionary that would seem remotely as magical as the developments across the first half of the century.

          1. 3

            Magical is relative. All the psychiatric meds I take were invented after 1970. They’re pretty magic!

          1. 1

            So one place anti-if can go is the one instruction set computer. If that instruction is implemented branchlessly, you guarantee full branchless computing. Otherwise, you can do minimum branching computing. See: https://esolangs.org/wiki/OISC
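
            As a concrete sketch of the OISC idea (my own illustration, not code from the linked page), here is a minimal subleq machine, where a single instruction suffices for all computation:

```python
# Minimal SUBLEQ one-instruction machine (illustrative sketch).
# Each instruction occupies three cells (a, b, c):
#   mem[b] -= mem[a]; jump to c if the result is <= 0, else fall through.
# A jump outside memory halts the machine.
def run_subleq(mem):
    ip = 0
    while 0 <= ip <= len(mem) - 3:
        a, b, c = mem[ip], mem[ip + 1], mem[ip + 2]
        mem[b] -= mem[a]          # the single operation: subtract
        ip = c if mem[b] <= 0 else ip + 3  # ...and branch if non-positive
    return mem
```

            A dispatch loop this small is what makes the per-thread-interpreter trick plausible: every thread runs the same tiny loop, and only the data differs.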

            This sort of consideration may seem academic. But there is actually another place it comes in. GPU computing (basically SIMD) involves the principle that every branch has significant cost. However, if you implement an interpreter for a minimum instruction set language, then you can run distinct interpreters on each thread separately at low cost.

            This is more or less the principle behind H. Dietz’s MOG, “MIMD on GPU”, a system that can compile “arbitrary” parallel code to run on a GPU with only a x6-1 slowdown. (Unfortunately, the project is frozen in beta for lack of funding.)

            See: http://aggregate.org/MOG/

            1. 5

              Those are some pretty flaky arguments regarding OpenBSD. What is “theoretical” SMP? I’m running this from a 4-core OpenBSD laptop. You know, non-theoretically. Same language snark goes with vmm: they tried to implement a hypervisor? I’ll be sure to inform mlarkin of his failure to execute. It may not be what the author wants, but that’s a different story. Anyway, if there are good comparisons between the two systems security-wise, they look like they’re in that chart from https://hardenedbsd.org/content/easy-feature-comparison. Is it up to date with the recent anti-ROP efforts?

              1. 2

                It is. OpenBSD has an SROP mitigation, whereas HardenedBSD doesn’t. HardenedBSD has non-Cross-DSO CFI (Cross-DSO CFI is actively being worked on), whereas OpenBSD doesn’t. HardenedBSD also applies SafeStack to applications in base. CFI provides forward-edge safety while SafeStack provides backward-edge safety (at least, according to llvm’s own documentation.)

                HardenedBSD inherits MAP_STACK from FreeBSD. The one thing about OpenBSD’s MAP_STACK implementation that HardenedBSD may lack (I need to verify) is that the stack registers (rsp/rbp) are checked on syscall entry to ensure they point to a valid MAP_STACK region. If FreeBSD’s syscall implementation doesn’t do this already, doing so would be a good addition to HardenedBSD.

                So, there’s room for improvement by both BSDs, as should be expected. It looks like OpenBSD is starting the migration towards an llvm toolchain, which would allow OpenBSD to catch up to HardenedBSD with regards to CFI and SafeStack.

                Sorry for the excessive use of commas. I enjoy them perhaps a bit too much. ;)

                1. 1

                  I haven’t read the whole article, because I’m not interested in HardenedBSD.

                  What is “theoretical” SMP? I’m running this from a 4-core OpenBSD laptop. You know, non-theoretically.

                  The article is indeed vague about it, but I think the author meant scalability issues. Too much time spent in the kernel space.

                  Same language snark goes with vmm: they tried to implement a hypervisor? I’ll be sure to inform mlarkin of his failure to execute.

                  I don’t have any experience with virtualization, but the point seems to be that you can only have OpenBSD and Linux guests under an OpenBSD host, which compares unfavorably with something like bhyve.

                  1. 1

                    SMP

                    From what I have read about SMP on OpenBSD, it’s not that it would not detect 4 or 64 cores; it’s that its subsystems (like FreeBSD 5.0, for example) were not entirely rewritten to fully utilize all cores, and that in many places the so-called GIANT LOCK is still used. That may have changed recently; sorry if this information is not up to date.

                    vmm

                    Now it’s very limited. Can you run a Windows VM on it? … or a Solaris VM? Last I read about it, only OpenBSD and Linux VMs worked.

                    Is it up to date with the recent anti-ROP efforts?

                    I am not sure; you may ask here - https://www.twitter.com/HardenedBSD - or on the HardenedBSD forums - https://groups.google.com/a/hardenedbsd.org/forum/#!forum/users

                    1. 3

                      or Solaris VM? Last I read about it only OpenBSD and Linux VMs worked.

                      It runs Illumos derivatives (e.g. OpenIndiana). There’s a specific feature missing that FreeBSD/NetBSD need, which is being worked on. It doesn’t run Windows because Windows needs graphics.

                      1. 2

                        Thanks for the clarification. I hope that graphics support/emulation will also come to vmm soon.

                        I added that information to the post.

                    2. 1

                      I’m not sure; the article seems to make an honest enough comparison between HardenedBSD and OpenBSD that I’ll make OpenBSD a priority to consider the next time I need a truly secure OS.

                      1. 3

                        The “One may ask…” paragraph is so slanted toward HardenedBSD over OpenBSD that I’d have immediately assumed a HardenedBSD developer or fan was writing it.

                        1. 1

                          I tried my best. I thought it was clear enough from the article that OpenBSD is secure for sure, while HardenedBSD aspires to that target with the FreeBSD codebase as a start …

                      1. 4

                        So if a bunch of people decide to fork their own version of Roko’s Ransomware, which one should I pay the protection fee to so as not to be tortured for eternity?

                        1. 3

                          Addressed in the Charlie Stross blog post I referenced in a comment in this thread:

                          why should we bet the welfare of our immortal souls on a single vision of a Basilisk, when an infinity of possible Basilisks are conceivable?

                          1. 2

                            Good stuff.

                            There’s also the question “why should we care about hypothetical copies of ourselves in the future?” - after all, there should be hypothetical copies of ourselves in parallel universes, and if the present universe is infinite, there would be an infinity of copies of ourselves here, some portion in hell, some in heaven, some in bizarre purgatories.

                            Moreover, even if you posit a god-like intelligence able to accomplish virtually anything in the future, that godlike intelligence seems unlikely to be able to sift through the quantum noise to create truly exact copies of ourselves (I could reference the “no cloning” theorem of quantum mechanics, etc.). So the hypothetical punished copies wouldn’t be any more “us” than copies suffering whatever other fates might await elsewhere or elsewhen.

                            It seems like the construct illustrates the difficulty humans have in separating intelligent ideas from garbage-thoughts when conceiving AIs. (Who has noticed that humans follow stated goals in a highly nuanced rather than literalistic fashion? Not LessWrong, it seems - or at least they haven’t considered that this is a key part of our being “more intelligent” than computer programs, or of the way we’re still better than programs.)

                            1. 3

                              As for hypothetical copies — this version of a basilisk seems to be worded carefully enough to say that you cannot be sure whether you are currently the pre-Singularity original or a simulated copy.

                        1. 1

                          The argument seems vaguely interesting but it is basically a popularization of an academic article that is linked-to but turns out to be behind a paywall, which makes the whole exercise rather useless.

                          1. 3

                            A good argument and a reasonable looking list.

                            But even more, one might consider that the history of software has to some extent been decided by the implementation or non-implementation of certain clever but quite-hard-to-implement ideas.

                            What if revision control, make, and the file system had begun together? What if memory allocation hadn’t been included in the first otherwise rational OSes (Unix, etc.)? What if most of the features of a relational database had been implemented without the “relational model” being created?

                            It’s hard to speculate here because everyone has a slightly different idea of which features are crucial and which would have existed “inevitably”. But nonetheless, I think the alternate-software-worlds view is plausible.

                            1. 24

                              Oh yeah. I really hate the overuse of computers and lack of analog failsafes. The worst thing is that it’s not only consumer Internet of Shit devices. Critical infrastructure is apparently built the same way and HOW DID THAT EVER GET APPROVED ANYWHERE?! We now have government people screaming about “cyber warfare” and stuff because there were already incidents of power grids being hacked. How anyone ever agreed to give full control of everything to computers is beyond me. Worse, not just computers — general purpose computers running general purpose operating systems. What the hell?! Critical infrastructure should be controlled by, like, FPGAs that only do the control stuff and nothing more. Not Windows machines with tons of I/O ports that will happily run malware from USB sticks. Yeah, if you want to add super smart deep learning predictive buzzword magic things to improve the control — make it OPTIONAL. If the smart computer shuts down, the system should still work in a more basic old-school way.

                              1. 6

                                General purpose computers are not the best, but I’m more worried about isolation. In the worst cases a company might have a single network with their client databases on the same network as their development work and their “cloud” offerings. In the worst cases if you can pwn a lowly receptionist’s computer (Just leave a USB stick in the parking lot) then you have the keys to the castle.

                                Multiple networks are good, but they are frequently all connected to the internet anyway. I rarely hear about air gaps being used for anything but NSA or similar level operations. I think most companies could benefit from a few “offline only” restricted networks. Just simple air gapping can negate some of the risks of using general purpose hardware with proprietary OSs/software.

                                1. 6

                                  I have an acquaintance who has air-gapped his business PC’s (accounting system, etc) since the 80’s. I used to think he was over the top. Nothing gets connected to those PC’s.

                                  Upgrades are VERY infrequent, although I’m not sure how he handles them exactly. I don’t know how he does backups.

                                  Clearly this doesn’t scale past a business where you have one book-keeper who can get on THE accounting PC and do their thing.

                                  Regardless, he doesn’t complain about malware. I suspect he sleeps pretty well at night.

                                  1. 4

                                    I rarely hear about air gaps being used for anything but NSA or similar level operations.

                                     Really? While I’ve personally avoided these places, I’ve known a number of people who needed to leave the secure area to print out entries from Stack Overflow on paper, and then hope that what they printed out contained the information they needed to solve their programming challenge when they returned to their desks. This was private-sector-but-government-contract work, but still a million miles away from “NSA or similar level operations”.

                                    1. 3

                                       I’ve known a number of people who needed to leave the secure area to print out entries from Stack Overflow on paper, and then hope that what they printed out contained the information they needed to solve their programming challenge when they returned to their desks.

                                      It’s because the people who developed those rules saw clever attacks coming. They just put physical, electrical, and then digital separation between different levels of security. Now, we got air-gap-jumping malware, people hitting printers, leaks through light bulbs, and who knows what else. The rule you just gave might stop a bunch of them from working between those rooms so long as they aren’t using USB sticks or something.

                                      1. 2

                                        Just use your smartphone.

                                        1. 3

                                          Another solution is each desk getting two network ports: one computer on the internet, one computer on an internal network. That has some issues, though: the user must not plug the wrong device into the wrong network or “link all the things!”; in some government agencies they actually have a unique network socket and plug for each network so you can’t possibly get the wrong devices hooked up. You still want to disable USB/Firewire/eSATA/HDMI/whatever ports that could be used for communication.

                                          As Nick mentioned, you still have to consider whether these computers might be communicating over back channels such as speakers and microphones, fan speed control, CPU/monitor frequency, HDD noise, or temperature monitors. Hackers are very creative, apparently. Like the bad kids in class, you may have to physically separate these computers in different rooms and sneaker-net data or printouts.

                                    2. 3

                                      The situation seems almost exactly analogous to the situations that the FCC and electrical codes were intended to prevent.

                                      A company isn’t allowed to sell an electronic device that emits strong interference to radio receivers. There are tests these things are supposed to pass. A company isn’t allowed to sell an electrical switch that kills one out of a thousand people. There are tests, similarly.

                                      But somehow, the concept of “Internet interference” isn’t a thing. Obviously, it should be.

                                      1. 2

                                        Who do you sue when the software fails?

                                        1. 2

                                          There are various security certification programs around the world… and they’re mostly complete shitshows. Sometimes “compliance” makes actual security worse.

                                        2. 2

                                          I’m okay with using computers for critical infrastructure – if and only if the entire stack is formally verified from silicon up. Almost nothing around today meets that standard.

                                          1. 2

                                            I note you said almost. In case someone thought nothing did, I’ll note a few products and projects that could do it but just didn’t get any market demand outside defense. The first was the CLI stack, which resulted in the FM9001 processor. Their tooling also transformed into the ACL2 project, used in a lot of hardware verification.

                                            https://link.springer.com/content/pdf/10.1007%2FBFb0021724.pdf

                                            Next I saw was VAMP, done in the Verisoft project. It’s a DLX-style core (MIPS-like). I don’t know its I.P. status, but someone built on it relatively recently.

                                            http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=FB6FCE66F64A71373F63D3B2A5609A6B?doi=10.1.1.217.2251&rep=rep1&type=pdf

                                            In high-assurance security, Rockwell-Collins decided to do a fault-tolerant stack machine with a separation kernel built in. Their neat tooling let them verify a bunch of it, plus verify that programs are implemented correctly in assembly. Its registers are tripled with voting to spot bitflips. They use it in cross-domain solutions (i.e. guards), among other things.

                                            https://www.rockwellcollins.com/-/media/Files/Unsecure/Products/Product_Brochures/Information_Assurance/Crypto/AAMP7G_data_sheet.ashx

                                            http://www.ccs.neu.edu/home/pete/acl206/papers/hardin.pdf

                                            That’s about as good as it gets from an individual component. It would seem better to go with a NonStop-style setup on top of that, as far as reliability, if trying to mitigate more risks; CHERI or SAFE for security. I want to see someone combine the best of all of them one day w/ NUMA support. That would be The Ultimate System. :)

                                            1. 2

                                              Sweet, thanks for the specific examples.

                                              Edit: Wow, the VAMP is really impressive. Fully verifying a Tomasulo scheduler must have been a giant PITA. That’s the single most complicated component I’ve ever had the pleasure of building in hardware.

                                            2. 1

                                              Yeah, verified code and minimal stacks would be ideal. Like, a verified control application running on seL4 (which itself is verified). Even better — verified FPGA circuits when possible instead of general purpose processors. Also for bonus points add classic reliability tricks like having multiple implementations and a simple circuit deciding between their commands.

                                              1. 1

                                                Even better — verified FPGA circuits when possible instead of general purpose processors.

                                                That’s possibly more complex, with who knows what level of verification for the FPGA itself. Hardware is easier to fully verify given its binary nature; there are good tools for it. However, we already have languages, compilers, and CPUs which are individually verified or being verified. Might as well target our code to them, since verifying the correspondence between our part and theirs might be a smaller job than verifying a whole new piece of hardware.

                                                Since you’re interested in that, though, there is an architectural concept in between doing hardware and working on CPUs. Looks like nobody has submitted it. I’ll try to remember to submit it Monday, when more people are reading and might see it. Made a note.

                                          1. 3

                                            I got kind of lost about what part of this was the “easy” stuff. Fixing flaky tests can be quite hard. The whole Maslow’s hierarchy of needs also seemed completely unrelated.

                                            1. 3

                                              One interpretation of Maslow’s hierarchy of needs is “if you’re having trouble finding enough food to survive, it’s gonna be realllly tough to do things like maintain friendships or produce incredible art or scientific research.”

                                              One interpretation of Dix’s hierarchy of software needs is “if you’re having trouble getting your tests to pass consistently, it’s gonna be realllly tough to do things like deploy continuously or split your monolith apart into microservices.”

                                              1. 2

                                                Yes, but I don’t know what that has to do with being “easy”.

                                                1. 4

                                                  Finding food is basically always easier than doing scientific research (partly because it’s inherently easier; partly because if finding food is hard, then malnourishment makes doing scientific research harder).

                                                  Similarly, (assuming the point that dix is making) getting your tests to pass is basically always easier than migrating to microservices (partly because it’s inherently easier; partly because making large-scale changes to your software is going to be very tough if you can’t be confident that those changes are correct).

                                                  Fixing flaky tests can certainly be quite hard, but if your reason for not doing it is “it’s too hard”, then tackling even more challenging problems seems like a bad decision.

                                                  1. 2

                                                    It depends on your definition of easy. If one’s version of the primitive or generic human lives in the tropics, where there is low-hanging fruit, by all means pick it. But in full generality, finding food could involve anything from long hours hunting and/or planting to long hours at the fast-food joint making the money for the food.

                                                    Scientific research might involve sitting around speculating; it might well be “easy”, though its payoff time might be long.

                                                    And so, what I see is there’s a confusion between “easy” and “has an immediate payoff”. It’s definitely important to tackle some tasks with an immediate payoff while the stuff with a long payoff percolates along. And it’s good to tackle tasks that are really-easy and have a semi-immediate payoff. But acknowledging this, we’ve got a complex situation, which can’t be that easy, right?

                                                    So what’s easy? Thinking about it, it almost seems like “figuring out what’s easy” isn’t always easy itself. That leads to a paradox, but it might be a “paradox of finite regress” rather than infinite regress. Go for the low-hanging fruit if you can see it. Otherwise, maybe look carefully and you might see it, or put out the darn effort, climb the tree, and get others if you’re lucky or your speculation has borne fruit. It’s altogether not easy, sorry…

                                                    1. 3

                                                      Part of “easy” here has to do with the dependence of the “harder” activities on the “easier” ones. It’s entirely possible that cost(eat_low_hanging_fruit) + cost(climb_to_high_limbs_on_a_full_stomach) is less than cost(climb_to_high_limbs_on_an_empty_stomach), despite involving more activity. Similarly, a scientist who has spent most of the day hunting might get more research done in the remainder of the day than the scientist who is plagued by hunger pangs and malnutrition does with their whole day. Similarly, a team that maintains a robust test suite might be able to tackle hard architecture changes more quickly than a team with no test suite at all, despite the total amount of work to be done being larger.

                                                      In fact, I’d argue that anyone who writes automated tests believes this implicitly. There’d be no reason to write non-production code if it didn’t make you more effective at writing production code. (That said, it is entirely possible to write tests in a way that actually slows you down.)
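
                                                      The cost inequality above is easy to make concrete with toy numbers (entirely made up, just to show the shape of the argument):

```python
# Hypothetical costs: the "hard" task is cheaper after the "easy" one,
# so doing both can still beat attempting the hard task alone.
COST_EASY = 2              # eat_low_hanging_fruit
COST_HARD_AFTER_EASY = 5   # climb_to_high_limbs_on_a_full_stomach
COST_HARD_ALONE = 9        # climb_to_high_limbs_on_an_empty_stomach

total_easy_first = COST_EASY + COST_HARD_AFTER_EASY  # 7: more steps...
total_hard_only = COST_HARD_ALONE                    # 9: ...but lower total cost
assert total_easy_first < total_hard_only
```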

                                                      1. 1

                                                        This whole thing is getting a bit ridiculous. Clearly “easy” is the wrong adjective to use for this. If one only does easy things, one will never do hard things. In general, I pick things with high impact. They may be easy, they may be hard.

                                                        1. 2

                                                          “Easy” is maybe not the most clear word here, but in the context of the entire post it seems understandable enough.

                                                          Notice how the title of the post is “Do The Easy Things First”, not “Only Do The Easy Things”. Dix is not advocating to not do things with “high impact”; he’s advocating to set yourself up for success while doing those things, even if that means delaying starting on them to take care of foundational work.

                                                          1. 1

                                                            And I am saying if you always do the easy thing first you’ll never get to the hard thing.

                                                            1. [Comment removed by author]

                                                              1. 1

                                                                I’d also argue that if you never have a compelling reason to do the hard things, you shouldn’t do the

                                                                Your whole argument is to do the easy things first. If that’s the heuristic you want, then you won’t do the hard things until you have nothing else to do. This is why I think impact is a more valuable way to look at it.

                                            1. 5

                                              This seems slightly silly.

                                              I mean, I certainly agree with the EFF in principle. But the CIA’s mandate is essentially to hack foreign powers. It goes directly against that if they dig up and then report zero-days. Telling the CIA they should stop doing their job is not going to be effective no matter how persuasively you frame it. The change the EFF is arguing for here has to happen at a higher level, and the US government has never shown any particular concern for the privacy or security of its citizens (and right now it’s certainly at a low ebb even by the usual mediocre standard).

                                              1. 3

                                                 You make a good point about the role of the CIA, and I wonder if the globalization of software and hardware is going to make the jurisdictional roles of agencies like the FBI, CIA, DHS, and NSA much more confusing going forward. Say the CIA, in its efforts to gather intelligence on enemies of the state, discovers something that can affect both the homeland and the enemy - what responsibilities does the CIA have? Ethically, we should try to defend our citizens, but that’s not really the role of the CIA. Reporting it and getting the issue fixed could strengthen the defenses of the US and its citizens, but then reduce our ability to attack or learn. Should the NSA try to strengthen our defenses, and in the process let the CIA know of any vulnerabilities it can take advantage of to eavesdrop on enemies?

                                                1. 1

                                                  Yeah,

                                                  I think it would be better if the CIA didn’t exist; its very existence fundamentally undermines democracy.

                                                  That said, if the CIA is going to exist, it fighting out a sort-of equal battle with the black hats in the realm of targeted surveillance seems far preferable to the various NSA programs that involuntarily enlist corporations and individuals in a program of mass surveillance.

                                                  If the CIA suppressed a civilian agency’s or private company’s discovery of these things, it would be bad. But otherwise, this is doing the research that I expect happens in “black hat” labs and foreign agencies anyway.

                                                  On the other hand, the EFF pretty much has to wag its finger at every misdeed. Its position prevents it from saying “oh, but this is OK, I guess,” since doing so would invite the argument that something much “worse” is acceptable too.

                                                  1. 1

                                                    Did you miss the reference to the Vulnerabilities Equities Process?

                                                    1. 3

                                                      What about it? The equities process is a joke; it’s actually a little insulting to everyone’s intelligence. You can’t use a bug for a few months to compromise high-profile targets and then disclose it; the act of disclosing it stands a very good chance of alerting those targets that you compromised them, and how you did it.

                                                  1. 5

                                                    A marvelous article.

                                                    I’m not sure if “lost medium” is quite the right term. For “lost medium”, I think of the photo-copied anarchist “zines” of my youth, papyrus scrolls or usenet net news.

                                                      Maybe “lost ontology” or “lost order of life”. Oddly, I feel like this is an order of life whose loss people are acutely aware of today, through the “where’s my flying car” refrain and so forth.

                                                    1. 1

                                                      I agree. I thought this was going to be about magazines, books, vinyl, or maybe the telegraph. A realistic title might be more along the lines of “Effects of technology and UX metaphors on individuals and society.”

                                                    1. 3

                                                        There is also D, the language of Darwen and Date’s The Third Manifesto, in addition to the D that is a better C++.

                                                      1. 2

                                                        It seems like this basically doesn’t need to be said.

                                                          The national security apparatus has been the most influential wing of the Federal Government for quite some time. Under Obama, it has only become more influential, just as it did under Bush.

                                                          The only reason the state might feel some urge to pardon Snowden is public relations. Many people feel Snowden did a good thing. I’m pretty sure the NSA/security-apparatus folks feel not only that Snowden betrayed them by violating their secrecy, but that what he accomplished, greater scrutiny of the NSA, was working against them regardless of how he did it.

                                                        1. 9

                                                          Well,

                                                            I think everyone can agree that OO as most aggressively sold in the 90s was a flawed fiction. In that pitch, the idea was that one could simply make classes, these would become reusable code, and one could just apply one’s intuition about real-world objects and ontologies and they would become effective program designs. Of course this is wrong.

                                                            You can improve the OO paradigm by making it more circumspect, or jump to an entirely different approach.

                                                            But I’d also note that “the problems themselves remain”. Which is to say that I see the fundamental problem as what happens when one tries to carry everyday ontologies and object-intuitions into the world of computer science.

                                                            The “diamond problem” and related questions ultimately wind up as problems of reducing a fuzzy world to a clear hierarchy. And this problem exists regardless of the programming and design paradigm you use.

                                                          So I suppose my main point is that design is going to be hard no matter what the paradigm - which isn’t really a statement about which paradigm is better but rather just a statement that design is hard and changing paradigms isn’t going to eliminate that hardness.

                                                          1. 1

                                                            The “diamond problem” and related questions wind-up ultimately problems of reducing a fuzzy world to a clear hierarchy. And this problem exists regardless of the programming and design paradigm you use.

                                                            Not necessarily. You can have objects without inheritance, as Go does, which avoids the diamond problem.

                                                            1. 3

                                                              @joe_the_user might be referring to the fact that, inheritance or not, humans organize information hierarchically and defining that hierarchy wrong leads to awkward situations. In OO with inheritance it becomes an unfortunate code artifact. Without inheritance it becomes an awkward documentation/understanding artifact. But either way, the problem of coming up with the correct hierarchy exists.

                                                              1. 1

                                                                Yes, exactly.

                                                              2. 1

                                                                Or allow single inheritance only, but no multiple inheritance (And no getting around it by using interfaces, hi Java!) so you can only get a “line” configuration, never a “diamond”.

                                                                1. 2

                                                                  Or you just use multiple inheritance, but without the diamond problem.

                                                                    The diamond problem is an issue with the specific design approach C++ picked. It’s not inherent to multiple inheritance.
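                                                                    For what it’s worth, Ruby’s mixins are a concrete example: multiple modules can contribute methods, but they are linearized into a single ancestor chain, so method lookup is always unambiguous. A small sketch (the names here are mine):

```ruby
# Two modules both define `move`; Ruby linearizes them into one
# ancestor chain instead of creating a diamond.
module Walker
  def move
    "walking"
  end
end

module Swimmer
  def move
    "swimming"
  end
end

class Duck
  include Walker
  include Swimmer # included last, so it comes first in the lookup chain
end

Duck.new.move           # => "swimming"
Duck.ancestors.first(3) # => [Duck, Swimmer, Walker]
```

                                                                    The “conflict” is resolved by a deterministic rule (last include wins) rather than by a compile error, which is one way a language can keep multiple inheritance of behavior without C++’s diamond ambiguity.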

                                                            1. 4

                                                              While this article raised some interesting points, the title was unnecessary clickbait - and there was a better article on the issue, Why Would a Math Teacher Punish a Child for Saying 5 x 3 = 15?, linked to from the article.

                                                              1. 12

                                                                I don’t know why, but this linked article made my blood boil. What the kid did was FULLY CORRECT. If you’re going to argue that the teacher was correct, then you have to explain why the instructions were INCOMPLETE! Sorry. I get angry because this hits the child really, really hard. It tells them the world is random and stupid people will deduct points (reject you, beat you up, slander you, pass you in life) even when you did everything logically. It’s a lesson that can wait till they are 20.

                                                                1. 6

                                                                  Having been a math teacher in the past, I feel like you exactly described how many elementary teachers view their mission.

                                                                  It tells them the world is random and stupid people will deduct points (reject you, beat you up, slander you, pass you in life) even when you did everything logically.

                                                                  Having been a child in the past, I’ve found there’s nothing a child can do that adults dislike more than an appeal to logic.

                                                                  1. 3

                                                                    Maybe it is “fully correct” in the context of your math instruction, but have you considered that maybe the teacher taught an arbitrary convention in the hopes of improving student understanding, and that the problem was likely testing comprehension of that convention?

                                                                    In other words, the instructions are potentially not “incomplete” when considered in the context of the teacher’s instruction.

                                                                    1. 4

                                                                      The student followed the teacher’s instruction precisely; they just did an implicit commutation of the multiplication (if I were a math teacher, I’d be delighted to see this). In fact, the student showed that they had smartly applied the method by choosing the variant that is faster to calculate (5+5+5 requires fewer steps than 3+3+3+3+3).

                                                                      And yes, I agree: the teacher tested the method they explained in class, when the test should probably be about whether the student can solve the problem, with the method given only as a hint. Probably the issue is that the folks who wrote the math standard require teachers to pose such questions in tests, in order to make sure teachers follow their program.

                                                                      Problems that test whether a student can solve a problem using a given method are not bad per se. E.g., a problem that asks students to conduct a proof by mathematical induction is perfectly fine in my opinion. But they are not for arithmetic at the elementary-school level. If there are 5 ways to do a multiplication, and you punish your students for using 4 out of these 5 methods because they are not the method spelled out in the syllabus, you are doing it wrong.

                                                                      1. 2

                                                                        Having taught some public school, having gone through public school as something of a problem child, and having gotten an MA in mathematics, I’d say that elementary school teachers indeed often teach and impose arbitrary conventions in mathematics. This imposition may be to “further understanding” or for some other reason (I suspect just the urge to impose discipline generally).

                                                                        The imposition of such arbitrary rote conventions, however, actually expresses these teachers’ utter ignorance of the spirit and practice of mathematics and, if anything, seems like one of the reasons that math is so unpopular in the US in particular. Basically, understanding math requires one to understand that “there’s more than one way of doing it,” and the arbitrary convention puts the student in the position of being programmed rather than acting as the programmer. I’d also add that I think the biggest motivation for such conventions is to guarantee classroom order by making sure no one clever finishes their work early.

                                                                        So I have to sympathize with kghose. I too, feel my blood boil when reading articles of this sort.

                                                                      2. 3

                                                                        There’s a difference between three baskets of five apples and five baskets of three apples. I’m going to guess that was part of the multiplication lesson; it was definitely part of the curriculum when I was in school.

                                                                        1. 5

                                                                          The difference is only there if the multiplication operator somehow also has the side effect of adding baskets.

                                                                          1. 5

                                                                            When you want to obtain the total number of apples, it doesn’t matter how you partition them, and you can also redistribute the apples from 3 baskets into 5.

                                                                            There is a problem with your elementary-school math class when its problems are not correctly solvable by a mathematics professor ;)

                                                                            1. [Comment removed by author]

                                                                              1. 6

                                                                                I doubt the teacher who subtracted a point is unfamiliar with that equality.

                                                                              2. 1

                                                                                What is the value of showing a child how to do multiplication of different kinds of numbers? Seems a bit advanced given that really we should be showing that multiplication is commutative before showing where that’s not always the case.

                                                                                1. 6

                                                                                  The obvious question is, what is multiplication? It’s repeated addition. Repeating what? Exactly. Which operand is repeated and which is the repetition count? That’s something you’ll want to get straight before you start raising powers, which is repeated multiplication.

                                                                                  It’s weird that people simultaneously shit on math education for teaching rote memorization and also shit on them for teaching concepts.
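                                                                                  The convention in question is easy to make concrete. Here is a sketch (the function names are mine) where the first operand is the thing repeated and the second is the count, with exponentiation built the same way one level up:

```ruby
# Multiplication as repeated addition: repeat `a`, `n` times.
def times_of(a, n)
  n.times.reduce(0) { |acc, _| acc + a }
end

# Exponentiation as repeated multiplication: multiply by `a`, `n` times.
def power_of(a, n)
  n.times.reduce(1) { |acc, _| times_of(acc, a) }
end

times_of(5, 3) # => 15, via 5 + 5 + 5
power_of(2, 3) # => 8,  via 2 * 2 * 2
```

                                                                                  For multiplication, the choice of which operand repeats doesn’t change the answer, but for powers it does: power_of(2, 3) and power_of(3, 2) differ, which is presumably why teachers want the roles kept straight before students get to exponents.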

                                                                                  1. 2

                                                                                    It’s weird that people simultaneously shit on math education for teaching rote memorization and also shit on them for teaching concepts.

                                                                                    The double-edged sword of teaching. The teaching profession doesn’t help the problem, though - there is little evidence of what actually works in teaching. Some good randomised controlled research with hundreds of thousands of students would be a good start, although any evidence that proved what worked would probably be ignored by politicians….

                                                                              3. 1

                                                                                It tells them the world is random and stupid people will deduct points (reject you, beat you up, slander you, pass you in life) even when you did everything logically.

                                                                                a lesson well learned early in life.

                                                                            1. 3

                                                                              I’ve been faced with this problem lately - finding a robust, online way to calculate the mean of a stream of incoming numbers. It is indeed a harder problem than it seems.

                                                                              My approach is to take an N-ary tree and prune the branches that aren’t needed. So, effectively, for X numbers, I’d be keeping log_N(X) nested running averages: one over the last 1 to N values, updated on each insert; one over the last N to N×N values, updated every N inserts (i.e., the running average of the previous complete running averages); and so on, repeated log_N(X) times, plus balancing/rollover operations, which boil down to adding a new node at the top. At any point, the mean is an appropriately weighted combination of these running averages.

                                                                              Each running average involves summing P numbers, where P < N, and then dividing by N - so you need a double-double, but you should get “minimal” error overall; you shouldn’t get accumulating error or other bad things.

                                                                              If anyone knows of, or can think of, any holes/gotchas in this approach, I’d love to hear them.

                                                                              1. 3

                                                                                What goes wrong with the naive streaming approach of m_1=x_1, m_n = m_(n-1) * (n-1)/n + x_n/n (possibly with some standard fp math tricks I don’t know about)?
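                                                                                In code, that naive update looks like this (a minimal sketch):

```ruby
# Incremental mean: m_n = m_(n-1) * (n-1)/n + x_n / n.
# One floating-point division per element, so rounding error can
# accumulate over a long stream of inserts.
def streaming_mean(xs)
  mean = 0.0
  xs.each_with_index do |x, i|
    n = i + 1
    mean = mean * (n - 1) / n + x.fdiv(n)
  end
  mean
end

streaming_mean([1.0, 2.0, 3.0, 4.0]) # => 2.5
```

                                                                                Each update perturbs the mean slightly, so the question is how those per-insert perturbations behave as n grows.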

                                                                                1. 3

                                                                                  The problem I’d see with the naive running average is that as n becomes large, (n-1)/n is going to round to 1 and 1/n to zero, and you’re screwed. Plus, errors accumulate with each insertion.

                                                                                  The virtue I’d claim for my approach is that you’re never dividing by more than a fixed constant. And you can make most of the divisions be by this constant, which you can choose to be a power of 2, which should give minimal error if done appropriately.

                                                                                2. 2

                                                                                  It seems like something that would be in the published literature, but google scholar isn’t finding much for me. This paper has something related, an addition algorithm that minimizes error: http://dx.doi.org/10.1109/ARITH.1991.145549 - maybe it could be inspiration, or there might be something useful in that journal / by that author?

                                                                                  1. 2

                                                                                    Ah, it seems like anything that emulates arbitrary precision arithmetic would naturally guarantee exactness. And if you keep a running sum with arbitrary precision arithmetic and divide only at the end, the resulting algorithm is more or less identical to the approach I have been thinking of - if you break the process down into operations on regular floats.
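                                                                                    Ruby can demonstrate the sum-then-divide version directly, since its integers are arbitrary precision and Rational is exact:

```ruby
# Keep an exact running sum (Integer and Rational never round),
# and divide only once at the end.
def exact_mean(xs)
  xs.sum(0r) / xs.length
end

exact_mean([1, 2, 4])      # => (7/3)
exact_mean([1, 2, 4]).to_f # convert to Float only at the boundary
```

                                                                                    This only covers exact inputs, of course; for a stream of Floats you’d first have to decide what “exact” means for values that have already been rounded.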

                                                                                1. 6

                                                                                  Praise be our lord, his name is Integer.

                                                                                  1. 3

                                                                                    You still need double-length integers for “exact” average, tho.

                                                                                    1. 2

                                                                                      So when you take the mean of a list of integers, do you use integer division? Is your final result also an integer? Do you sum the integers forever, use “infinite precision” arithmetic and return a fraction? I’m trying to think how this would work.

                                                                                    1. 1

                                                                                      All computation is “symbolic computation”.

                                                                                      Lisp puts more emphasis on code and data being the same thing. But ultimately there is no absolute difference between code and data in any language; any Turing-complete language lets you write a program that treats its input data as logic.

                                                                                      1. 3

                                                                                        Lisp puts more emphasis on symbolic computation (concise expressions of lists and symbols) than the historical alternatives (FORTRAN, C, Pascal).

                                                                                      1. 8

                                                                                        I am one of the non-functional programmers who’s still skeptical. I’m learning Haskell very slowly in my spare time. I’m also a math person but I generally don’t find myself appreciating FP’s math-y-ness.

                                                                                        Anyway, responses to the slides:

                                                                                        1) Pure functions - I think an appreciation of pure functions extends to multiple domains. I’d say standard imperative or OO design would say “better pure than impure” but wouldn’t insist on pure all the time. FP’s focus on pure functions seems to involve insisting on purity everywhere.

                                                                                        2) Immutable data - This doesn’t seem like good stuff relative to monads. Rather, monads are a counter-intuitive approach that is extremely useful for dealing with data when you want to maintain the assumption that the data is immutable. Immutable data seems like a real mess if one is going to quickly write, say, an Asteroids-like game (a programmer friend told me about spending two months trying and failing to write Asteroids in Haskell). After reading more, I found the argument that if Asteroids is considered a simulation, then having each cycle of the simulation conditionally produce a bunch of diffs and then putting them together is a better way - and that can very appropriately be done with monads.

                                                                                        And all this adds up, in my mind, to something like FP being a “good, right way to design a program” and FP languages being tools that enforce this goodness. Which makes FP very much not analogous to OO design, and FP languages very much not analogous in purpose to OO languages. Especially since OO is much more like “just a tool” - you can use OO techniques to add another datasource to some huge piece of enterprise software by creating an interface that makes the new datasource look like the old datasource. And the rest of the program wouldn’t even need to be designed in a sane, OO way at all. Such an action may be akin to an organ transplant for a dying patient, but it’s still the stuff that many programmers’ jobs are made of.

                                                                                        FP isn’t an alternative to OO as such, but an alternative to giant craptastic enterprise software overall. I remain skeptical.

                                                                                        But anyway, I suppose my main conclusion is that the level FP operates at often isn’t made clear by its proponents. It would be easier to understand if presented as “a discipline for producing a globally superior architecture” rather than a generically better programming “approach,” since the latter doesn’t make clear what level the approach operates on.

                                                                                        Or you can consider this my effort to describe what I think I understand and tear into it as you will.

                                                                                        1. 9

                                                                                          So, I’ll bite. I am a “functional programmer,” I guess you could say, in that I’ve taught myself a lot about functional languages, do problems and side projects in Haskell, and so on; but my day job is and has always been in imperative languages. For the last three years, I’ve written Java and Ruby. Before that, I wrote Python and PHP, and now I’m back to Python again. The omnipresent scourge of Javascript haunts me everywhere I go, of course, but that goes without saying.

                                                                                          I use functional principles when designing, refactoring, and critiquing code in all of these languages. I find that it gives me a clarity of purpose in places where other methodologies are “fuzzy,” where there aren’t easy answers or the answers that you can find are merely questions of taste. The approach I take centers around purity, immutability, transparency, and transformation-oriented code. I will use (anonymized) examples from a code review that I did today.

                                                                                          def generate_something(config, a_flag=true)
                                                                                              unless config['unrelated']
                                                                                                  fail "Cannot generate anything without the unrelated values!"
                                                                                              end
                                                                                          
                                                                                              result = []
                                                                                          
                                                                                              config['things'].each do |thing|
                                                                                                  next if thing['foo']
                                                                                          
                                                                                                  result << "#{@bar}/#{@baz}: #{thing}"
                                                                                                  result << "#{@bar}/#{@quux}: #{thing}" if a_flag
                                                                                              end
                                                                                          
                                                                                              result
                                                                                          end
                                                                                          

                                                                                          This is pretty typical code, I think, that people wouldn’t bat an eye at. It’s not beautiful, but it serves a purpose, and usually people wouldn’t be tempted to refactor it. In this code review, I made several points:

                                                                                          1. The test for the config['unrelated'] flag should be done elsewhere, because the values aren’t used inside the function.
                                                                                          2. The loop can be clearer if expressed as a transformation of a list.
                                                                                          3. Pass in only the data that is necessary; don’t use objects that know too much.
                                                                                          def generate_something(things, a_flag=true)
                                                                                              things.select { |t| !t['foo'] }.flat_map do |t|
                                                                                                  baz = "#{@bar}/#{@baz}: #{t}"
                                                                                                  quux = "#{@bar}/#{@quux}: #{t}" if a_flag
                                                                                                  a_flag ? [baz, quux] : [baz]
                                                                                              end
                                                                                          end
                                                                                          

                                                                                          This is a more functional approach: it has less access to the outside world. It’s not modifying anything internally. Its loop is a transformation. This method almost has the type [Thing] -> Bool -> [String]. Almost, because there are those pesky instance variables. If I wanted to completely make this functional, I’d have the method accept a different argument:

                                                                                          def generate_something(things, string_templater)
                                                                                              things.select { |t| !t['foo'] }
                                                                                                  .flat_map { |t| string_templater.call(t) }
                                                                                          end
                                                                                          

                                                                                          This would give the method the type [Thing] -> (Thing -> [String]) -> [String]. It isn’t necessary, because the implicit self object is actually an argument to all methods, but that’s how it could proceed, if needed.
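                                                                                          For illustration, a caller might supply the templater as a lambda that closes over the formatting state (all of the names here are made up):

```ruby
# The functional version: pure selection and transformation, with the
# string formatting injected as a function.
def generate_something(things, string_templater)
  things.select { |t| !t['foo'] }
        .flat_map { |t| string_templater.call(t) }
end

bar = 'logs'
templater = ->(t) { ["#{bar}/a: #{t['name']}", "#{bar}/b: #{t['name']}"] }

things = [{ 'name' => 'x' }, { 'name' => 'y', 'foo' => true }]
generate_something(things, templater)
# => ["logs/a: x", "logs/b: x"]
```

                                                                                          Because the templater is just a function, it can be tested in isolation and swapped out without touching the loop.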

                                                                                          Why learn to do this? Ultimately, it’s because I like to have a set of principles which I can trust in the context of almost any problem. I think that many of the object oriented techniques that we have learned, in particular the SOLID principles and some “design patterns,” are halfway to functional programming from the “real world” side. That’s great! It’s good to have techniques that are proven in practice rather than theory; but without the theory, it starts to become difficult to know how to apply those principles in places where you don’t have any practice.

                                                                                          As another example, consider the “Null Object” pattern. Null is a problem throughout object oriented programming. Tony Hoare famously called it his “billion dollar mistake.” Fine: we can address it with the “Null Object” pattern, right? Except that, in order to make consistent use of this pattern, you need to implement a null object for every class in your system. You have to religiously use them, and they don’t even eliminate null checks.

                                                                                          if (pet == null) pet = Pet.NULL_PET;
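
                                                                                          In Ruby, a minimal sketch of the pattern (Pet and NullPet are hypothetical names) makes the cost concrete: every class needs its own hand-written null object, and the guard still shows up at the boundary:

                                                                                          ```ruby
                                                                                          # Hypothetical Pet class plus its hand-written null object.
                                                                                          class Pet
                                                                                              attr_reader :name

                                                                                              def initialize(name)
                                                                                                  @name = name
                                                                                              end

                                                                                              def speak
                                                                                                  "#{name} says hi"
                                                                                              end
                                                                                          end

                                                                                          class NullPet
                                                                                              def name
                                                                                                  '(no pet)'
                                                                                              end

                                                                                              def speak
                                                                                                  '' # do-nothing behavior instead of nil
                                                                                              end
                                                                                          end

                                                                                          NULL_PET = NullPet.new

                                                                                          pet = nil
                                                                                          pet = NULL_PET if pet.nil? # the null check the pattern doesn't remove
                                                                                          pet.speak # => ""
                                                                                          ```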
                                                                                          

                                                                                          On the other hand, if you approach this from a category theoretical perspective, you get… monads. The semantics of Maybe a work for all types a, because they must. It’s just math. There are no null checks because there are no nulls; if a null result is possible, then you’ll return Maybe Pet and be done with it.
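
                                                                                          For contrast, a toy Maybe in Ruby (a sketch, not a real library; `fmap` and `get_or_else` are invented names here) keeps the nil handling inside the type, so callers never branch on nil:

                                                                                          ```ruby
                                                                                          # Minimal Maybe: wraps a possibly-nil value; 'fmap' only runs
                                                                                          # the block when a value is present, otherwise it propagates
                                                                                          # the empty case untouched.
                                                                                          class Maybe
                                                                                              def initialize(value)
                                                                                                  @value = value
                                                                                              end

                                                                                              def fmap
                                                                                                  @value.nil? ? self : Maybe.new(yield(@value))
                                                                                              end

                                                                                              def get_or_else(default)
                                                                                                  @value.nil? ? default : @value
                                                                                              end
                                                                                          end

                                                                                          Maybe.new('Rex').fmap { |n| n.upcase }.get_or_else('no pet') # => "REX"
                                                                                          Maybe.new(nil).fmap { |n| n.upcase }.get_or_else('no pet')   # => "no pet"
                                                                                          ```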

                                                                                        1. 6

                                                                                          Wow,

                                                                                          With confusion at this level now, among people who appear informed, it seems like the confused ideas will really multiply when or if we get closer to flexible human-equivalent AI.

                                                                                          How human society will deal with the creation of a “strong” AI is a difficult question and viewing them as human-equivalent isn’t going to help.

                                                                                          1. 3

                                                                                            What is confused about this? You’re not giving enough detail about which points are incorrect. This clearly isn’t a prediction of what will happen, but rather waxing philosophical about what should happen. If it is conscious, and can communicate, should it not be treated with the dignity that other living beings are given? It might not care about life, or liberty, or the pursuit of happiness, but that does not mean that what it desires should be disregarded, provided that it does not impinge upon the rights of other sentient beings. We are all in this boat together; surely wisdom and history have shown us that if we are to work together in the years to come, we should not act out of fear. If we can’t treat other sentient beings with respect, perhaps we shouldn’t build them.

                                                                                          1. 3

                                                                                            I think Dreyfus' critique was less absolute than people are imagining.

                                                                                            For example, Dreyfus seems open to the possibility of “Heideggerian AI”; see “Why Heideggerian AI Failed and how Fixing it would Require making it more Heideggerian”: http://cid.nada.kth.se/en/HeideggerianAI.pdf