Threads for greg_kennedy

  1. 8

    At work I have spent the last week wondering about the fundamental conflict between branch coverage and asserts: it makes no sense to write a test which exercises the false case of an assert. If you removed the assert, the code would be just as correct as before, but the test would fail.

    SQLite’s answer to this dilemma is to distinguish between running the tests and validating them via coverage. Asserts are compiled out during the coverage check, so there is no temptation to write test cases for the false branches of asserts.

    The presentation explains this philosophy as “Measuring coverage validates your tests, not your product” and gives some concrete examples for ALWAYS and NEVER. Good stuff.
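
    For reference, the ALWAYS/NEVER idea can be sketched in C roughly like this (a simplified version along the lines of what the presentation describes, not SQLite’s exact macros): in release builds the condition is still checked at runtime, in debug builds a violation trips an assert, and in coverage builds the macros collapse to constants so the “impossible” branch no longer shows up as untested.

      #include <assert.h>

      #if defined(COVERAGE_TEST)
      # define ALWAYS(X)  (1)                          /* coverage build: branch collapses        */
      # define NEVER(X)   (0)
      #elif !defined(NDEBUG)
      # define ALWAYS(X)  ((X) ? 1 : (assert(0), 0))   /* debug build: a violation aborts          */
      # define NEVER(X)   ((X) ? (assert(0), 1) : 0)
      #else
      # define ALWAYS(X)  (X)                          /* release build: plain runtime check       */
      # define NEVER(X)   (X)
      #endif

      /* usage: */
      int get_value(int *p) {
          if (NEVER(p == 0)) return -1;   /* "can't happen" -- but still handled in release */
          return *p;
      }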

    1. 2

      Yeah this stuck out to me too, and reading their docs, it helps me make sense of something I hadn’t understood before:

      • a DEBUG build of software executes assert(),
      • but a PRODUCTION one doesn’t, and so
      • why do some programs use it as though it were for error checking or argument validation, when those checks go away in the release build?

      The common example in the wild is something like assert( ptr = malloc(sizeof(int)) ); which is dangerous in release builds.
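
      A quick sketch of why that bites (the variable names are made up for illustration): with NDEBUG defined, as is typical for release builds, the entire expression inside assert() is compiled out, so the malloc() never runs at all.

        #include <assert.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            int *ptr = NULL;
            /* Debug build: allocates and aborts if malloc() returned NULL.       */
            /* Release build (-DNDEBUG): assert() expands to nothing, malloc() is */
            /* never called, and ptr stays NULL.                                  */
            assert((ptr = malloc(sizeof(int))) != NULL);
            if (ptr != NULL) { *ptr = 42; printf("%d\n", *ptr); free(ptr); }
            return 0;
        }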

      SQLite’s solution is that assert() is more like a smart comment: You can use it to “enforce” contracts in debug builds, but it ALSO doubles as a reminder to someone reading the code that this value really must, yes, be defined.

      Given that, I actually have some software that could benefit from adding assert(): there are internal functions which call other internal functions, and these were originally coded defensively like

      if(arg1 == NULL) { fprintf(stderr, "arg1 cannot be null\n"); exit(EXIT_FAILURE); }

      but I started feeling silly programming defensively against myself. So I went and took care to make sure the callers never pass NULL, removed the argument validation, and now I feel unsafe again. assert() to the rescue!
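
      Concretely, something like this is what I have in mind (the function and argument names are made up for the example): the contract sits right at the top of the callee, is checked in debug builds, and costs nothing in release builds.

        #include <assert.h>
        #include <stddef.h>

        /* internal helper -- callers are expected to pass a non-NULL pointer */
        static void frob(int *arg1) {
            assert(arg1 != NULL);   /* checks the contract in debug builds, documents it always */
            *arg1 += 1;             /* placeholder for the real work */
        }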

      1. 2

        Slow and steady wins the race?

      1. 3

        I use Perl regularly at work. I think it’s slowly being replaced with Python but there are a lot of experienced devs who used it all the time and still reach for it.

        I doubt anyone there will go to Perl 6 though. Any learning curve is too much.

        1. 4

          No comments?

          I think the C++ graphics proposal is a very bad idea, an overreach, unnecessary, something we should use libraries to do instead, etc etc.

          1. 1

            I’ll bite. I recently proposed doing something for Rust which is remarkably similar in scope and goals to the Cpp graphics proposal; it is all about making 2D graphics programming easier (and cross-platform), but deliberately doesn’t chase the edge of performance or advanced capabilities. There’s other work in this space too; nical’s blog, for example, has many interesting things about the fusion of “modern graphics” and 2D.

            So most of the criticism of the Cpp graphics proposal applies to my idea as well, except one - I’m proposing mine as a library. So if someone wants something for a quick visualization or a fun 2D game, I hope they reach for my (yet to be done) crate. If they need something else, there’s no official imprimatur or anything else forcing them to use it. This is perhaps one of the biggest weaknesses of the C++ ecosystem - why can’t they have nice things outside the scope of the standard library?

            1. 1

              Having skimread the proposal (This is the one we’re talking about, isn’t it?), my first thoughts were similar. Which part of this proposal needs to be part of the C++ standard? Why can’t they just make a library instead?

              If it’s a useful library, people will use it in applications and base other libraries on it. In doing so, they will work out whether it is well designed and provides a useful bridge between the programmer and their graphics hardware. If it is useful to a wide range of users, you have a de facto standard, and you can either ‘bless’ it and turn it into an official standard, or just leave it as it is because it’s not that hard to use a library. If it turns out to be rather niche in its applications, you can keep it as a library for those who want it. If it turns out to be a poor fit for the graphics hardware of today or tomorrow you can abandon it and avoid all the time and effort involved both in getting it into the C++ standard and in implementing it once it’s standardised.

              I really do think that the biggest improvement to the C++ standard would be the addition of a standardised cargo-style way of pulling in packages. With that in place, the motivation to stuff every possible library into std, because that’s currently the only way to make it easily and widely accessible, would evaporate.

          1. 3

            When this falls flat it’s called a “Dummymander” instead. And this does happen in real-world examples, including my home state of Arkansas. In 2010 the Democrats redrew the map as an attempt to pick up another House seat. Instead, they spread their base too thin, and ended up tossing the entire state to Republicans.

            https://www.dailykos.com/stories/2013/1/2/1174641/-How-Dems-Helped-Cost-Ourselves-the-House-The-Arkansas-Dummymander

            Also referenced in there is this paper about the 1992 Gerrymander attempt in Georgia, which turned a 9D-1R advantage in 1991 into a 3D-8R split in 2001.

            http://www.socsci.uci.edu/~bgrofman/140%20Grofman%20and%20Brunell.%20%202005%20%20The%20Art%20of%20the%20Dummymander....pdf

            1. 2

              Something you might want to try instead is looking for an open-source closed-captioning tool. If you have a variety of news sources / speeches / etc etc that you run CC on, then you can create a big database of speech transcriptions. Later, when you have a “suspicious” recording, you could CC that too and then do an index lookup against your DB to see how closely it matches original context.

              It’s not the same as detecting edits “from nowhere”, but it could be useful for quickly spotting shady editing for high-profile sources like political figures and celebrities.

              1. 1

                I’d considered automatic transcription, then falling back to the harder stuff, but I’m concerned that the quality might not be enough. However, presumably, the transcription system would output the same text for the same input, even if cut up, right?

                1. 10

                  This claims to be 5x faster than a non-parallel version, which strongly implies that du is somehow(?) limited by computation speed instead of by disk access time, which is making me question the very nature of the universe. Can anyone explain that?

                  1. 19

                    No, it’s limited by disk access time, but all the disk access in du is done sequentially: stat(), stat(), stat(), … Every call must complete before the next, so your queue depth to the disk is only 1. If a dozen threads issue stat() at the same time, the disk can service multiple reads at once. A spinning disk has a latency somewhere around 10ms (an SSD far less), but either can complete more than one request per interval given the opportunity.
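
                    As a rough sketch of that idea (not the actual code of du or of the tool being discussed; the path list is a placeholder), a few threads can pull paths off a shared index and call stat() concurrently, so several requests are outstanding at once instead of one:

                      #include <pthread.h>
                      #include <stdatomic.h>
                      #include <stdio.h>
                      #include <sys/stat.h>

                      #define NTHREADS 12

                      static const char *paths[] = { "/etc/passwd", "/etc/hosts", "/etc/fstab" };
                      static const size_t npaths = sizeof paths / sizeof paths[0];
                      static atomic_size_t next_index;
                      static atomic_llong total_bytes;

                      static void *worker(void *arg) {
                          (void)arg;
                          for (;;) {
                              size_t i = atomic_fetch_add(&next_index, 1);   /* claim the next path */
                              if (i >= npaths) break;
                              struct stat st;
                              if (stat(paths[i], &st) == 0)                  /* many of these can be in flight at once */
                                  atomic_fetch_add(&total_bytes, (long long)st.st_size);
                          }
                          return NULL;
                      }

                      int main(void) {
                          pthread_t tid[NTHREADS];
                          for (int t = 0; t < NTHREADS; t++) pthread_create(&tid[t], NULL, worker, NULL);
                          for (int t = 0; t < NTHREADS; t++) pthread_join(tid[t], NULL);
                          printf("%lld bytes\n", (long long)atomic_load(&total_bytes));
                          return 0;
                      }

                    (Build with cc -pthread; the real-world speedup depends on how many requests the device can actually service in parallel.)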

                    1. 5

                      Isn’t the solution then “make du parallel” instead of “rewrite du in a new language and introduce a second utility”?

                      1. 9

                        I predict people might ask if the performance of du is actually such a bottleneck to justify the complexity. If somebody posted a patch to pthread openbsd du, I wouldn’t immediately jump on board.

                        You can sidestep such questions by dropping the code on github. Everybody says, wow, cool, that’s great, but nobody is required to make a decision to use or maintain such a tool.

                        Maybe “it’s good enough, leave it alone” holds us back. Maybe “let’s change everything and hope it’s better” keeps us going in circles.

                        cc @caleb

                        1. 2

                          Relatively difficult call; the performance of du has definitely bothered me at times, but parallel C code is several kinds of hard to get right.

                      2. 0

                        any reason du shouldn’t be modified to use this approach?

                      3. 3

                        It’s probably multi-faceted (as with any benchmark data). I believe du (and dup probably) look at metadata to find file sizes, so I don’t think there is much bandwidth required. On top of that, if the benchmark is using NAND flash storage, the firmware may be able to parallelize read requests far better than an HDD can. Even then, issuing multiple read requests to the HDD driver can allow it to optimize its read pattern to minimize seek distance for the head. I would comment on multi-platter HDDs, but I honestly know very little about the implementation of HDDs and SSDs, so someone please correct me if I’ve said anything too far from the truth.

                        You can even see speed-up in multi-threaded writing, for example in asynchronous logging libraries: https://github.com/gabime/spdlog

                        EDIT: in short, the speedup is probably from parallelizing the overhead of file system access

                        EDIT2: The benchmarks on the spdlog readme are a bad example because it uses multiple threads to write to the same file, not multiple threads writing to separate independent files

                      1. 2

                        On one hand: I agree that DNS-over-HTTPS is a silly and convoluted solution.

                        On the other hand: DNS-over-TLS is a bad solution for the reason pointed out: it lives on its own port.

                        Question: Why do we need ports any more at all? It seems like if we didn’t have dedicated port numbers, but instead referred to resources by subdomain or subdirectory beneath the main hostname, then all traffic would be indistinguishable when secured by TLS.

                        1. 4

                          Could it have been possible for DNS-over-TLS to use 443 and make the server able to route DNS and HTTP requests appropriately? I’m not very knowledgeable about TLS. From what I understand it’s just a transport layer, so a server could simply read the beginning of an incoming message and easily detect whether it is an HTTP or DNS header?

                          1. 9

                            Yes, much like http2 works. It complicates the TLS connection because now it passes a hint about the service it wants, but that bridge is already crossed.

                          2. 4

                            IP addresses allow two arbitrary computers to exchange information [1], whereas ports allow two arbitrary programs (or processes) to exchange information. Also, it’s TCP and UDP that have ports. There are other protocols that ride on top of IP (not that anyone cares anymore).

                            [1] Well, in theory anyway, NAT breaks that to some degree.

                            1. 3

                              Ports are kinda central to packet routing as it has been deployed, if my understanding is correct.

                              1. 5

                                You need the concept of ports to route packets to the appropriate process, certainly. However, with DNS SRV records, you don’t need globally-agreed-upon port assignments (a la “HTTP goes to port 80”). You could assign arbitrary ports to services and direct clients accordingly with SRV.

                                Support for this is very incomplete (e.g. browsers go to port 80/443 on the A/AAAA record for a domain rather than querying for SRVs), but the infrastructure is in place.
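
                                For illustration, a hypothetical SRV record (the names, TTL, and port are made up) follows the form _service._proto.name TTL IN SRV priority weight port target:

                                  _http._tcp.example.com.  3600 IN SRV 10 5 8080 www.example.com.

                                A client that honoured it would connect to www.example.com on port 8080, so nothing about the port needs to be globally agreed upon.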

                                1. 5

                                  On what port do I send the DNS query for the SRV record of my DNS server?

                                  1. 1

                                    Obviously, you look up an SRV record to determine which port DNS is served over. ;)

                                    I don’t know if anyone has thought about the bootstrapping problem. In theory, you could deal with it the same way you already bootstrap your DNS (DHCP or including the port with the IP address in static configurations), but I don’t know if this is actually possible.

                                  2. 2

                                    You need the concept of ports to route packets to the appropriate process

                                    Unless we assign an IP address to every web facing process.

                                2. 1

                                  Problem: both solutions to private DNS queries have downsides related to the DNS protocol fundamentally having failed to envision a need for privacy

                                  Solution: radically overhaul the transport layer by replacing both TCP and UDP with something portless?

                                  The suggested cure is worse than the disease in this case, in terms of the sheer amount of work it would require and the hardware and software that would have to be completely replaced.

                                  1. 2

                                    I don’t think DNS is the right place to do privacy. If I’m on someone’s network, he can see what IP addresses I’m talking to. I can hide my DNS traffic, but he still gets to see the IP addresses I ultimately end up contacting.

                                    Trying to add privacy at the DNS stage is doing it at the wrong layer. If I want privacy, I need it at the IP layer.

                                    1. 4

                                      Assuming that looking up an A record and making a connection to that IP is the only thing DNS is used for.

                                      1. 3

                                        Think of CDN or “big websites” traffic. If you hit Google, Amazon, or Cloudflare datacenters, nobody will be able to tell if you were reaching google.com, amazon.com, cloudflare.com or any of their customers.

                                        Currently, this is leaking through SNI and DNS. DoH and Encrypted SNI (ESNI) will improve on the status quo.

                                        1. 2

                                          And totally screws small sites. Or is the end game centralization of all web sites to a few hosts to “protect” the privacy of users?

                                          1. 2

                                            You can also self-host more than one domain on your site. In fact, I do too. It’s just a smaller set :-)

                                            1. 1

                                              End game would be VPNs or Tor.

                                            2. 2

                                              Is that really true? I thought request/response metadata and timing analysis could tell them who we were connecting to.

                                              1. 2

                                                Depends who they are. I’m not going to do a full traffic dump, then try to correlate packet timings to discover whether you were loading gmail or facebook. But tcpdump port 53 is something I’ve actually done to discover what’s talking to where.

                                                1. 1

                                                  True. Maybe ESNI and DoH are only increasing the required work. Needs more research?

                                                  1. 1

                                                    Probably, to be on the safe side. I’d run it by experts in correlation analyses on network traffic. They might already have something for it.

                                                2. 2

                                                  nobody will be able to tell if you were reaching google.com, amazon.com, cloudflare.com or any of their customers.

                                                  except for GOOGL, AMZN, et al. which will happily give away your data, without even flinching a bit.

                                                  1. 1

                                                    Yeah, depends on who you want to exclude from snooping on your traffic. The ISP, I assumed. The Googles and Amazons of the world have your data regardless of DNS/DoH.

                                                    I acknowledge that the circumstances are different in every country, but in the US, the major ISPs actually own ad networks and thus have a strong incentive not to ever encrypt DNS traffic.

                                                    1. 1

                                                      Yeah, depends on who you want to exclude from snooping on your traffic. The ISP, I assumed. The Googles and Amazons of the world have your data regardless of DNS/DoH.

                                                      so i’m supposed to just give them full access over the remaining part which isn’t served by them?

                                                      I acknowledge that the circumstances are different in every country, but in the US, the major ISPs actually own ad networks and thus have a strong incentive not to ever encrypt DNS traffic.

                                                      ISPs in the rest of the world aren’t better, but this still isn’t a reason to shoehorn DNS into HTTP.

                                                      1. 1

                                                        No, you’re misreading the first bit. You’re already giving it to them, most likely, because of all those cloud customers. This makes their main web property indistinguishable from their clients, once SNI and DNS are encrypted.

                                                        No need to give more than before.

                                                        1. 1

                                                          You’re already giving it to them, most likely, because of all those cloud customers.

                                                          this is a faux reason. i try to not use these things when possible. just because many things are there, it doesn’t mean that i have to use even more stuff of them, quite the opposite. this may be an inconvenience for me, but it is one i’m willing to take.

                                                          This makes their main web property indistinguishable from their clients, once SNI and DNS are encrypted.

                                                          indistinguishable for everybody on the way, but not for the big ad companies on whose systems things are. those are what i’m worried about.

                                                          1. 1

                                                            Hm, I feel we’re going in circles here.

                                                            For those people who do use those services, there is an immediate gain in terms of hostname privacy (towards their ISP), once DoH and ESNI are shipped.

                                                            That’s all I’m saying. I’m not implying you do or you should.

                                                            1. 1

                                                              I’m not implying you do or you should.

                                                              no, but the implications of DoH are that i’ll end up using it, even if i don’t want to. it’ll be baked into the browsers, from there it’s only a small step to mandatory usage in systemd. regarding DoH in general: if you only have http, everything looks like a nail.

                                              2. 1

                                                Alternative solution: don’t use DNS anymore.

                                                Still lots of work since we need to ditch HTTP, HTTPS, FTP, and a host of other host-oriented protocols. But, for many of these, we’ve got well-supported alternatives already. The question of how to slightly improve a horribly-flawed system stuck in a set of political deadlocks becomes totally obviated.

                                                1. 3

                                                  That’s the biggest change of all of them. The whole reason for using DoH is to have a small change, that improves things, and that doesn’t require literally replacing the entire web.

                                                  1. 1

                                                    Sure, but it’s sort of a waste of time to try to preserve the web. The biggest problem with DNS is that most of the time the actual hostname is totally irrelevant to our purposes & we only care about it because the application-layer protocol we’re using was poorly designed.

                                                    We’re going to need to fix that eventually so why not do it now, ipv6-style (i.e., make a parallel set of protocols that actually do the right thing & hang out there for a couple decades while the people using the old ones slowly run out of incremental fixes and start to notice the dead end they’re heading toward).

                                                    Myopic folks aren’t going to adopt large-scale improvements until they have no other choice, but as soon as they have no other choice they’re quick to adopt an existing solution. We’re better off already having made one they can adopt, because if we let them design their own it’s not going to last any longer than the last one.

                                                    DNS is baked into everything, despite being a clearly bad idea, because it was well-established. Well, IPFS is well-established now, so we can start using it for new projects and treating DNS as legacy for everything that’s not basically ssh.

                                                    1. 8

                                                      Well, IPFS is well-established now

                                                      No it’s not. Even by computer standards, IPFS is still a baby.

                                                      Skype was probably the most well-established P2P application in the world before they switched to being a reskinned MSN Messenger, and the Skype P2P network had disasters just like centralized services have, caused by netsplits, client bugs, and introduction point issues. BitTorrent probably holds the crown for most well-established P2P network now, and since it’s shared-nothing (the DHT isn’t, but BitTorrent can operate without it), has never had network-wide disasters. IPFS relies on the DHT, so it’s more like Skype than BitTorrent for reliability.

                                                      1. 0

                                                        It’s only ten years old, sure. I haven’t seen any reliability problems with it. Have you?

                                                        DHT tech, on top of being an actually appropriate solution to the problem of addressing static chunks of data (one that eliminates whole classes of attacks by its very nature), is more reliable now than DNS is. And, we have plenty of implementations and protocols to choose from.

                                                        Dropping IPFS or some other DHT into an existing system (like a browser) is straightforward. Opera did it years ago. Beaker does it now. There are pure-javascript implementations of DAT and IPFS for folks who can’t integrate it into their browser.

                                                        Skype isn’t a good comparison to a DHT, because Skype connects a pair of dynamic streams together. In other words, it can’t take advantage of redundant caching, so being P2P doesn’t really do it any favors aside from eliminating a single point of failure from the initial negotiation steps.

                                                        For transferring documents (or scripts, or blobs, or whatever), dynamism is a bug – and one we eliminate with named data. Static data is the norm for most of what we use the web for, and should be the norm for substantially more of it. We can trivially eliminate hostnames from all asset fetches, replace database blobs with similar asset fetches, use one-time pads for keeping secret resources secret while allowing anyone to fetch them, & start to look at ways of making services portable between machines. (I hear DAT has a solution to this last one.) All of this is stuff any random front-end developer can figure out without much nudging, because the hard work has been done & open sourced already.

                                                        1. 4

                                                          IPFS is not ten years old. Its initial commit was five years ago, and that was the start of the paper, not the implementation.

                                                          1. 1

                                                            Huh. I could have sworn it was presented back in 2010. I must be getting it confused with another DHT system.

                                                      2. 7

                                                        Sure, but it’s sort of a waste of time to try to preserve the web.

                                                        This is letting Perfect be the enemy of Good thinking. We can incrementally improve (imperfectly, true) privacy now. Throwing out everything and starting over with a completely new set of protocols is a multi-decade effort before we start seeing the benefits. We should improve the situation we’re in, not ignore it while fantasizing about being in some other situation that won’t arrive for many years.

                                                        The biggest problem with DNS is that most of the time the actual hostname is totally irrelevant to our purposes & we only care about it because the application-layer protocol we’re using was poorly designed.

                                                        This hasn’t been true since Virtual Hosting and SNI became a thing. DNS contains (and leaks) information about exactly who we’re talking to that an IP address doesn’t.

                                                        1. 2

                                                          This is letting Perfect be the enemy of Good thinking. We can incrementally improve (imperfectly, true) privacy now.

                                                          We can also take advantage of low-hanging fruit that circumvent the tarpit that is incremental improvements to DNS now.

                                                          The perfect isn’t the enemy of the good here. This is merely a matter of what looks like a good idea on a six month timeline versus what looks like a good idea on a two year timeline. And, we can guarantee that folks will work on incremental improvements to DNS endlessly, even if we are not those folks.

                                                          Throwing out everything and starting over with a completely new set of protocols is a multi-decade effort before we start seeing the benefits.

                                                          Luckily, it’s an effort that started almost two decades ago, & we’re ready to reap the benefits of it.

                                                          DNS contains (and leaks) information about exactly who we’re talking to that an IP address doesn’t.

                                                          That’s not a reason to keep it.

                                                          Permanently associating any kind of host information (be it hostname or DNS name or IP) with a chunk of data & exposing that association to the user is a mistake. It’s an entanglement of distinct concerns based on false assumptions about DNS permanence, and it makes the whole domain name & data center rent-seeking complex inevitable. The fact that DNS is insecure is among its lesser problems; it should not have been relied upon in the first place.

                                                          The faster we make it irrelevant the better, & this can be done incrementally and from the application layer.

                                                        2. 2

                                                          But why would IPFS solve it?

                                                          Replacing every hostname with a hash doesn’t seem very user-friendly to me and last I checked, you can trivially sniff out what content someone is loading by inspecting the requested hashes from the network.

                                                          IPFS isn’t mature either, it’s not even a decade old and most middleboxes will start blocking it once people start using it for illegitimate purposes. There is no plan to circumvent blocking by middleboxes, not even after that stunt with putting wikipedia on IPFS.

                                                          1. 1

                                                            IPFS doesn’t replace hostnames with hashes. It uses hashes as host-agnostic document addresses.

                                                            Identifying hosts is not directly relevant to grabbing documents, and so baking hostnames into document addresses mixes two levels of abstractions, with undesirable side effects (like dependence upon DNS and server farms to provide absurd uptime guarantees).

                                                            IPFS is one example of distributed permanent addressing. There are a lot of implementations – most relying upon hashes, since hashes provide a convenient mechanism for producing practically-unique addresses without collusion, but some using other mechanisms.

                                                            The point is that once you have permanent addresses for static documents, all clients can become servers & you start getting situations where accidentally slashdotting a site is impossible because the more people try to access it the more redundancy there is in its hosting. You remove some of the hairiest problems with caching, because while you can flush things out of a cache, the copy in cache is never invalidated by changes, because the object at a particular permanent address is immutable.

                                                            Problems (particularly with web-tech) that smart folks have been trying to solve with elaborate hacks for decades become trivial when we make addresses permanent, because complications like DNS become irrelevant.
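
                                                            As a toy illustration of the content-addressing idea (this is not IPFS’s actual CID scheme, just a plain SHA-256 over the bytes): the address is derived from the document itself, so it is identical no matter which peer serves it, and a cached copy can never go stale.

                                                              /* build with: cc example.c -lcrypto */
                                                              #include <openssl/sha.h>
                                                              #include <stdio.h>
                                                              #include <string.h>

                                                              int main(void) {
                                                                  const char *doc = "hello, permanent web";      /* placeholder document */
                                                                  unsigned char digest[SHA256_DIGEST_LENGTH];
                                                                  SHA256((const unsigned char *)doc, strlen(doc), digest);
                                                                  printf("address: ");
                                                                  for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
                                                                      printf("%02x", digest[i]);                 /* same output on every machine */
                                                                  printf("\n");
                                                                  return 0;
                                                              }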

                                                            1. 1

                                                              And other problems become hard like “how do I have my content still online in 20 years?”.

                                                              IPFS doesn’t address the issues it should be addressing, using hashes everywhere being one of them making it particularly user-unfriendly (possibly even user-hostile).

                                                              IPFS doesn’t act like a proper cache either (unless their eviction strategy has significantly evolved to be more cooperative) in addition to leaking data everywhere.

                                                              Torrent and dat:// solve the problem much better and don’t over-advertise their capabilities.

                                                              Nobody really needs permanent addressing, what they really need is either a Tor onion address or actually cashing out for a proper webserver (where IPFS also won’t help if your content is dynamic, it’ll make things only more javascript heavy than they already are).

                                                              1. 1

                                                                how do I have my content still online in 20 years?

                                                                If you want to guarantee persistence of content over long periods, you will need to continue to host it (or have it hosted on your behalf), just as you would with host-based addressing. The difference is that your host machine can be puny because it’s no longer a single point of failure under traffic load: as requests increase linearly, the likelihood of any request being directed to your host decreases geometrically (with a slow decay via cache eviction).

                                                                IPFS doesn’t address the issues it should be addressing, using hashes everywhere being one of them making it particularly user-unfriendly (possibly even user-hostile).

                                                                I would absolutely support a pet-name system on top of IPFS. Hashes are convenient for a number of reasons, but IPFS is only one example of a relatively-mature named-data-oriented solution to permanent addressing. It’s minimal & has good support for putting new policies on top of it, so integrating it into applications that have their own caching and name policies is convenient.

                                                                IPFS doesn’t act like a proper cache either (unless their eviction strategy has significantly evolved to be more cooperative) in addition to leaking data everywhere.

                                                                Most caches have forced eviction based on mutability. Mutability is not a feature of systems that use permanent addressing. That said, I would like to see IPFS clients outfitted with a replication system that forces peers to cache copies of a hash when it is being automatically flushed if an insufficient number of peers already have it (in order to address problem #1) as well as a store-and-forward mode (likewise).

                                                                Torrent and dat:// solve the problem much better and don’t over-advertise their capabilities.

                                                                Torrent has unfortunately already become a popular target for blocking. I would personally welcome sharing caches over DHT by default over heavy adoption of IPFS since it requires less additional work to solve certain technical problems (or, better yet, DHT sharing of IPFS pinned items – we get permanent addresses and seed/leech metrics), but for political reasons that ship has probably sailed. DAT seems not to solve the permanent address problem at all, although it at least decentralizes services; I haven’t looked too deeply into it, but it could be viable.

                                                                Nobody really needs permanent addressing,

                                                                Early web standards assume but do not enforce that addresses are permanent. Every 404 is a fundamental violation of the promise of hypertext. The fact that we can’t depend upon addresses to be truly permanent has made the absurd superstructure of web tech inevitable – and it’s unnecessary.

                                                                what they really need is either a Tor onion address

                                                                An onion address just hides traffic. It doesn’t address the single point of failure in terms of a single set of hosts.

                                                                or actually cashing out for a proper webserver

                                                                A proper web server, though relatively cheap, is more expensive and requires more technical skill to run than is necessary or desirable. It also represents a chain of single points of failure: a domain can be seized (by a state or by anybody who can social-engineer GoDaddy or perform DNS poisoning attacks), while a host will go down under high load (or have its contents changed if somebody gets write access to the disk). Permanent addresses solve the availability problem in the case of load or active threat, while hash-based permanent addresses solve the correctness problem.

                                                                where IPFS also won’t help if your content is dynamic,

                                                                Truly dynamic content is relatively rare (hence the popularity of cloudflare and akamai), and even less dynamic content actually needs to be dynamic. We ought to minimize it for the same reasons we minimize mutability in functional-style code. Mutability creates all manner of complications that make certain kinds of desirable guarantees difficult or impossible.

                                                                Signature chains provide a convenient way of adding simulated mutability to immutable objects (sort of like how monads do) in a distributed way. A more radical way of handling mutability – one that would require more infrastructure on top of IPFS but would probably be amenable to use with other protocols – is to support append-only streams & construct objects from slices of that append-only stream (what was called a ‘permascroll’ in Xanadu from 2006-2014). This stuff would need to be implemented, but it would not need to be invented – and inventing is the hard part.

                                                                it’ll make things only more javascript heavy than they already are

                                                                Only if we stick to web tech, and then only if we don’t think carefully and clearly about how best to design these systems. (Unfortunately, endemic lack of forethought is really the underlying problem here, rather than any particular technology. It’s possible to use even complete trash in a sensible and productive way.)

                                                                1. 1

                                                                  The difference is that your host machine can be puny because it’s no longer a single point of failure under traffic load: as requests increase linearly, the likelihood of any request being directed to your host decreases geometrically (with a slow decay via cache eviction).

                                                                  I don’t think this is a problem that needs addressing. Static content like the type that IPFS serves can be cheaply served to a lot of customers without needing a fancy CDN. An RPi on a home connection should be able to handle 4 million visitors a month easily with purely static content.

                                                                  Dynamic content, ie the content that needs bigger nodes, isn’t compatible with IPFS to begin with.

                                                                  Most caches have forced eviction based on mutability

                                                                  Caches also evict based on a number of different strategies that have nothing to do with mutability though, IPFS’ strategy for loading content (FIFO last I checked) behaves poorly with most internet browsing behaviour.

                                                                  DAT seems not to solve the permanent address problem at all, although it at least decentralizes services; I haven’t looked too deeply into it, but it could be viable.

                                                                  The public key of a DAT share is essentially like an IPFS target, with the added bonus of having a tracked and replicated history and mutability, offering everything an IPNS or IPFS hash does. Additionally it’s more private and doesn’t try to sell itself as censorship resistant (just look at the stunt with putting Wikipedia on IPFS).

                                                                  Every 404 is a fundamental violation of the promise of hypertext.

                                                                  I would disagree with that. It’s more important that we archive valuable content (e.g., via archive.org or the ArchiveTeam) than to have a permanent addressing method.

                                                                  Additionally the permanent addressing still does not solve content being offline. Once it’s lost, it’s lost and no amount of throwing blockchain, hashes and P2P at it will ever solve this.

                                                                  You cannot stop a 404 from happening.

                                                                  The hash might be the same but for 99.999% of content on the internet, it’ll be lost within the decade regardless.

                                                                  Truly dynamic content is relatively rare

                                                                  I would also disagree with that, in the modern internet, mutable and dynamic content are becoming more common as people become more connected.

                                                                  CF and Ak allow hosters to cache pages that are mostly static like the reddit frontpage as well as reducing the need for georeplicated servers and reducing the load on the existing servers.

                                                                  is to support append-only streams & construct objects from slices of that append-only stream

                                                                  See DAT, that’s what it does. It’s an append-only log of changes. You can go back and look at previous versions of the DAT URL provided that all the chunks are available in the P2P network.

                                                                  Only if we stick to web tech, and then only if we don’t think carefully and clearly about how best to design these systems.

                                                                  IPFS in its current form is largely provided as a Node.js library, with bindings to some other languages. It’s being heavily marketed for browsers. The amount of JS in websites would only increase with IPFS and likely slow everything down even further until it scales up to global, or as it promises, interplanetary scale (though interplanetary is a pipedream; the protocol can’t even handle satellite internet properly).

                                                                  Instead of looking at pipedreams of cryptography for the solution, we ought to improve the infrastructure and reduce the amount of CPU needed for dynamic content; this is an easier and more viable option than switching the entire internet to a protocol that forgets data if it doesn’t remember it often enough.

                                                1. 3

                                                  How the mighty have fallen.

                                                  1. 3

                                                    If it still requires the original shell binary and just calls it with -c, what’s the point?

                                                    The compiled binary will still be dependent on the shell specified in the first line of the shell code (i.e shebang) (i.e. #!/bin/sh), thus shc does not create completely independent binaries.

                                                    shc itself is not a compiler such as cc, it rather encodes and encrypts a shell script and generates C source code with the added expiration capability. It then uses the system compiler to compile a stripped binary which behaves exactly like the original script. Upon execution, the compiled binary will decrypt and execute the code with the shell -c option.

                                                    1. 3

                                                      To hide things from people who don’t know how to use strace, it would appear.
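
                                                      For example, something along these lines (the binary name is made up) should do it; if the script really is handed to the shell with -c as the README says, the decrypted text shows up right in the execve() arguments:

                                                        strace -f -s 100000 -e trace=execve ./protected_binary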

                                                      1. 2

                                                        Yeah, it appears to be just security by obscurity… you are, I guess, supposed to use it to “safely” deploy shell scripts that contain login credentials or something. The script gets scrambled and embedded in the code, then compiled, then unscrambled and executed when run.

                                                        It seems like it’d be easy to do a decryptor by just looking in the text section for a giant blob (the scrambled script), then applying the decrypt routine lifted straight from the github repo.

                                                        I’m actually surprised so many people have contributed to it - I don’t believe there is any way to make what they are doing actually secure, and I’d expect most devs to see through it as well.

                                                        1. 1

                                                          In many OSes having a shell script as an interpreter doesn’t work, so this helps work around the limitation while not requiring a rewrite.

                                                        1. 2

                                                          I apologize for submitting an old link (December 2017) - however, it’s really the most comprehensive write-up of last year’s NaNoGenMo season around. And it seemed like a good time to share here, because 2018’s event will kick off in just a few weeks!

                                                          1. 3

                                                            Without a description of exactly what these things do (especially, forcing your DNS to Cloudflare ?!), this list is just black magic and snake oil.

                                                            1. 3

                                                              I’m going to take the Stack Overflow approach here and suggest that your question is wrong :P

                                                               Maybe you are underestimating point values in sprint grooming! Especially if, like my workplace, we do “agile” but it basically translates to “people just write their own stories and work on their own subject matter anyway” - it’s easy to assign points on a personal scale that isn’t reflective of how others would do it.

                                                              Unless you’re actually missing closing stories on time, you’re probably pulling your weight. Maybe raise the question to Scrummaster / Project Manager and ask for their input on story sizing.

                                                              1. 2

                                                                Hey there! We point stories as a team, so the point value is derived from consensus.

                                                                My stories frequently do not close on time—I feel like my PM avoids giving me challenging work because he kind of (fairly!) sees me as a black hole.

                                                                1. 2

                                                                  Ah, okay - well, hopefully some of the suggestions in this thread can help you out! : )

                                                                  1. 1

                                                                    Haha no worries—thanks for the response. 🙂

                                                                  2. 2

                                                                    We point stories as a team, so the point value is derived from consensus.

                                                                    Does your team share an understanding of what a point means? It’s one of those things that can become quite opaque over time, and usually blends somewhere in between “complexity” (whatever that means) and “time” (what stakeholders mostly care about) and “unknownness” (which isn’t necessarily the same as complexity). Unknowns in particular can be hard to think about, but can often turn into time-sucks.

                                                                    I’m curious, are your points really derived from consensus? There are lots of social behaviours around pointing that can lead to it not being a consensus at all. Here are some I’ve encountered frequently:

                                                                    1. If points aren’t submitted blindly then people just repeat whatever is estimated by someone they either respect, or who they assume understands the issue best, or who talked first/loudest
                                                                    2. A senior figure is skeptical of higher points assigned to a ticket by the team, and talks / strong-arms the estimate down
                                                                    3. Engineers who are less familiar with the problem space of a particular ticket don’t have enough time to think/research/ask questions before pointing occurs, and underestimate their unknowns
                                                                    4. Someone wants to get the meeting over quickly and rushes the team through pointing so they can (eg) get to lunch (don’t have this meeting right before lunch, or at the end of the day!)
                                                                    5. The process for reaching “consensus” when there’s about an even split between (eg) 2 and 4 points isn’t clearly defined, and/or tends to bias the estimate to the lower figure
                                                                    6. The ticket is pointed largely based on the expertise of someone who ends up not being the one working on the ticket, and no adjustment is made
                                                                    7. Some engineers either don’t speak up, or aren’t engaged, or need to be specifically given space in order to voice their thoughts - eg, making the room quiet and asking them directly for their input, and making sure they’re not spoken over by someone else. If this doesn’t happen valuable insights can be lost.

                                                                    I’m sure there are more. In any case, these kinds of things can lead to a situation where the points assigned to the ticket don’t reflect the difficulty or time for the person who’s actually doing the work.

                                                                    And, like everyone else says, points !== value :)

                                                                1. 17

                                                                  This is everything that’s wrong with programmer hiring practices.

                                                                  1. 10

                                                                    “For entry-level roles I give bonus points if there’s some sort of testing, but more experienced roles I penalize candidates who don’t at least list relevant test cases.”

                                                                    No test cases for your whiteboard code? SURPRISE, GOTCHA! What’s next? “I docked points for interviewees who did not also provide an autotools configure.in to build their whiteboard code.”

                                                                    1. 1

                                                                      This is an unfair comparison; knowing how to write good tests is not the same in importance as reciting build rules. Ideally you should be submitting tests alongside code in every commit. It’s a critical piece of SWE knowledge.

                                                                      1. 7

                                                                        Ideally you should be submitting tests alongside code in every commit. It’s a critical piece of SWE knowledge.

                                                                        This right here is religion.

                                                                        And again, that someone doesn’t write a test for their whiteboard doodle doesn’t mean they don’t know how to write good tests. That’s the SURPRISE, GOTCHA! The rules of the game are quite arbitrary.

                                                                    2. 1

                                                                      I couldn’t agree more. Thanks for sharing your thoughts.

                                                                      1. 1

                                                                        But I LIKE these puzzles. I got fixated on the fact that I can’t dial 5 this way.

                                                                        1. 5

                                                                          Oh, me too. But I’m wildly fed up with clever-clever programmers who think that their cute way to encode a dynamic programming problem counts as a valid hiring filter.

                                                                          Extra points are awarded for problems which turn out to have solutions that vastly outperform the dynamic programming one, especially if the problem only happens to be amenable to dynamic programming due to some special features that you’d never see in a real world example.

                                                                          (The other favourite appears to be ‘let’s see if the candidate can spot the graph problem I’ve just described’.)

                                                                          1. 1

                                                                            Oh, totally :)

                                                                            In the past, we’ve used actual problems from our research as a joint white board brainstorming session and used that as an excuse to figure out how the candidate works. It’s possible it unfairly filters out candidates who are more comfortable using a few days to think about a problem and are not so quick verbally.

                                                                            We had a different strategy where we would send out a do at your leisure coding test which would work out more for such types.

                                                                            I don’t think we ever synthesized the two tests meaningfully.

                                                                      1. 5

                                                                        “How do you feel about the deaths in Myanmar and India based on your creation?” “What we really want to do is fix the problem. We really want to get to solutions. I think getting to solutions is important.”

                                                                        Mark Zuckerberg is responsible for the deaths in Myanmar and India because someone used his service in the process? A huge service with a billion users doing all kinds of things in various jurisdictions and cultures? That’s ridiculous. He shouldn’t answer such an aggressive, blame-shifting question. He should’ve roasted the interviewer on the spot. He couldn’t, for the same image reasons that make him dodge things he’s actually guilty of.

                                                                        While we’re at it, are we supposed to hold the food and water suppliers guilty because they kept murderous criminals and governments not only alive but with the energy needed to do their dirty deeds? Should they have profiled every use of their products before letting people eat? Or are these people wanting to achieve popularity by assigning blame only expecting specific companies or groups to have this responsibility that everyone else dodges?

                                                                        1. 4

                                                                          The author of the post has responded to this reading of it.

                                                                          1. 1

                                                                            Has anyone asked TBL the same question? What was his response?

                                                                            1. 2

                                                                              The difference is that TBL can’t shut down the internet, but Facebook could have entirely stopped the use of their platform for this specific event, potentially preventing its massive scale (especially when combined with their ownership of WhatsApp).

                                                                              They could have just shut down their service in certain places! The info about how this stuff was being coordinated is out there. They could have stopped being the active communication platform for massive violence.

                                                                              When the house is on fire, you stop trying to take a nap on the couch.

                                                                              1. 2

                                                                                That easy? What percentage of Indian WhatsApp messages were organizing mob killings?

                                                                                Why didn’t the Indian government simply cut off access to WhatsApp? Isn’t it the government’s job to do that?

                                                                            2. 1

                                                                              “Read it again, man. It was not an accusation. It was a question he flat-out refused to answer.”

                                                                            She was talking to a guy who is being accused of all kinds of stuff and who represents a public company, where anything he says might be used against him. Even something good might turn into an article claiming he only said it for the publicity. Those people have to be careful. From that angle, what she said definitely comes off as a ridiculous accusation setting Zuckerberg up for a heated argument that won’t benefit him. That’s on her for asking such a question.

                                                                              “But he couldn’t even manage to spit out a vaguely-human sounding answer.”

                                                                            She was trolling him for a reaction but expecting Zuckerberg to act humanely. Pot calling the kettle black.

                                                                            If I was trolling him, I’d open with all the times he and Facebook convinced people privacy didn’t matter before he bought all the houses around his for privacy. “Have you changed your mind about the importance of users’ privacy, Mr Zuckerberg? What are your current thoughts about that?” There would be no contesting that he was responsible for his prior speech and recent purchases. At that point, he’d try to weasel out of it or give an interesting response. Maybe even a somewhat sincere one. Claiming he’s morally responsible for any possible use of an automated, global, free service is just going to lead to poker face and/or comments like mine.

                                                                            Edit: Read it earlier. Forgot jwz wasn’t the interviewer but was defending the interview. Fixed the attribution.

                                                                              1. 7

                                                                                The article is trying to capture an example of a common problem among engineers, managers, industry, anyone involved in “tech disruption”. These people and organizations (unconsciously, or willingly) try to disown any responsibility for the things they create and the effects they bring into the world. A much lower-profile example is Harvard researcher Hau Chan, who gave a presentation at a Feb. 2018 conference about using AI to profile potential gang members: when asked during the Q&A about racial bias in training data, or misuse of technology by oppressive regimes or police, he replied “I’m just an engineer” - prompting one attendee to shout “Once the rockets go up, who cares where they come down?” in his best Werner von Braun accent and storm out.

                                                                                Contrast that to Einstein, who felt quite conflicted and ultimately remorseful about his role in bringing forth the theories leading to the development of the atomic bomb. There are several other examples in the jwz thread.

                                                                                My read on this is that the interviewer is asking only half-sincerely, but it also isn’t meant as pure trolling. It’s an attempt to pin down exactly how far Zuckerberg feels his responsibility should extend for the world-disrupting platform he’s created. If he doesn’t feel this is his responsibility, fine, say so. What about election interference? What about helping to perpetuate cyber-bullying and hate crimes? How about fostering organization of white supremacist groups?

                                                                                As his testimony in Congress showed, he’s unwilling to commit on any level to anything even remotely close to responsibility. Which, IMO, fairly leaves him open to questions like this. Perhaps if he (or government regulation, or somebody) would put forth actual boundaries, this question would be moot.

                                                                                Similar questions could and should be asked of any Western company doing business in China, helping them prop up their state surveillance systems. And so on.

                                                                                1. 2

                                                                                  Before I disagree, I want to say your comment makes a lot of sense and you said it all really well. Your goal is admirable. The problem is with the source and execution here.

                                                                                  “My read on this is that the interviewer is asking only half-sincerely, but it also isn’t meant as pure trolling. It’s an attempt to pin down exactly how far Zuckerberg feels”

                                                                                It isn’t. The interviewer knows this if they’ve watched any interviews or studied Mark Zuckerberg. Check this out. Zuckerberg turned down $15 billion because he believed in his company that much and couldn’t be negotiated out of it. I give him props on that since I don’t think I could do it. That’s amazing, crazy, or something. He later went public after dominating that market. He’s rich as hell. He’s also recently paranoid due to elections being manipulated via his service plus tons of negative coverage. That’s the smart, calculating, careful asshole she should know she’s talking to if she’d done the tiniest bit of journalism.

                                                                                Then, she tries to “pin down” how he “feels” by accusing him of being responsible for some stuff in a foreign country he may or may not know about because his service was used at some point. She just keeps hitting him over and over with it by her own admission, a tactic that’s common in corporate media to smear opponents instead of really interviewing them. By using that tactic, she created an instant response in a guy who’s dealt with both drama-creating and real journalists for many years now. That response was to conclude she was working his ass for ratings or just too incompetent for someone on his level. The next move was blocking her attempt whether he looked dumb or good doing it. One guy made a whole Presidency work out by pretending to be dumb. Aloof or dumb is a proven strategy in such situations where arguing back can be seen as “attacking” the interviewer or suggestive in some other way.

                                                                                  “Contrast that to Einstein, who felt quite conflicted and ultimately remorseful about his role in bringing forth the theories leading to the development of the atomic bomb. There are several other examples in the jwz thread.”

                                                                                If Einstein didn’t, someone else would have. Many were closing in. He felt conflicted but probably shouldn’t have. As I got older, I started leaning toward a belief that one shouldn’t feel bad about what other people do with their time/work or yours. It’s on them. Don’t make it easy to do harm if you can avoid it. Past that, don’t worry about it since scheming assholes are a natural part of humanity. It will keep happening whether it’s your work or someone else’s. The reason is the schemers are scumbags. Many are persistent ones, too. Some are so smart and obsessive you could never stop them if you tried. My field, high-assurance security, specifically aims at these people. You’d go nuts if you equated your own responsibility to the successes of the best and brightest attackers. Gotta take it down a notch.

                                                                                  “As his testimony in Congress showed, he’s unwilling to commit on any level to anything even remotely close to responsibility.”

                                                                                Like most Americans who don’t even participate in the political process or vote with their wallet. They mostly act apathetic and selfish. He does, too. He’s just doing it in a way with wide impact that made him a billionaire. I’m not grilling Zuckerberg any harder than the billion users of his surveillance-driven, always-in-the-media-for-scheming-bullshit service who made him rich in the first place. This is both a demand-side and regulatory problem. On the regulation side, there are tons of voters making sure there aren’t any regulations blocking abuse of data, elections, etc. So, my position is fuck the people responsible for creating one Zuckerberg, Henry Paulson, and Bill Gates after another since they don’t give a shit. Some mercy included for those that haven’t learned yet since they didn’t have the opportunity.

                                                                                Just saying: you won’t change anything if the people and/or systems that keep creating these monsters go on creating them without any changes.

                                                                            3. 2

                                                                            The interviewer doesn’t really think Zuckerberg is responsible for genocide in Burma any more than he does. She’s not even trying to get an answer from him. She’s simply performing for her target audience, trying to show everybody that she’s a “real journalist” and willing to “stick it to the man” by trying to get him to say something that could cost the company a pile of cash.

                                                                              1. 1

                                                                                Well said. That’s another possibility that would lead to him having a non-reaction or rational evasion. It ties in with my claim, too, that she was pulling some kind of stunt.

                                                                              2. 2

                                                                              I don’t think jwz is recommending a retributive model of responsibility (wherein we punish people for making decisions that are only clearly poor in retrospect). Instead, looking at how Facebook contributed to the deaths in Myanmar helps to identify red flags, so that Facebook can behave differently next time. This is not really compatible with the purely-forward-looking policy Zuckerberg appears to be promoting – one where Facebook looks only at its own behavior in a vacuum and therefore is almost guaranteed to double down on flawed assumptions. Unexpected social behaviors can only be understood by taking societies into account.

                                                                                1. 1

                                                                                “one where Facebook looks only at its own behavior in a vacuum and therefore is almost guaranteed to double down on flawed assumptions.”

                                                                                That’s the model Facebook is supposed to follow. It’s called capitalism: each party being as selfish as possible, maximizing its own gains, externalizing all losses, and otherwise not giving a shit. Something like half of America votes in favor of capitalism every year. Most of Facebook’s users, especially after many news articles, know it’s a for-profit, publicly-traded, scheming, surveillance company and are supporting its evil behavior by using it. Facebook’s incentives, forced by prioritizing shareholders, will ensure they always scheme on users more over time. The moral solution is to simply avoid Facebook as much as possible in favor of more ethical providers.

                                                                                There have always been more private or morally-assuring ways to communicate. The market, both paid and free, massively votes against those providers in favor of scumbags like Zuckerberg. The choices of consumers and voters have collectively led to his dominance and wealth. As a rational capitalist, he should continue paying lip service while letting other people suffer or die. As utilitarians minimizing harm, jwz and I should be using Facebook as little as possible (necessary evil w/ family), putting as little personal info on there as possible, and sending messages through ethical, private services. Even if contacted on Facebook, we should reply back in another app if possible.

                                                                                This is what I do, with being off Facebook having had an enormous social cost to me. I’m actually going to have to get back on in the future, using it in the ultra-minimal way stated above. Still, I’m standing up for my principles instead of just talking about principles. jwz whining that a capitalist running a publicly-traded surveillance company should care about people more is just a foolish publicity stunt. Even if Zuckerberg improves, his actions will make such an evil company look more desirable to current and future users, perpetuating the evil instead of supporting non-invasive, ethical alternatives. It logically follows that jwz is a hypocrite if he isn’t pushing people off Facebook or to minimize it. Also, all the time he spends whining about Zuckerberg is time he could be promoting alternatives like Mastodon.

                                                                                EDIT: I’ve reposted a version of this comment on the author’s blog, too. Let’s see what happens. Off to work now.

                                                                                  1. 2

                                                                                    I largely agree.

                                                                                    However, as you mentioned, Facebook users are largely both aware of and accepting of the foundations of Facebook’s existence: they know that Facebook is a centralized capitalist enterprise that continues to exist only by selling their information, & they think all of that is worthwhile.

                                                                                    It’s possible to want people off Facebook while also wanting Facebook to be more mindful of the way the remaining users are managed. At the same time, pointing to partial responsibility in political turmoil (and painting Zuckerberg in particular as uncaring), even if it’s a little misleading, is liable to get people off Facebook without needing them to recognize more general problems with capitalism (or even swallow ideas that would make them less comfortable with Google).

                                                                                    Incremental change isn’t incompatible with radical change: when radical change isn’t viable, incremental change is all that’s possible, while incremental change can be pursued in conjunction with more radical experiments. (Having no safety net doesn’t encourage people to go all-in on risky propositions: it discourages them from taking any risks at all.)

                                                                                  With regard to Mastodon: jwz writes a lot of blog posts & I have no idea whether or not he uses or promotes it. I get the impression that his readership is more culturally similar to lobste.rs than to mastodon: lots of people who work in tech, not a whole lot of gay communist furries. I’m not sure how well a big push from jwz would go. (There were big pushes from j3st3r & John McAfee, toward patched instances where federation was broken, and I don’t think either lasted longer than 48 hours. jwz is not j3st3r but neither is he the IWW.)

                                                                              1. 3

                                                                                Is OCaml making something of a comeback, or is this some Baader-Meinhof stuff? I just started working with it a bit to do a new plugin for LiquidSoap, and suddenly it seems it’s all over my feeds.

                                                                                1. 5

                                                                                I’m not sure, but I think ReasonML might be raising a bit of interest and/or awareness. It certainly has for me - ReasonML and ReasonReact are about at the top of my new-things-to-try list.

                                                                                  1. 3

                                                                                  i think even before that, ocaml has been making a steady if gradual comeback over the last few years. opam, for instance, has been a pretty big boost for it (never underestimate the value of a good package manager in growing an ecosystem!), jane street’s dune is really exciting since build systems have always been a bit of a weak spot, and more recently, bucklescript has been attracting a lot of attention among the webdev crowd even before reason came along (and now, of course, reason/bucklescript integration is a pretty big thing)

                                                                                    1. 1

                                                                                    Sadly I’ve never been able to figure out Opam… it seems to mutate the global package environment every time you want to build something, just like Cabal does (although they are fixing it with the cabal new-* commands). This is a massive pain if you want to work on different projects at once, and makes it hard to remember what state your build is in. Wish they would learn a bit from Cargo and Yarn and make it more user friendly - so much good stuff in OCaml and Coq that I’d love to play around with, but it’s wrapped up in a painful user experience. :(

                                                                                      1. 3

                                                                                        are you using opam to develop projects foo and bar simultaneously, and installing foo’s git repo via opam so that bar picks it up? that is indeed a pain; i asked about it on the ocaml mailing list once and was recommended to use jbuilder (now dune) instead, which works nicely.

                                                                                        1. 1

                                                                                          just came across this, might be useful: https://esy.sh/

                                                                                    2. 4

                                                                                    OCaml is such a well designed programming language with such wide reaching influence that I hope people continually rediscover it and that it does make a comeback.

                                                                                      1. 3

                                                                                      Andreas Baader and Ulrike Meinhof co-founded a left-wing terror group in 1970 in Germany that killed over 30 people. I can’t see a connection here, but maybe you could elaborate on your comment.

                                                                                        1. 6

                                                                                          It’s a synonym for the “frequency illusion” in some circles, the illusion of something being more common recently when it’s just that you started noticing it more recently.

                                                                                          1. 2

                                                                                            I don’t get the connection.

                                                                                            1. 3

                                                                                              It’s like the Streisand Effect or the Mandela Effect: named because of some phenomenon around an event that an onlooker noticed and popularized, not because of any connection to the person themselves.

                                                                                              https://science.howstuffworks.com/life/inside-the-mind/human-brain/baader-meinhof-phenomenon.htm

                                                                                              “Now if you’ve done a cursory search for Baader-Meinhof, you might be a little confused, because the phenomenon isn’t named for the linguist that researched it, or anything sensible like that. Instead, it’s named for a militant West German terrorist group, active in the 1970s. The St. Paul Minnesota Pioneer Press online commenting board was the unlikely source of the name. In 1994, a commenter dubbed the frequency illusion “the Baader-Meinhof phenomenon” after randomly hearing two references to Baader-Meinhof within 24 hours. The phenomenon has nothing to do with the gang, in other words. But don’t be surprised if the name starts popping up everywhere you turn [sources: BBC, Pacific Standard].”

                                                                                        2. 2

                                                                                        I also seem to see it more often recently. Either Baader-Meinhof too (umm; or “reverse Baader-Meinhof”? seems I’m getting pulled in thanks to recent exposure?), or I have a slight suspicion that ReasonML may have contributed to some increase in OCaml public awareness. But maybe also Elm, and maybe F# too?

                                                                                        1. 3

                                                                                        I often go from hobby to hobby. In the past, I’ve tinkered with electronics, read a lot about programming topics (especially language theory), and done more gaming than I care to admit. Lately, I’ve been:

                                                                                        Brewing both beer and now wine/cider. I started with a kit, then faked my way through a couple of recipes. Now I’ve got 2 beers conditioning in bottles and just finished reading my third book about the hobby. Hoping to finally get to the point where I can make something drinkable and dial in my consistency. I’ve also got a muscadine wine from some grapes we picked, a Welch’s concentrate wine (with $6 in it in total), and an apple cider from some leftover apple juice.

                                                                                          Reading is something I’m trying to get into. I used to read a lot (of technical books) when I was younger, but I find myself sticking to digital media more often, which I’d like to cut down on. With the brewing hobby and books I’ve read, I’m beginning to really like reading physical books again. I just need to find a library nearby so that this hobby doesn’t get so expensive.

                                                                                          Kitten care is a new one for me. We’ve had a kitten before, but my fiancee took care of him mostly. With our new kitten, I’ve been trying to take more of a role in taking care of her. Still learning, but so far it’s been fun.

                                                                                          1. 2

                                                                                          Good luck with the Welch’s wine. Have you made it before? I did a batch and while it was a “success” in that it “produced alcohol”, it also tasted like… well, grape juice. (Also, I hope you didn’t add any acid: in my second batch I tried a bit, and it ended up tasting like stomach juice. Yuck).

                                                                                            Now, if you haven’t tried EdWort’s Apfelwein - THAT is a cheap and tasty treat. I need to make 10 gallons of that next time I get a chance. I love that stuff.

                                                                                            1. 1

                                                                                              Thanks! I’ve never made it before but the little bit I’ve tasted as it’s been fermenting has been good, albeit very sweet still. I didn’t add acid to this one, but that’s good to know that it becomes overwhelming very quickly. How much did you add to yours / how many gallons?

                                                                                            I think that my cider that’s fermenting is pretty close to Apfelwein, at least based on the homebrewtalk post I just found (which only said apple juice, sugar, and yeast for the ingredients). Definitely excited to give that one a try!

                                                                                          1. 2

                                                                                            Neat post. It brought up memories of solving the Knight’s Tour back in my high school days using recursion.

                                                                                            I commented on the post, but here is my solution, in Perl. It uses a heuristic of visiting the least-reachable squares first, so it is very, very much faster than the brute-force solution (rough sketch of the idea below).

                                                                                            https://github.com/greg-kennedy/100Hops
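
                                                                                            To be clear, what follows isn’t the code from that repo, just a rough Perl sketch of the least-reachable-square-first idea (Warnsdorff’s rule) on a plain 8x8 board, which is the easiest place to see the heuristic at work:

                                                                                            #!/usr/bin/perl
                                                                                            use strict;
                                                                                            use warnings;

                                                                                            # Warnsdorff-style knight's tour on an 8x8 board: always jump to the
                                                                                            # unvisited square that itself has the fewest unvisited onward squares.
                                                                                            my @offsets = ([1,2],[2,1],[2,-1],[1,-2],[-1,-2],[-2,-1],[-2,1],[-1,2]);
                                                                                            my @board   = map { [ (0) x 8 ] } 1 .. 8;

                                                                                            # list the unvisited squares reachable from ($r, $c)
                                                                                            sub moves_from {
                                                                                                my ($r, $c) = @_;
                                                                                                my @candidates = map { [ $r + $_->[0], $c + $_->[1] ] } @offsets;
                                                                                                return grep { $_->[0] >= 0 && $_->[0] < 8 && $_->[1] >= 0 && $_->[1] < 8 && !$board[ $_->[0] ][ $_->[1] ] } @candidates;
                                                                                            }

                                                                                            my ($r, $c) = (0, 0);    # starting corner is arbitrary
                                                                                            $board[$r][$c] = 1;
                                                                                            for my $step (2 .. 64) {
                                                                                                # rank candidate squares by how many onward moves each one has left
                                                                                                my ($next) = sort { scalar(moves_from(@$a)) <=> scalar(moves_from(@$b)) } moves_from($r, $c);
                                                                                                die "dead end at step $step (the heuristic is greedy, not guaranteed)\n" unless $next;
                                                                                                ($r, $c) = @$next;
                                                                                                $board[$r][$c] = $step;
                                                                                            }

                                                                                            print join(' ', map { sprintf('%2d', $_) } @$_), "\n" for @board;

                                                                                            The greedy pick usually finds a tour straight away, where blind backtracking can churn for a very long time; the die is there because the heuristic isn’t guaranteed to succeed from every starting square or tie-break order.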

                                                                                            1. 10

                                                                                              I feel bad that this even needs to be written. Truly a sign of “developers chasing trends off a cliff”.

                                                                                              Can’t wait for the next round of these articles, “Why you should use SQL instead of Blockchain”.

                                                                                              1. 6

                                                                                                I feel bad that this even needs to be written. Truly a sign of “developers chasing trends off a cliff”.

                                                                                                Maybe I’m mean, but I’m kinda happy to let people suffer the natural consequences of following hype over need and truly feel the pain of getting this wrong. Otherwise, how are they going to learn?

                                                                                                1. 3

                                                                                                  Seeing examples of others following similar hype and enduring the pain. I try doing that where possible. It rarely works, though. Most developers seem to need first-hand experience a lot of the time.

                                                                                                2. 1

                                                                                                  SQL is a good concept wrapped in horrible syntax, and, according to C. J. Date, it’s a botched version of a truly great concept anyway.