1. 2

Nice write-up. Better … I now know about Elfeed, an Emacs feed reader. https://github.com/skeeto/elfeed

    1. 1

      I’m interested to see what you think of elfeed.

      I tried it a while back but eventually gave up on it. Configuration was a pain and it didn’t match my “workflow” very well. And since the author’s moved to Vim, it seemed unlikely to improve much.

      It wasn’t terrible, I just couldn’t get used to it.

      1. 1

        Maybe he came back. The repo shows a commit 2 days ago.

        I just started using it. The jury’s still out on whether I like it. So far, not bad.

    1. 1

I would love to know the validity of this claim. It seems fishy that a patent was filed but no white paper was submitted to a journal for peer review (that I can find). If anyone with more expertise can provide their take on the matter, I would greatly enjoy it!

      1. 19

        The inventor has a website called boundedfloatingpoint.com. There he describes it in a bit more detail than the article, but not much.

        Note carefully how he describes it:

        This invention provides a device that performs floating point operations while calculating and retaining a bound on floating point error.

        And “[t]his invention provides error notification by comparing the lost bits”.

It’s a solution to the problem of “unreported errors”. His solution provides extra fields in the floating point representation to carry information about “lost bits” and allows the operator to specify how many significant digits must be retained before an error is flagged.

        This is an advantage over the current technology that does not permit any control on the allowable error. This invention, not only permits the detection of loss of significant bits, but also allows the number of required retained significant digits to be specified.

        At a cursory glance one might be inclined to think he’s solved the problem of floating point, but the reality is he’s developed a standard for communicating error in floating-point operations that can be implemented in hardware.
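To make that concrete, here’s a toy software analogue (my own sketch, with made-up names, not the patented hardware encoding): pair each value with a running error bound, and flag a result once fewer significant decimal digits survive than the operator demanded.

-- Toy analogue of bounded floating point: a value plus a running
-- error bound. Names here are mine, not the patent's.
data BFloat = BFloat { bfVal :: Double, bfErr :: Double }

-- Addition propagates both operands' bounds plus the rounding error
-- of the sum itself (about half an ulp, i.e. |s| * 2^-53 for doubles).
addB :: BFloat -> BFloat -> BFloat
addB (BFloat x ex) (BFloat y ey) = BFloat s (ex + ey + abs s * 2 ** (-53))
  where s = x + y

-- True while at least n significant decimal digits remain.
sigOk :: Int -> BFloat -> Bool
sigOk n (BFloat v e)
  | e <= 0    = True
  | v == 0    = False
  | otherwise = logBase 10 (abs v / e) >= fromIntegral n

The hardware version carries the bound in extra fields of the word itself, but the idea is the same: the bound travels with the value through every operation.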

        Not to detract from his solution, but it doesn’t seem like he’s invented anything that will surprise hardware designers.

        1. 7

          Thank you for that analysis. This is a real problem with floating point numbers, but hardly the only one.

          People who haven’t seen it might be interested in this post from last year about a new number representation called “posits”, which addresses some completely orthogonal issues with fixed-size number representations. :)

          1. 1

            Nice! Thanks for the link.

          2. 1

It’s a solution to the problem of “unreported errors”. His solution provides extra fields in the floating point representation to carry information about “lost bits” and allows the operator to specify how many significant digits must be retained before an error is flagged.

SIGNAL ON LOSTDIGITS;  /* raise the LOSTDIGITS condition when an operation loses significant digits */
NUMERIC DIGITS 10;     /* carry 10 significant digits in arithmetic */
NUMERIC FUZZ 2;        /* ignore the last 2 digits when comparing for equality */
            

            We just need to do all our math in REXX.

        1. 4

Aha, glad to see more people thinking of replacing the Prelude, especially Snoyman!

The Foundation project aims towards the same goal, but I guess having an FP Complete-backed alternative cannot hurt!

[edit] Right, posted this comment too soon! This is a different approach: they plan to actually reuse the existing libraries. This is definitely nice; hopefully the Prelude problem will be fixed 2–3 years from now.

          1. 3

What “Prelude problem”?

            1. 5

              The project README more or less states the problem:

              The RIO module works as a prelude replacement, providing more functionality and types out of the box than the standard prelude (such as common data types like ByteString and Text), as well as removing common “gotchas”, like partial functions and lazy I/O. The guiding principle here is:

              • If something is safe to use in general and has no expected naming conflicts, expose it from RIO
              • If something should not always be used, or has naming conflicts, expose it from another module in the RIO. hierarchy.

Snoyman and FP Complete are trying to move Haskell more in the direction of a batteries-included solution for software development. The Haskell Foundation project mentioned by @NinjaTrappeur above is attempting the same thing.

Many of the changes RIO makes as a Prelude replacement solve problems beginners don’t know they have until they’ve been coding a while in Haskell. Using String rather than Text or ByteString is one of the most common mistakes beginners make. And why shouldn’t they? It’s right there in the “base” of the language. And then you learn, often after months of coding, that String performance is a disaster.
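A minimal illustration of the difference (my example, not from the RIO docs): String is [Char], a lazy linked list with one cons cell and one boxed Char per character, while Text is a packed buffer, so bulk operations stay cheap.

{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
  let s = "hello, world" :: T.Text   -- packed, thanks to OverloadedStrings
  TIO.putStrLn (T.toUpper s)         -- processed without unpacking to [Char]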

Whether RIO is the right solution, time will tell. That it’s a step in the right direction is beyond doubt.

              1. 5

                I personally use Protolude. The problems it solves (for my projects, on my computer, for my use cases) are:

• head :: [a] -> a becomes head :: [a] -> Maybe a (and all the other partial functions that throw error “message”, like tail and so on…); a minimal sketch follows this list
                • everything is Text
• convenience functions that I used to copy into all my projects, for example toS, which converts from any string-like type (Text, String, ByteString, …) to any other
• foldl, head, … work on Foldable structures, not just lists
• a lot of other stuff that I’m missing off the top of my head
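To give a flavor of the first point, a total head in the Protolude style looks something like this (a minimal sketch against the standard Prelude):

import Prelude hiding (head)

-- Total version: an empty list yields Nothing instead of the runtime
-- "Prelude.head: empty list" error.
head :: [a] -> Maybe a
head []    = Nothing
head (x:_) = Just x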
                1. 2

afaik, it’s the issues connected with the standard Prelude: either inefficient data structures (String is defined as [Char], i.e. a linked list) or a simple lack of utilities, which are then commonly installed by many users (e.g. Data.Vector). Many other “alternative” preludes have tried to replace the standard one, but most can’t manage to get any significant adoption.

                  That’s at least what I understand when someone says “Prelude problem”.

                  1. 2

The Foundation README gives some information about this “problem”; RIO gives other arguments. The two main issues people have with Prelude are partial functions and lazy IO, as far as I can tell.

                    1. 1

                      @akpoff, @zge and @lthms pretty much summed up the problem.

I would also add another problem class: “legacy semantics”.

                      [EDIT] The following statement is wrong.

The most notable offender is the Monad typeclass. As it is defined in base (the Prelude re-exports parts of the base library), Applicative is not a superclass of Monad. Those two typeclasses are actually completely unrelated as implemented. In other terms, you could end up with a Monad that is not an Applicative. Some people are trying to fix that directly in base, some are trying to fix it in external libraries such as Foundation.

In the end, it is not such a big deal for an intermediate/experienced developer; however, it is quite confusing for newcomers. Not knowing what you can safely use from the standard library is not a nice user experience, in my opinion.

[Edit] As a side note, I am saddened to see that return is preferred over pure. The name has a totally different meaning in procedural languages (i.e. 90% of languages); using it in this context is just a constant source of confusion for newcomers…
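A small example of the overlap (mine, not from the thread): return in Haskell is just pure restricted to Monad, and neither exits a function early the way return does in procedural languages.

-- return and pure are interchangeable here: both just wrap a value.
-- Neither one causes an early exit from the do block.
halve :: Int -> Maybe Int
halve n = do
  x <- if even n then Just (n `div` 2) else Nothing
  pure x   -- identical in meaning to: return x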

                      1. 1

                        Applicative has been a superclass of Monad for quite some time in GHC base. I disagree with the change (breaks compatibility) but understand why they did it.

                        1. 1

                          Oh yes, you’re right, my bad!

See Monoid and Semigroup; the problem is quite similar.
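The parallel, as I understand it: just as Applicative became a superclass of Monad with the AMP, Semigroup became a superclass of Monoid in GHC 8.4 (base 4.11), so an instance now gets written in two layers:

newtype MaxInt = MaxInt Int

-- Post-change, the Semigroup instance is required;
-- Monoid's mappend then defaults to (<>).
instance Semigroup MaxInt where
  MaxInt a <> MaxInt b = MaxInt (max a b)

instance Monoid MaxInt where
  mempty = MaxInt minBound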

                  1. 3

This is pretty sleazy. NVidia already made it hard enough to use consumer cards in lights-out scenarios: some settings can only be adjusted via the GUI control panel, whereas the drivers for the workstation cards can be adjusted from the command line. But adding the prohibition to the driver EULA is pretty low.

                    1. 2

I’m adding spfwalk as a tool to spf_fetch, along with support for both spfwalk and the smtpctl version. The scripts in spf_fetch, together with a bit of config and cron(8), handle the work of updating pf(4) records.

                      1. 2

This is crazy. I chalked the root-without-password problem up to Apple not caring much about macOS anymore, since they make so much more money from mobile devices. But this makes me think they just gave up.

                        1. 2

                          Once an intruder gains access to the user’s iPhone and knows (or recovers) the passcode, there is no single extra layer of protection left.

Is this such a big deal? With physical access to the device, anything is one password away. What do you expect once an intruder gains access to your MacBook and knows (or recovers) the password? The whole point of two-factor authentication is just that, in addition to knowing the password, you need physical access to a trusted device.

                          1. 7

The issue is that before, Apple would require you to enter other passwords to accomplish certain actions.

                            For example, before if you wanted to change your backup encryption password Apple would require you to type it in. Otherwise it would refuse to reset the password.

Now you can remove the backup encryption password without entering the old password first. This allows the possessor to create a new backup password, back the phone up, grab the backup, and with other tools dump all the other passwords in the Keychain, like all their Safari-saved passwords for other websites.

In this new world, if the user happens to have set up two-factor auth, from the compromised device they can (quoting the article):

                            • Change the user’s Apple ID password
                            • Remove iCloud lock (then reset and re-activate the iPhone on another account)
                            • Discover physical location of their other devices registered on the same Apple account
                            • Remotely lock or erase those devices
                            • Replace original user’s trusted phone number (from then on, you’ll be receiving that user’s 2FA codes to your own SIM card)
                            • Access everything stored in the user’s iCloud account

All this because you have the device in hand and either guessed or coerced the PIN from its owner. Whereas before, Apple had layers to their security model. With an iOS 11 device you can totally own everything Apple they own, and possibly a lot more.

                            That’s why it’s a big deal.

                            1. 3

                              With physical access to the device, anything is one password away.

                              I would not expect physical access to the device alone to yield administrator-level control of my iCloud account and the ability to wipe any of my other devices (to which the attacker did not have physical access). I think that’s a genuinely non-intuitive behavior.

                          1. 2

Anyone else amazed that WIPO handed down their decision in less than 3 months? Is that normal?

                            1. 12

                              Try http://paperswelove.org.

Papers We Love is a repository of academic computer science papers and a community who loves reading them.

It’s a git repo of classic CS papers and a website for people who want to get together to discuss them. Some of the papers are easy, some hard, but it’s a treasure trove to work your way through.

                              1. 6

                                Thanks for the PWL shout-out. PWL was started as a reading group within our old company, and we began with Out of the Tarpit and Communicating Sequential Processes. Those were the first papers I read, and I found them widely applicable, interesting, and approachable.

                                1. 3

In the same spirit, I really enjoy Fermat’s Library. I don’t really read the annotated versions most of the time, as the website is set up in a way that makes it annoying for me (which is a shame, since the comments are normally pretty great), but I use it mostly as a feed of interesting papers to read in different areas.

                                  1. 1

It’s a huge repo; any guidelines for finding the ones that are easy/beginner-friendly?

                                    1. 6

                                      The section on design is a good start, especially “No Silver Bullet” (NSB) and “Out of the Tarpit” mentioned above.

Fred Brooks, who wrote NSB, also wrote another classic before that called “The Mythical Man-Month”, which is sold on Amazon as a collection of papers, including NSB.

“A Mathematical Theory of Communication” by Claude Shannon is the foundational work in information theory.

“Reflections on Trusting Trust” is a great read and eye-opening as an intro to security.

“The Unix Time-Sharing System” by Dennis Ritchie and Ken Thompson is the classic intro to Unix.

                                      In addition to these and others from PWL, I’d also add Dijkstra’s paper “Go To Statement Considered Harmful”. Whether you agree or disagree, it’s the seminal paper on structured programming.

I’d start with those, then follow interesting footnotes and references to see where they lead you.

                                  1. 1

I added auto-detection and joining of known wireless networks to netctl. It’s not ready for boot time, but it works once booted. In particular, it works well with apmd(8) scripts like resume.

I also added some quickstart details to the README.md in the GitHub repo.

                                    1. 1

Are you aware of Arch Linux’s netctl project? It’s based around systemd so it’s not portable to *BSD, but the UX might be useful to draw inspiration from. Also, it’s an established project with the name ‘netctl’. :)

                                      1. 1

Not until just now. Given that systemd will never be ported to OpenBSD, I’m OK with the name collision. ;-)

                                        Thanks for the UX ideas!

                                      1. 6

Working on a CLI network configuration manager for OpenBSD. So far it can enable/disable/start/stop/restart interfaces as well as create/destroy and switch between locations. Locations are currently limited to just one WAP config for wireless interfaces. I’m tidying up the man page later today.

I have some older code that will connect to the strongest WAP from a group of configurations. I’ll integrate that next.

It’s not a replacement for ifconfig(8)… it’s written in pure shell (using no commands outside of shell, /bin and /sbin), modeled after rcctl(8). I think I can get automated WAP connecting working at boot time. Keeping it in pure shell (or C) is the only way to ensure it will work at boot time.

I’m using it on my laptop to switch between work and home. I should have a preview ready for general consumption pretty soon.

                                        1. 1

                                          Are you aware of nsh? http://www.nmedia.net/nsh/

                                          1. 1

                                            No, I don’t recall hearing anything about it before.

                                            nsh looks nice, but it requires changing your system from default. It replaces the functions of netstart(8) and some parts of rc(8). You have to delete things like /etc/hostname.* and /etc/mygate and give nsh management of some networking daemons.

                                            My project is less ambitious. It’s modeled after rcctl(8) and works with netstart(8). Other than creating symlinks to manage hostname.if(5) configuration files, it’s bog-standard OpenBSD.

                                            Still, very cool stuff.

                                            1. 1

                                              I like the sound of it. Please post a link on the board when you decide to release it.

                                              1. 2

                                                Thanks. I’ll definitely post it here, hopefully in a couple of days.

                                                I’ll also post it to http://github.com/akpoff.

                                                1. 2

Initial version posted on GitHub, and article submitted to lobste.rs.

                                                  https://github.com/akpoff/netctl

                                                  https://lobste.rs/s/di8j1j/netctl_cli_network_location_manager

                                          1. 12

Chen’s blog post is interesting both in what it references (McIlroy critiquing Knuth) and in what it misses in that exchange.

In short, in 1986 Jon Bentley asked Donald Knuth to demonstrate literate programming by implementing a word-frequency program (find the n most common words in a text), which would then be critiqued by Doug McIlroy. Knuth delivered a beautiful example of literate programming in Pascal, 10 pages worth. McIlroy, in addition to his critique, delivered a six-stage shell pipeline that accomplished the same thing without intermediate values… a purely functional implementation, as Chen describes it.

                                            McIlroy, among other comments, ends his critique with:

                                            Knuth has shown us here how to program intelligibly, but not wisely. I buy the discipline. I do not buy the result. He has fashioned a sort of industrial-strength Fabergé egg—intricate, wonderfully worked, refined beyond all ordinary desires, a museum piece from the start.

                                            That’s the background.

Chen takes up the topic because he’s intrigued that McIlroy’s solution is purely functional, wondering how he’d do the same today. He writes his solution in Haskell in two variations: “standard” and literate. As a Haskell implementation, it’s effective. Chen then discusses the advantages of both and comes down on the side of “standard” rather than literate. Had he left it at that, it would be an interesting bit of Haskell.
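For reference, McIlroy’s pipeline was six stages: tr to split out words, tr to lowercase, sort, uniq -c, sort -rn, and sed to take the top n. A Haskell rendering in the same spirit (my sketch, not Chen’s actual code) maps each stage onto one function in a composition:

import Data.Char (isAlpha, toLower)
import Data.List (group, sort, sortBy)
import Data.Ord (Down (..), comparing)

-- The n most frequent words, one composed function per McIlroy stage.
wordFreq :: Int -> String -> [(String, Int)]
wordFreq n = take n                                            -- sed ${1}q
           . sortBy (comparing (Down . snd))                   -- sort -rn
           . map (\ws -> (head ws, length ws))                 -- uniq -c
           . group . sort                                      -- sort
           . words                                             -- tr -cs A-Za-z '\n'
           . map (\c -> if isAlpha c then toLower c else ' ')  -- tr A-Z a-z

(head on group’s output is safe here: group never yields empty lists.)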

                                            A curious exchange in the comment section brings the discussion back to McIlroy’s critique of Knuth. Dorin B takes Chen to task for misunderstanding McIlroy’s point:

                                            You missed the point in McIlroy’s sollution: to use reusable components.

                                            Chen then replies:

                                            No, I think I illustrated exactly the point that McIlroy was making, and I believe that if you emailed him, he would completely agree with me today. … Note how every single line in my Haskell program is in fact a reusable component.

Chen completely misses Dorin’s point: for McIlroy, “reusable components” isn’t about functions or sub-routines but about composable tools. Dorin’s right.

In the interview with McIlroy that Chen posted, the question that segues into his critique of Knuth’s solution begins with how pipes effectively invented the concept of the tool. McIlroy says:

                                            McIlroy: Yes. The philosophy that everybody started putting forth: “This is the Unix philosophy. Write programs that do one thing and do it well. Write programs to work together. Write programs that handle text streams because that is a universal interface.” All of those ideas, which add up to the tool approach, might have been there in some unformed way prior to pipes. But, they really came in afterwards.

                                            MSM: Was this sort of your agenda? Specifically, what does it have to do with mass produced software?

                                            McIlroy: Not much. It’s a completely different level than I had in mind. It would nice if I could say it was. (Laughter) It’s a realization. The answer is no. I had mind that one was going to build relatively small components, good sub-routine libraries, but more tailorable than those that we knew from the past, that could be combined into programs. What has… the tool thing has turned out to be actually successful. People just think that way now. That’s providing programs that work together. And, you can say, if you if stand back, it’s the same idea. But, it’s at a very different level, a higher level than I had in mind. Here, these programs worked together and they could work together at a distance. One of you can write a file, and tomorrow the other one can read the file. That wasn’t what I had in mind with components. I had in mind that … you know, the car would not be very much use if its wheels were in another county. They were going to be an integral part of the car. Tools take the car and split it apart, and let the wheels do their thing and then let the engine do its thing and they don’t have to do them together. But, they can do them together if you wish.

                                            MSM: Yeah. I take your point. If I understand it correctly, and think about it, a macro based on a pipeline is an interesting thing to have in your toolbox. But, if you were going write a program to do it, you wouldn’t just take the macro, you’d have to go and actually write a different program. It wouldn’t be put together out of components, in that sense.

                                            McIlroy: So, when I wrote this critique a year or two ago of Knuth’s web demonstration. Jon Bentley got Knuth to demonstrate his web programming system, which is a beautiful idea. …

                                            Now, in 1968, I would have thought he was doing just right. He was taking this sub-routine and that sub-routine, and putting them together in one program. Now, I don’t think that is just right. I think that the right way to do that job is as we do it in Unix, in several programs, in several stages, keeping their identity separate, except in cases where efficiency is of extreme importance. You never put the parts into more intimate contact. It’s silly. Because, once you’ve got them there, it’s hard to get them apart. You want to change from English to Norwegian, you have to go way to the heart of Knuth’s program. You really ought to be able to just change the pre-processors that recognize this is a different alphabet.

                                            For Chen to then argue that his Haskell implementation illustrates exactly McIlroy’s point shows Chen either didn’t read what McIlroy had to say about it, or doesn’t understand it. That’s not to say McIlroy is against functions, sub-routines or software toolboxes. But that’s not the point McIlroy was making.

                                            Of course Chen isn’t alone in misunderstanding Unix and what Thompson, Ritchie, Kernighan, McIlroy and many others achieved with it. In a nutshell this is what distinguishes many BSD users from Linux users.[1] BSD isn’t merely about POSIX, nor is it about avoiding Windows and other proprietary software (important as those goals may be). BSD is mostly about the Unix philosophy.[2]

On the whole, Chen’s discussion of literate vs. “standard” programming is interesting. As a Haskell programmer, I find his solution informative. As a commentator on McIlroy or the Unix philosophy, I’ll look elsewhere.

[1] That’s not to say many Linux users aren’t interested in the Unix philosophy or that all BSD users are Unixphiles. Setting aside criticisms about security, implementation and what have you, the issue many Linux users have with systemd is that it isn’t Unix-like.

                                            [2] Yes, the various BSDs differ a bit on how that looks and how rigorously to pursue it.

                                            Edit: Fix formatting.

                                            1. 2

                                              I think that the right way to do that job is as we do it in Unix, in several programs, in several stages, keeping their identity separate, except in cases where efficiency is of extreme importance.

                                              We often must ditch this lots-of-separate-programs approach whenever efficiency is of more than negligible importance.

                                              1. 0

I admit I know nothing of the wider context; all I know is what you’ve posted here. But from what you’ve written, it sounds like Chen is presenting the 21st-century McIlroyian view, which may not be the same as the original in concrete terms but is the same spirit rebased on today’s technology.

                                                1. 3

                                                  I think Chen is trying to cast McIlroy that way, but for it to be the 21st-century version of McIlroy’s argument, reusable software components would have to come after tools. But that’s not how it went. Indeed, McIlroy says as much in his critique of Knuth:

                                                  Now, in 1968, I would have thought he was doing just right. He was taking this sub-routine and that sub-routine, and putting them together in one program. Now, I don’t think that is just right. I think that the right way to do that job is as we do it in Unix, in several programs, in several stages, keeping their identity separate, except in cases where efficiency is of extreme importance.

                                                  Chen is saying more than is warranted based on the support he provides (the linked interview). It’s one thing to write something like: “In the same spirit as McIlroy’s reusable tool approach…” It’s another to write:

                                                  No, I think I illustrated exactly the point that McIlroy was making, and I believe that if you emailed him, he would completely agree with me today.

                                                  Again, that’s not to say McIlroy would disagree with a reusable-component approach to software development.

The point is that McIlroy was making a very specific critique of Knuth’s program, based on the value of tools over in-program reuse and composition, and Chen completely missed it. He then asserts McIlroy would agree with him that writing software with reusable components is the point McIlroy was making.

                                                  Chen misunderstands McIlroy, and worse, imputes his misunderstanding to McIlroy.

                                              1. 1

Is this accurate? Some of those screens were up on HN a few days ago (specifically the JSON data) and people had some legit concerns that the data wasn’t the real leaked data.

                                                Is the actual leaked information up anywhere? I’m surprised at how little information there is on exactly how they even knew about the data breach.

                                                And the author of the posts says Equifax blamed Apache, but actually it was Apache Struts, a specific Java web framework (which I myself used at one point and haven’t touched since 2009).

So what really happened, and how did people find out about the actual hack? Those are still questions I haven’t seen a reliable answer to anywhere. This page has a lot of screens that may explain it (although I thought the admin/admin thing was a totally separate issue in Argentina?)

                                                1. 1

                                                  The odds-on favorite is a Struts bug discovered and tagged as CVE-2017-5638 in March of this year.

                                                  Originally some thought it was this Struts issue but the Equifax breach was discovered in July. If it was CVE-2017-9805, then it would have been a zero-day.

And depending on when Equifax was first breached, the use of CVE-2017-5638 might have been a zero-day at the time. And to be mildly fair to Equifax, fixing CVE-2017-5638 is non-trivial because it can require rebuilding numerous packages.

                                                  Still, from what we’ve seen in their handling of the breach, Equifax don’t appear to have a mature security culture. There may have been more than one door in.

Update: I originally wrote that CVE-2017-5638 had been around for 9 years (unknown). It was actually CVE-2017-9805 that had been around for 9 years (unknown).

                                                1. 18

                                                  When ad-blocking was obscure, we could free-load off of the majority who fund services by viewing ads… now Apple is taking my free lunch! :/

                                                  1. 18

                                                    I clicked on the article. It came up and I started reading it. I didn’t get very far when the window turned black, and said I had to rotate the screen to view it “properly” on my phone. First, I’m not on a phone, thank you very much. Second, I’m on an iPad, using it in landscape mode because I’m using it as a laptop [1].

                                                    Fine, I turn the iPad to portrait mode. Page loads with this #@%@#$@$ vertical ad, covering the article, with no way to dismiss it. Thank you so very much. Thank you so very much that I’m not going to read your sob story about how blocking ads will destroy the Internet.

                                                    [1] No power. Using iPhone as hot spot. Still waiting for power company to restore power after Hurricane Irma.

                                                    1. 5

Upvoted for your honesty. That’s exactly what ad-blocking is. The malware-reduction argument some respond with is bogus. If they were serious about paying for what they consume and didn’t like malware, they’d just not use the ad-supported services. Free shit rocks, though, right? ;)

                                                      1. [Comment removed by author]

                                                        1. 21

                                                          I worked at a streaming media company. A lot of our ads were supplied by brokers like Google. They were mostly harmless. Frequently, however, we’d get custom ads for special events (launch events for movies, TV shows, and games).

                                                          The code in the special-event ads was a disaster. If I could, I’d clean it up so that it still worked. Problem mostly mitigated.

However, many of the embed snippets we’d receive were scripts that pulled the real ad from the advertising company’s servers. Complete crap. Almost all of them would engage in some kind of DOM manipulation. If you didn’t isolate the ads, they would break the layout.

                                                          The ad code would often try to include its own trackers for unique-visit tracking. Flash ads were very popular. So the companies would try page-takeover techniques to block everything and force you to view 15 seconds of crap. (And let’s not forget pop-over and pop-under ads.)

                                                          Very few companies were content with a simple image and an anchor tag to let the user follow-up for further information.

                                                          And that’s the chief problem with online ads. They try to be way too smart. Many want to interact with the user, or worse, “demand” you pay attention. Advertisers frequently have an attitude of “I paid for this, you’re going to give me some time.” They’ll say they just want to inform the public. But no. They want ROI.

                                                          And these are the “legit” advertisers. After that there are the skeezy “b” players (remember “X10”) who aren’t trying to rob you but are more like the used car salesman of the internet. Then there are the porn advertisers and lastly the purveyors of drive-by malware. This last group doesn’t even pay for ad space. They steal it.

                                                          And don’t forget the ad networks and information aggregators who want to build detailed dossiers about everyone (Google and Facebook are the most public of these). Who do you think invented persistent cookies?

                                                          No. Being suspicious of online advertising isn’t a sign of paranoia. It’s sensible.

                                                          1. 4

Why aren’t ads just regular websites served in an iframe? That way, their shitty code couldn’t break anything about your website. Each site could have its own ID, sent in a query parameter in the iframe URL, to track which websites provide impressions. The ad could still be as flashy and interactive as it wanted. The ad’s code could be as shitty as it wanted, and it wouldn’t have a negative impact on any users.

                                                            1. 9

That would make sense, but many ad networks ban displaying ads in iframes because they can’t check the contextuality of the ad to the page the user sees. The ban also helps mitigate fraud: if the ad could only “see” the iframe around it, it would be easy to fraudulently load the ad, via techniques ranging from something as simple as curl to more sophisticated uses of multiple JavaScript XHR requests.

                                                              Google still ban it today (AdSense Policy FAQ). Common phrasing for this is “posting on a non-content page”.

The online advertising industry created the cesspool, and now they’re whining that Apple, Google, Mozilla, and dozens of ad-blocking companies are trying to force them to clean up.

                                                              On a related note, it might seem weird that Google would try to force better practices with Chrome when they make their money on advertising. But for the most part, Google run a pretty tight ship and force advertisers to adhere to some reasonable standards.

                                                              Weeding out the worst players keeps the ecosystem sustainable. The last thing Google want to see is an end to online advertising. And it doesn’t hurt their chances of winning more advertising dollars from the gap left by their departure.

                                                              1. 6

                                                                because they can’t check the contextuality of the ad to the page the user sees.

Well, they can: iframe “busters” have been available for a long time, and since the ad network is usually more trustworthy than the publisher (to the advertiser, anyway), they could provide an interface to look up the page the user is on, well before location.ancestorOrigins (and generate errors if parent != top).

                                                                Indeed most of the display networks used to do this – all of them except Google, and now AdSense has edged everyone who wants to do impressions out.

                                                                On a related note, it might seem weird that Google would try to force better practices with Chrome when they make their money on advertising. But for the most part, Google run a pretty tight ship and force advertisers to adhere to some reasonable standards.

                                                                Google is probably the worst thing to come to advertising and is responsible for more ad fraud and the rise of blocking crap JavaScript than any other single force.

                                                                Google will let you serve whatever you want as long as their offshore “ad quality team” sees an ad. Everyone just rotates it out after 100 impressions and Google doesn’t care because they like money.

                                                                Google still lets you serve a page as an iframe – even if it has ten ads on it. Buy one ad, sell ten. Easy arbitrage. Even better if you can get video to load (or at least the tracking to fire). This has been trivial to stop for a long time, but hey, Google likes money.

Google’s advertising tools are amongst the worst in the world (slow, buggy, etc.) and make it difficult to block robots, datacentres, businesses, etc., using basic functionality that other tools support.

                                                                What’s amazing is Google’s PR. So many people love Android, good search, that quirky movie about an Intern, the promise of self-driving cars, and so on, that they don’t educate themselves about how Google actually makes their money out of fleecing advertisers and pinching publishers.

                                                                1. 1

                                                                  Iframe busting is a technique for content in the iframe to “bust out” and replace the page with itself. It’s primarily used for ad-takeover and to prevent clickjacking. It’s not a technique for accessing the DOM of the parent. Browser bugs aside, accessing the DOM of the parent requires the child have the same origin as the parent (or other assistance).

                                                                  location.ancestorOrigins might not give the ad network or advertiser the contextual information they want if the page the user is viewing varies by status (guest, authenticated user, basic membership, premium membership).

                                                                  It’s easier (and better for data gathering) for ad networks to demand they’re on the same page the user is viewing. Whether that’s a good thing for the end user probably doesn’t matter to many content providers as long as the ad network isn’t serving up malware (or causing other issues that might hurt the provider/user relationship).

                                                                  In short, you want to monetize your site, you find a way to convince users to pay, or you get advertising which means you play by the ad-networks’ rules.

                                                                  Google definitely has issues, but they’ve made it easy enough and, compared to their competitors, less problematic such that many content providers accept it.

                                                                  1. 1

                                                                    Iframe busting is a technique for content in the iframe to “bust out” and replace the page with itself. It’s primarily used for ad-takeover and to prevent clickjacking. It’s not a technique for accessing the DOM of the parent.

The same API that ad servers provide to iframes for doing these rich media operations also carries other capabilities, e.g. EyeBlaster’s _defaultDisplayPageLocation

                                                                    Since (hypothetically) the ad network is more trustworthy than the publisher, this could have been used to trivially unmask naughty publishers.

                                                                    The only reason I can come up with for the sell-side platforms not doing this is that they like money.

                                                                    Google definitely has issues, but they’ve made it easy enough and, compared to their competitors, less problematic such that many content providers accept it.

                                                                    They don’t really have any display/impression competitors for small sites anymore… although I’ve been thinking about making one.

                                                          2. 4

Well, I respect you for trying to avoid freeloading. I should also add that I think it’s ethical for people who otherwise avoid ad-supported sites to use ad blockers for security. Just trying to stop any sneaky stuff.

                                                            1. [Comment removed by author]

                                                              1. 2

That’s reasonable. Similar to AdBlock’s Acceptable Ads, where being obnoxious or sneaky is unacceptable but ads themselves are OK.

                                                          3. 5

                                                            I disagree with that viewpoint. It’s right up there with, “Our service would be secure if people would just stop requesting these specific URLs.”

                                                            I just don’t see ad-blocking as freeloading. It doesn’t make any sense to pay for something when there’s an equally good free alternative.

                                                            I’m a happy paying customer of GitHub, Fastmail, SmugMug, Amazon Prime, Flickr, Netflix, and probably some services I’m forgetting. At the same time, I’m not stupid, and I’m not going to be annoyed and look at ads.

                                                            1. 1

“Our service would be secure if people would just stop requesting these specific URLs.”

It’s certainly not. Managing the risk your product or service poses to consumers is totally different from taking a good you know is ad-supported, with ads built in by default, and stripping the benefit to the other party while enjoying the content. They’ve put work into something you enjoyed, and into a way to be compensated for it. You only put work into removing the compensation.

“It doesn’t make any sense to pay for something when there’s an equally good free alternative.”

I agree. I then make the distinction of whether I’m doing it in a way that benefits the author (ads, patreonage, even a positive comment or thanks) or just benefits me at their expense, since they didn’t legally stop me. I’m usually a pirate like most of the Internet, in that I surf the web with an ad blocker. I’m against ad markets and I.P. law, too broke to donate regularly, and favor paid/privacy-preserving alternatives where possible (i.e. my Swiss email). When I get past financial issues, I’ll be using donations for stuff where possible. I still do that occasionally. Meanwhile, you won’t catch me pretending I’m not freeloading off the surveillance profiles of others, on top of whatever they have on me.

                                                              1. 6

These anti-adblock sentiments seem to always assume the content creator will get paid if I don’t block the ads. But that assumes either (1) they get paid by impression, which is vanishingly rare, or (2) I would click on ads, which I won’t, blocked or not.

                                                                1. 1

                                                                  Now that’s a good counter worth thinking about. It still fits into my overall claim of freeloading, though.

                                                            2. 2

Mostly it doesn’t, which is why most of the time I don’t bother to look for ways to pay for it. But setting aside the vast majority of websites, which I might visit only once or twice, why should I go out of my way to avoid sites that don’t offer any (to me) reasonable way of paying for them?

From a practical point of view, using an ad-blocker means I don’t even know most websites’ approach to monetisation, if there is one. I do bail on those that notify me about my ad-blocking, which I guess is ethical in your book?

For what it’s worth, I do pay for a bunch of online services, back a few creators on Patreon, and sponsor/subscribe to a couple of news media organisations.

                                                              1. 2

                                                                why should I go out of my way to avoid sites that don’t offer any (to me) reasonable way of paying for them?

                                                                A good point. The authors concerned with money should at least have something set up to receive easy payments with a credit card or something. If they make it hard to pay them, the fault is partly on them when they don’t get paid.

                                                            3. 3

While I agree content needs to be paid for in some manner, network ads use a not-insignificant amount of bandwidth, which I pay for on my mobile data allowance and at home through my ISP. The infrastructure costs of advertising and spam email are not all borne by the producers of that content. From my perspective, the advertisers are not funding the content that I want…

                                                              1. 1

Well, that’s interesting. I can relate on trying to keep the mobile bill down. It still falls in with freeloading, where you don’t agree to offer back what they expect in return for their content. Yet it’s a valid gripe, which might justify advertisers choosing between getting ads blocked or something like progressive enhancement for ads: they offer text, a pic, and/or video, with what people see determined by whether a browser setting indicates they have slow or expensive Internet. So they always serve something, but less bandwidth is used when less is available.

                                                            1. 4

The story link is to an opinion piece based on a summary page that was written for a security research paper, so I’m suggesting “rant”. The source material is pretty interesting, however.

                                                              If you aren’t into computer or network security, you might not realize that SSL data can be legitimately intercepted and scanned for vulnerabilities by your security systems. The authors of the paper explore how prevalent that has become.

                                                              Here’s a link to a summary page written by one of the authors of the paper:

                                                              Understanding the prevalence of web traffic interception

                                                              And here’s a PDF link to the actual paper, which is the better read:

                                                              The Security Impact of HTTPS Interception

I think the conclusions they make about the prevalence of malware may not be justified by the data, given that they identified 24 different legitimate scanning systems but could only fingerprint six of them, but I agree strongly with the final conclusion: if you are going to install a network middlebox, you’d better make absolutely sure that you are comfortable with how it handles its end of the connection security. These products may apply different security standards, and many of them are not an upgrade over what an endpoint might do on its own.

                                                              1. 1

                                                                You don’t have to be one of those dreaded “capitalist bootlicker apologists” to recognize that inspecting inbound and outbound traffic (of their devices on their network) makes sense from a security perspective.

                                                              1. 4

For the more price-conscious, the T470 is a good alternative to the X1C5 when you account for the fact that the hard drive and memory are user-serviceable, so you can source cheaper third-party options. The battery life is quite good (double digits) as there’s an internal 3-cell and an external 6-cell.

                                                                1. 2

Still happy with my T450s. Built-in RJ-45, SD card, multiple USB, DisplayPort and VGA. And there’s a real docking station that works under OpenBSD. (Yeah, real time. Ports and display are immediately available. Had to write scripts to handle turning things on and off, but no reboot required.)

The main differences I see between it and the T470 are that the T470 has USB-C, HDMI instead of VGA, and maxes out at 32GB of RAM. Disappointing that there’s no WQHD option, though.

                                                                  Still, great laptop if you need more than ultraportable.

                                                                1. 1

                                                                  Good advice. I would add, though, that when working with a defect-tracking system, write the summary to reference the defect and defect summary:

                                                                  Defect #: Defect Summary.
                                                                  

                                                                  Or whatever format the system requires to link the commit to the defect. (s/defect/feature/g as needed)
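A hypothetical example of the format in practice (the ticket number and summary are invented):

1542: Fix off-by-one in search results pagination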

                                                                  1. 2

                                                                    I’ve worked in both environments. I see pair programming as more of a tooling question than a methodology. It’s like the IDE versus text editor question: if an IDE makes you more productive than a text editor, use an IDE.

                                                                    It’s the same with pair programming. I’m not a fan of pair programming but I’ve seen other developers excel at it. If it works for you, great! But don’t force the tool on all developers because it makes you more productive.[1]

                                                                    Some shops think PP ensures more than one developer knows the code base. That’s true to a point. Another option might be to focus on code hygiene and documentation. If you can afford to put two developers full time on every problem perhaps you can afford to let each developer spend 50% of their time on documentation.[2]

                                                                    Regarding the impact of open-door versus closed-door developers on productivity, I see that as orthogonal to PP. One developer behind a closed door knows as much or as little about what the rest of the team is doing as a pair behind a closed door. The key is emerging from behind the door and talking with the rest of the team. For many developers a company IRC server is the perfect solution. For others, weekly team lunch or daily stand-up.

[1] I prefer to collaborate by talking through the occasional knotty problem with another developer, typically at a whiteboard or over coffee. Rarely over a keyboard. (I really hate working on other developers’ keyboards when CapsLock isn’t remapped to Ctrl. Nothing slows me down more than a keyboard not set up for my typing style.)

[2] This is one of the reasons I prefer the BSDs (especially OpenBSD) to Linux. The base documentation is very good and useful for its intended purpose. It’s not just a check mark.

                                                                    1. 2

                                                                      faq14 used to say this:

                                                                      Recovering partitions after deleting the disklabel

                                                                      If you have a damaged partition table, there are various things you can attempt to do to recover it.

                                                                      Firstly, panic. You usually do so anyways, so you might as well get it over with. Just don’t do anything stupid. Panic away from your machine. Then relax, and see if the steps below won’t help you out.

(apparently, it was removed last year in this big cleanup)

                                                                      1. 4

                                                                        scan_ffs(8) still does:

                                                                        The basic operation of this program is as follows:

                                                                        1. Panic. You usually do so anyways, so you might as well get it over with. Just don’t do anything stupid. Panic away from your machine. Then relax, and see if the steps below won’t help you out.
                                                                      1. 3

I get the point of his article, but it isn’t the friendliest of defenses. It boils down to: if you like fast, optimized code, use C++. Otherwise, it’s confusing and painful.

                                                                        • “Because honestly, it’s a pretty strange language with a lot of warts, and why would you use it for any other reason?”
                                                                        • “Templates exemplify a lot of what I’ve just discussed. If you are fortunate enough to not be too familiar with them….”
                                                                        • “This system [Templates] is really awful, for a lot of reasons.”
                                                                        • “Most programmers would agree that UB [undefined behavior] makes the language confusing and difficult to understand. The situation is particularly bad because literally anything can happen during UB, and thus UB makes it very difficult to reason about degenerate program states.”
                                                                        • “Over time, if you’re like me, you may begin to experience Stockholm syndrome and start to actually enjoy writing C++.”

                                                                        Summary: “Writing fast code is fun, and while micro-optimizing memory copies and minimizing pointer indirection isn’t for everyone, it can be enjoyable if you have a knack for it.”