1. 10

    Perhaps it’s a quality of implementation issue?

    For example, in Rust it’s not all-or-nothing. You can write Vec<_> to hint it’s a vec, but omit the type of its element (useful for the collect() method that can build any container, but needs to know which). OTOH std::vector<auto> is not legal AFAIK.
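
    To illustrate the partial hint (a tiny sketch; the variable names are made up):

    // collect() can build many different containers, so we name the container
    // (Vec<_>) and let the compiler infer the element type on its own.
    let words = vec!["a", "bb", "ccc"];
    let lengths: Vec<_> = words.iter().map(|w| w.len()).collect();
    // lengths ends up as Vec<usize> without anyone spelling out usize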

    In Rust type inference is function-wide, so it often can work out the type “backwards”:

    let mut a = Vec::new(); 
    a.push("string"); // now we know it's a Vec of strings
    

    while in C++ auto is very local, and requires an explicit initializer (and you might end up with std::initializer_list instead of the type you wanted).
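
    For example (a hypothetical snippet, assuming C++17 for the last line):

    #include <initializer_list>
    #include <vector>

    int main() {
        auto a = {1, 2, 3};              // deduced as std::initializer_list<int>
        std::vector<int> b = {1, 2, 3};  // spelling out the type avoids the trap
        auto c = std::vector{1, 2, 3};   // C++17 CTAD: std::vector<int>
        return 0;
    }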

    Rust also doesn’t have function overloading and implicit type conversions, so the results of the inference are more certain.

    1. 6

      There is one huge difference: Rust is memory-safe, so if the compiler gets it wrong, it can only result in a compiler error or incorrect runtime logic. If the compiler gets it wrong in C++, you can wind up with a reference where you expected an owned value to be, and woo boy dangling dereference.

      1. 3

        Rust took its type inference from Haskell, right (Hindley-Milner)? Looks like you can hack your way to something close to overloading using traits, but it’s an ugly hack. I guess ad-hoc overloading could slow down the compiler significantly.

      1. 9

        What I always found funny is that even in languages like C with “no type inference,” the nested expressions still have implicit, inferred types. Like in this case:

        int i = some_function(some_other_function(1 + 3) & 0xF0);
        //                    \  what's some_function's arg?  /
        // Of course, you can figure it out by looking at some_function's signature,
        // but if you're willing to look at some_function's signature for its argument type,
        // then why aren't you willing to look at some_function's signature for its return type?
        
        1. 3

          To be fair though, there’s no overloading in C. Once you know what the function returns, you can count on that.

          In C++, the return type could be dependent on an overload from one of the arguments. And its type could be from a previous call that was also overloaded, etc.

          This is a tricky spot where auto is either amazing or annoying: if those types are wishy-washy because it’s templated, auto is great and perhaps one of the few ways to easily express those interdependent types. If it’s not, it might leave someone temporarily confused while they figure out the overloads.

          1.  

            Java specification even calls it type inference:

            https://docs.oracle.com/javase/tutorial/java/generics/genTypeInference.html

            Because it is!

          1. 4

            Rust uses the Hindley-Milner type system most commonly associated with ML-family languages, most famously Haskell.

            Nope. Rust stopped using H-M before even releasing 1.0. Here’s a description of it http://smallcultfollowing.com/babysteps/blog/2014/07/09/an-experimental-new-type-inference-scheme-for-rust/ and here’s the pull request https://github.com/rust-lang/rust/pull/15955

            1. 18

              What kind of comments/remarks with an “abrasive tone” have you seen that you would flag? (Please don’t link to them; just copy/paste the relevant parts.)

              In general, I find that Lobsters is doing pretty well – better than the vast majority of communities I’ve seen – but there are many discussions I don’t read.


              A personal note on “abrasive” communicating: as a non-native speaker from a country where the culture is very direct (Netherlands), it took me quite a long time to communicate well in English in the absence of body language (i.e. over text, like here).

              It’s one thing to know a language’s vocabulary, grammar, and idioms. It’s another thing to use them as a native speaker expects. I worked remotely for an Irish company for three years, and I learned a lot about communicating in English during that time. There have been quite a few cases where I was unintentionally more abrasive than I wanted to be. The feedback from coworkers about this was an invaluable learning experience, and if I re-read some stuff I wrote four years ago I often spot things I phrased far more abrasively than I intended.

              The sample size here is just one (i.e. just me); perhaps other people are just better at language/English than me. I suspect part of the problem was that my English was in an “uncanny valley”: very literate/fluent, but not quite good enough to fully grok the effect of everything I typed (sometimes this was just a matter of punctuation!). This is different from some of my other non-native coworkers, who were clearly non-native speakers: if they say something awkward/abrasive, then it’s clearly just because they’re not good at English. I was usually not given that benefit of the doubt.

              At any rate, my point here is that “abrasive tone” doesn’t mean “the author intended it like that” or “the author is a jerk”. Sometimes it clearly is intended (e.g. if you call someone an asshole there is little doubt), but oftentimes it’s more nuanced.

              Hence my request for some examples.

              1. 8

                I’m probably guilty of knowingly making abrasive comments that motivate this proposal:


                The thing about alcohol and money kind of makes sense, but if you actually think the human dependency on food and water is analogous to meth addiction, then you are a moron.


                You completely and utterly missed the point.

                Mastodon, Synapse, and GNU Social all implement a mixture of blacklists, CAPTCHAs, and heuristics to lock out spambots and shitposters. The more popular they get, the more complex their anti-spam measures will have to get. Even though they’re not identical to internet mail (obviously), they still have the same problem with spambots.


                Uh, what?

                […]

                I have no idea what the OP is doing, but it’s weird.


                Most of them are returning like for like (I’m pretty sure the one comparing meth to food, in particular, was not made in good faith) but I probably should have just not responded at all. The last one, I probably should’ve left out the first paragraph, since it seems unnecessary.

                1. 4

                  Most of them are returning like for like (I’m pretty sure the one comparing meth to food, in particular, was not made in good faith) but I probably should have just not responded at all.

                  Rudeness is a systemic factor like anything else. If people are being rude because one person is constantly trolling them, then sure, we should call out the rude people… but we should also do something about the troll.

                  1. 2

                    How would you feel about:

                    1. People downvoting the comment(s) with “Unkind”?
                    2. People replying with comments like “Please change your tone, no need for that language”?

                    Would you be annoyed/offended by one or the other (or neither/both)? And what, if anything, would help you change your tone? This is assuming your comment is either in reply to a “nice” comment, or that the parent comment got the same treatment (a downvote or replies asking them to change their tone).

                    Ultimately, I think everyone here wishes to change the community for the better, and silencing or kicking members that contribute with insightful, if aggressive, comments should not be a goal.

                    1. 2

                      I would prefer a PM to either of them, since it allows me to edit the comment without cluttering up the main thread.

                    2. 2

                      If you’re interested in some feedback (I’m letting myself risk a guess that this might be your motivation behind posting this comment?): personally, I would agree that the first two do sound abrasive, at least to me. The last one actually not so much; the “Uh, what?” seems to express surprise and lack of understanding, which I believe is more than OK (problems with understanding happen often in discussions, as it’s sometimes genuinely hard to convey thoughts precisely in any language), although an “I think I may not understand something” might be an even gentler variant. As to the very last sentence, I’d add “to me”, i.e. “it’s weird to me”. That makes it less of an attempt at an absolute, authoritative judgement and more of a subjective opinion, which tends to be easier to receive.

                      As to the 2nd comment, again, changing the “You completely and utterly…” prefix to a softer one, say “I believe you may have…”, could give the interlocutor some generous benefit of the doubt. In the first one, I’d say not responding may not be that bad of an idea, especially per the Internet’s very own “do not feed the troll” adage from the older days. Or, at least, calmly explaining that you feel the interlocutor may not have spoken in good faith gives them some chance to rethink their statement, and maybe take it back or rephrase.

                      On the other hand, as far as my experience goes, namecalling (“moron” etc.) seems to purely aggravate people; I don’t think I’ve ever seen anybody react in any good way to it.

                      Hope this helps! And… really sorry if I misunderstood your motivation!…

                      1. -2

                        I’m pretty sure the one comparing meth to food, in particular, was not made in good faith

                        I was, though. You made an inherent assumption that survival is worth stealing for but pleasure is not, and your belief in that particular valuation is so True that you treat it as inviolable and assume anybody who questions it must be trolling.

                      2. 2

                        I work with someone who writes like you said you used to. I remind myself that the problem is with me, not him, and besides, his communications are very clear and valuable.

                        1. 7

                          Tell him!

                          I had no idea until people told me, and the only reason people told me was because I asked, and the only reason I asked is because some people told me some people found it hard to get along with me (I had no idea!) Turns out the adjustment in phrasing was small, but it made all the difference in my relationship with some coworkers.

                          I was more than happy to make these adjustments, but … I can’t make them if I don’t know that I need to.

                          It’s kind of like complaining about someone’s music being too loud. I’ve complained maybe 5 or 6 times over the last ten years, and most of the time the response was “I’m so sorry, I had no idea!” Some people are assholes who just don’t care (happened twice), but most just don’t realize how their behaviour is affecting others, and they have no way of knowing unless you tell them.

                          1. 1

                            No, because he shouldn’t change. He communicates clearly and accurately, not aggressively, and there should be more people like him.

                            1. 4

                              English is a tool. If I were swinging an axe incorrectly, I’d want to be told, so I could be safer and more efficient with my efforts.

                              1. 2

                                Is that to help you or him? You might want a cultural change, more people “like him”. But do you think that is the best thing for him as an individual and his career?

                                Here is someone who was literally in the position that person is – and they are screaming “Tell him!”.

                        1. 5

                          Uh, what?

                          Literally every responsive web site I’ve ever written or even seen (other than flak.tedunangst.com, which not-coincidentally has a smaller-than-probably-intended font size on my Android device) uses viewport width=device-width. That’s what Lobsters is using. That’s what GitHub’s mobile site uses. That’s what I use. And they employ stylesheets that look great on Safari, Chrome, and Firefox’s mobile versions. And it’s the same stylesheet used on desktop. And, in case you’re going to complain about sites that disable zoom, that’s a separate non-default directive and you don’t have to enable it on your site.

                          I have no idea what the OP is doing, but it’s weird.

                          1. 3

                            I looked at his stylesheet; he has one style for ≤1280 pixels and one for wider. Phone browsers are MUCH narrower than 1280px, and the stylesheet prioritises the margins over the text when it allocates width, so he uses the viewport tag to make the phones pretend their screens are 720px wide and to scale down the fonts to make that happen. That way the margins look okay, but the text is much smaller than intended.

                            I agree, weird.

                            @tedu if you read this, the “px” unit isn’t physical pixels in CSS, it’s 1/96 inch. There are reasons for that too, mostly historical, but in the end it comes down to device oddities: some devices’ resolutions aren’t well described in terms of addressable squares, including many printers and some phone screens. Your CSS is based on the idea that all devices are roughly 33cm wide.

                            1. 2

                              This is helpful. Though I’ll note that I’ve used the same variant of stylesheet forever. If I delete the meta viewport tag entirely, it renders pretty much exactly the same. I only added the viewport so that it would stop bouncing to top after navigating back.

                              I guess you could say the font is too small? But it’s the same size as I see on lobsters, or ars technica, or many other sites.

                              1. 5

                                If I delete the meta viewport tag entirely, it renders pretty much exactly the same.

                                Running without a viewport tag is essentially a “legacy mode” for Mobile Safari and its clones. It’s designed to make pages that assume everyone’s using a 1024x768 computer screen render without producing completely broken layouts, at the cost of requiring the user to zoom and pan. By running Safari in that mode while trying to make a site that’s mobile-friendly, you are using the browser in a way that is contrary to Apple’s design intent. And it don’t work too well, do it?

                                In contrast, look at https://notriddle-more-interesting.herokuapp.com/ and https://notriddle-more-interesting.herokuapp.com/assets/style.css. Notice how the style sheet contains no media queries at all, and how it does not perform user agent sniffing. Most of the stuff in that stylesheet is done in multiples of em, relying entirely on the browser’s defaults to be sensible, and when you run Mobile Safari with viewport width=device-width, they are. All I have to do beyond that is implement my margins using margin:auto and max-width.

                                If you want a simpler and cleaner example, and one that I didn’t write, look at http://bettermotherfuckingwebsite.com/. Notice, once again, that it contains no iphone-specific stylesheet (no media queries, no user agent sniffing) and it still looks great on an iphone. The magic incantations are:

                                • viewport with width=device-width
                                • the content margins are implemented using max-width, rather than setting specific margins, so the margins grow and shrink as the browser grows and shrinks and the content is never forced into a tiny sliver (a minimal sketch of this follows below)
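
                                Roughly, the whole trick is a viewport tag plus a few lines of CSS (a minimal sketch, not the actual stylesheet; the 42em figure is arbitrary):

                                <meta name="viewport" content="width=device-width">

                                body {
                                  max-width: 42em;   /* the content never gets wider than this */
                                  margin: 0 auto;    /* the side margins absorb whatever space is left */
                                  padding: 0 1em;    /* a little breathing room on narrow screens */
                                }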
                              2. 2

                                Well, I tried again and I guess it works. As you said, I was trying to render to a smallish desktop-sized canvas, then scale down to phone screen size. It’s easier for me to see what that looks like with my own desktop browser without resizing it down to actual phone size. I’m generally dissatisfied with special mobile styling and wish it just worked more like a tiny desktop. But no point fighting the whole world. Thanks.

                                1. 1

                                  You don’t need to use a mobile.

                                  If you use chrome or firefox, just drag the tab for your site out of the browser window. You’ll get a new, separate window that you can resize. When you make that narrow, both chrome and firefox will reapply all style elements, and you can see how your site will work on mobile.

                                  It may be a configuration option (I use KDE on linux), but when I resize the browser window, the browser updates the layout as I resize. I can get a quick look at the full range of widths just by moving the mouse slowly right and left for a few seconds.

                              3. 1

                                I’m mostly annoyed that the viewport tag is necessary at all. Browsers should just work without custom extensions.

                              1. 1

                                Looks like it was deleted.

                                1. 1

                                  the “cached” version still works

                                  1. 1

                                    Google

                                    1. That’s an error.

                                    The requested URL /search?q=cache:VmhUzso1ghQJ:https://twitter.com/DroidAlexandra/status/1119207230782550017+&cd=1&hl=en&ct=clnk&gl=us was not found on this server. That’s all we know.

                                    :(

                                1. 15

                                  The answer of course is, yes, it’s technically possible to implement the one infinitely scrolling site I’ve ever used that doesn’t completely suck:

                                  • Can the user hit “back” and return to the exact same place? Yes. When within a topic, it uses history.replaceState to keep track of where you are, and it will use that to return you if the page reloads.

                                  • Is there paging for when the JavaScript breaks? Yes. Non-JavaScript users get a completely different page, with old-school pagination.

                                  • Does the page have a footer? No (unlike every other question, answering in the negative to this one is considered good). If someone uses a theme component to add one, they get what they deserve.

                                  • Can a keyboard user access all other content on the page? Yes. The infinitely-added content is the last content on the page. Nothing is after it in the DOM.

                                  • Can you share a URL to a specific place on the page? Yes. See item 1.

                                  • Can a user easily jump ahead a few “pages” to quickly get to content much further down the list? Yes. That’s what the thing on the side does.

                                  • Does the memory footprint of the page dramatically increase after just a couple new “pages?” No. It removes items from the DOM when they go off-screen.

                                  • Is there a way to disable automatic infinite scrolling and lean on standard paging? No. Not other than disabling JavaScript.

                                  • Have you conducted any user tests? Yes.

                                  • Are you satisfying a use case that has come from research or user request? Yes. Infinite scrolling is employed to encourage people not to jump ahead to the last page.

                                  • Do you have any analytics/tracking to measure success? Yes. Users have complained about it, but it’s partially feeding into the anti-spam system, so it’s easy to justify.

                                  … but have you looked at Discourse’s code?! They not only reimplement parts of the browser, they reimplement parts of Ember.JS because the stock code is too slow. They have to implement debouncing for history.replaceState because updating it while scrolling caused scrolling to stop in Chrome. And the reusing DOM nodes thing is also pretty complicated. And there are still known issues.
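
                                  For reference, the basic debouncing idea is simple enough on its own (a rough sketch, not Discourse’s actual code):

                                  // Only record the scroll position once scrolling has paused,
                                  // instead of calling history.replaceState on every scroll event.
                                  let pending: number | undefined;

                                  window.addEventListener("scroll", () => {
                                    if (pending !== undefined) {
                                      clearTimeout(pending);
                                    }
                                    pending = window.setTimeout(() => {
                                      history.replaceState({ scrollY: window.scrollY }, "", location.href);
                                    }, 250);
                                  });

                                  The hard part is everything around it: the interaction with scroll performance, the DOM-node recycling, and the browser quirks behind those known issues.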

                                  Discourse is slow enough that opening a Discourse page causes my laptop fan to briefly spin up. Once it’s running it’s fine, but it’s noticeable enough that I figure infinite scrolling is simply not worth the implementation complexity of doing it right.

                                  1. 5

                                    Looking at that from an iPad now, I’m having a couple of issues:

                                    • I don’t see the URL change as I load more content, and reloading takes me to the top. Maybe iOS Safari doesn’t support replaceState? Luckily, opening a thread and hitting back takes me to where I was, but if I’m leaving the page open in a background tab for a while and it gets cleaned out of RAM, I’ll lose my place. That’s pretty bad.
                                    • When scrolling to the bottom of the screen on iOS, the content “rubber bands” as it scrolls past the screen and then up again. Continuing to do the scroll gesture continues the rubber banding. The page doesn’t start loading more content until the rubber banding has completely stopped. To scroll a couple pages down, I have to scroll down, stop scrolling, wait for the animation to come to a complete halt, wait a moment for the page to notice, wait for the new content to actually load, resume scrolling, reach the end again, wait for the animation to end again, etc.

                                    Those two issues compound: if you’ve been working through a dozen pages, and the page ends up reloaded and you’re at the top again, scrolling quickly through those dozen pages would be infuriating.

                                    It also has the in-page search issue mentioned by craftyguy.

                                    Discourse does it better than most, but I’m not entirely sure it’s possible to build a good infinitely scrolling list which works better than pagination in every browser on every device.

                                    Yes. Infinite scrolling is employed to encourage people not to jump ahead to the last page.

                                    I have to ask, why is that desirable? I mean, I get why you may not want a “last page” button, but if I want to find content from a month ago, why is it desirable to discourage me from finding that content?

                                    1. 2

                                      I’ve written a longer-form response as a separate post. https://notriddle-more-interesting.herokuapp.com/D35K4ZCCNG6RB

                                      1. 1

                                        I don’t see the URL change as I load more content, and reloading takes me to the top.

                                        The latest view and the in-topic view are different. The in-topic view is the one that uses replaceState.

                                        The latest view doesn’t do permanent URLs because it’s constantly shifting around every time something gets bumped anyway. I’m not entirely convinced that this is the right trade-off to make, but if you’ve ever bookmarked page 5 and had the stuff in page 5 change out from under you, I’m sure you can understand why they don’t bother.

                                        When scrolling to the bottom of the screen on iOS, the content “rubber bands” as it scrolls past the screen and then up again.

                                        Yeah, that I’ll agree is an actual problem… I’ve got nothing. No idea whatsoever how to solve it. Safari is throttling your onscroll events, and they’re probably right to do so considering the fact that they want to make sure it doesn’t stutter while it scrolls. I really have no idea what either group could possibly do about this.

                                        I have to ask, why is that desirable?

                                        The latest view and the in-topic view are different. I guess it doesn’t make much sense for the latest view to restrict it like that (though I’m pretty sure Discourse team would point you at the software’s own search functionality, which has a way to filter by date, rather than fighting with the pagination system).

                                        But the reason it’s desirable to have people read the entire thread instead of jumping to the bottom? Encouraging people to read stuff is kind of core to the whole Discourse philosophy, and it’s the cool part that I want to see pushed elsewhere.

                                    1. 1

                                      These days I just use dns-based blocklists. Just run dnsmasq with an adblock blocklist locally, or on your home network with a raspberry pi.

                                      1. 5

                                        You say that, and it’s fine for some sites, but a lot of them have anti-adblock scripts baked in alongside the site logic. The only way you’re going to work around that is with redirect rules, like what uBlock Origin does. It also isn’t possible to do annoyance removal, like getting rid of fixed banners, using DNS.

                                        1. 3

                                          For the sites that it doesn’t work for, I close the tab and move on. It wasn’t worth my time anyway.

                                          1. 1

                                            To me, attempting to get blanket web-wide annoyance removal feels like freeloading. That’s not why I block ads. It’s my prerogative to avoid privacy invasion, malware vectors, and resource waste; if the site owner goes to lengths to make it hard to get the content without those, that’s their prerogative, and I just walk away. I’m not going to try to grab something they don’t want me to have. (The upshot is that I don’t necessarily even use an ad-blocker, I simply read most of the web with cookies and Javascript disabled. If a page doesn’t work that way, too bad, I just move on.)

                                            1. 1

                                              I figure that living in an information desert of my own making is not a very effective form of collective action. There simply aren’t enough ascetics to make it worth an author’s time testing their site with JavaScript turned off. And if it isn’t tested, then it doesn’t work. If even Lobsters, a small-scale social site that you totally could’ve boycotted, can get you to enable JavaScript, then it’s a lost cause. Forget about getting sites with actual captive audiences to do it.

                                              People need to encourage web authors to stop relying on ad networks for their income, and they need to do it without becoming “very intelligent”. An ad blocker that actually works, like uBlock Origin, is the only way I know of to do that; it allows a small number of people (the filter list authors) to sabotage the ad networks at scale, in a targeted way.

                                              1. 1

                                                Thank you for bringing up Mr. Gotcha on your own initiative, because that sure feels like what you’re doing to me here. “You advocate for browsing with Javascript off. Yet you still turn it on in some places yourself.”

                                                That’s also my objection to the line of argument advanced in the other article you linked: “JavaScript is here. It is not going away. It does good, useful things. It can also do bad, useless, even frustrating things. But that’s the nature of any tool.” I’m sorry, but the good-and-useful Javascript I download daily is measured in kilobytes; the amount of ad-tech Javascript I would be downloading if I didn’t opt out would be measured in at least megabytes. That’s not “just like I can show you a lot of ugly houses”; it inverts the argument to “sure, 99.9% of houses are ugly but pretty ones do exist as well, you know”. Beyond that, it’s a complete misperception of the problem to advocate for “develop[ing] best practices and get[ting] people to learn how to design within the limits”. The problem would not go away if webdevs stopped relying on Javascript, because the problem is not webdevs using Javascript, the problem is ad-tech. (And that, to respond to Mr. Gotcha, is why I enable JS in some places, even if I mostly keep it off.)

                                                In that respect I don’t personally see how “if you insist on shovelling ads at me then I’ll just walk away” is a lesser signal of objection than “then I’ll crowdsource my circumvention to get your content anyway”. But neither seems to me like a signal that’s likely to be received by anyone in practice anyway, and I think you operate under an illusion if you are convinced otherwise. I currently don’t see any particularly effective avenue for collective action in this matter, and I perceive confirmation of that belief in the objectively measurable fact that page weights are inexorably going up despite the age and popularity of the “the web is getting bloated” genre. All webbie/techie people agree that this has to stop, and have been agreeing for years, yet it keeps not happening, and instead keeps getting worse. Maybe because business incentives keep pointing the other way and defectors keep being too few to affect that.

                                                Until and unless that changes, all I can do is find some way of dealing with the situation as it concerns me. And in that respect I find it absurd to have it suggested that I’m placing myself in any sort of “information desert of my own making”. Have you tried doing what I do? You would soon figure out that the web is infinite. Even if I never read another JS-requiring page in my life, there is more of it than I can hope to read in a thousand lifetimes. Nor have I ever missed out on any news that I didn’t get from somewhere else just as well. The JS-enabled web might be a bigger infinity than the non-JS-enabled web (I am not even sure of that, but let’s say it is), but one infinity’s as good as another to this here finite being, thank you.

                                                1. 2

                                                  But neither seems to me like a signal that’s likely to be received by anyone in practice anyway.

                                                  I, personally, can handle a script blocker and build my own custom blocking list just fine. I can’t recommend something that complex to people who don’t even really know what JavaScript is, but I can recommend uBlock Origin to almost anyone. They can install it and forget about it, and it makes their browser faster and more secure, while still allowing access to their existing content, because websites are not fungible. Ad networks are huge distributors of malware, and I don’t mean that in the “adtech is malware” sense, I mean it in the “this ad pretends to be an operating system dialog and if you do what it says you’ll install a program that steals your credit card and sells it on the black market.” I find it very easy to convince people to install ad blockers after something like that happens, which it inevitably does if they’re tech-illiterate enough to have not already done something like this themselves.

                                                    uBlock Origin is one of the top add-ons in both Chrome’s and Firefox’s stores. Both sites indicate millions of users. Ad blocker usage is estimated at around 20% in the United States, 30% in Germany, and similar levels in other countries, while the percentage of people who browse without JavaScript is around 1%. I can show you sites with anti-adblock JavaScript that doesn’t run when JavaScript is turned off entirely and so can be defeated by using NoScript, indicating that they’re more concerned about ad blockers than script blockers. Websites that switched to paywalls cite lack of profitability from ads, caused by a combination of ad blockers and plain old banner blindness.

                                                    Don’t be fatalistic. The current crop of ad networks is not a sustainable business model. It’s a bubble. It will burst, and the ad blockers are really just symptomatic of the fact that no one with any sense trusts the content of a banner ad anyway.

                                                  1. 1

                                                      Oh, absolutely. For tech-illiterate relatives for whom I’m effectively their IT support, I don’t tell them to do what I do. Some of them were completely unable to use a computer before tablets with a touchscreen UI came out – and still barely can, like having a hard time even telling text inputs and buttons apart. Expecting them to do what I do would be a complete impossibility.

                                                    I run a more complex setup with minimal use of ad blocking myself, because I can, and therefore feel obligated by my knowledge. And to be clear, for the same reason, I would prefer if it were possible for the tech-illiterate people in my life to do what I do – but I know it simply isn’t. So I don’t feel the same way about those people using crowdsourced annoyance removal as I’d feel about using it myself: I’m capable of using the web while protecting myself even without it; they aren’t.

                                                    It’s a bubble.

                                                    I’m well aware. It’s just proven to be a frustratingly robust one, quelling several smaller external shifts in circumstances that could have served as seeds of its destruction – partly why I’m pessimistic about any attempt at accelerating its demise from the outside. Of course it won’t go on forever, simply because it is a bubble. But it’s looking like it’ll have to play itself out all the way. I hope that’s soon, not least because the longer it goes, the uglier the finale will be.

                                                    And of course I would love for reality to prove me overly pessimistic on any of this.

                                          2. 2

                                            I use /etc/hosts as a block list, but it’s a constant arms race with new domains popping up. I use block lists like http://someonewhocares.org/hosts/hosts and https://www.remembertheusers.com/files/hosts-fb but I don’t want to blindly trust such third-parties to redirect arbitrary domains in arbitrary ways.

                                             Since I use NixOS, I’ve added a little script to my configuration.nix file which, when I build/upgrade the system, downloads the latest version of these lists, pulls the source domain out of each entry, and writes an /etc/hosts that sends them all to 127.0.0.1. That way I don’t have to manually keep track of domains, but I also don’t have to worry about phishing, since the worst that can happen is that legitimate URLs (e.g. a bank’s) get redirected to 127.0.0.1 and error out.
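
                                             The transformation itself is tiny; roughly this shape (a sketch of the idea, not my actual Nix code, and the output filename is made up):

                                             # Fetch a hosts-format list, keep the domain column, and point every entry at 127.0.0.1.
                                             curl -s http://someonewhocares.org/hosts/hosts \
                                               | awk '$1 == "127.0.0.1" || $1 == "0.0.0.0" { print "127.0.0.1 " $2 }' \
                                               > blocked-hosts.generated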

                                            1. 2

                                              For anyone interested in implementing this without pi-hole, I have a couple scripts on github which might help. I adapted them from the pi-hole project awhile back when I wanted to do something a bit less fully-featured. They can combine multiple upstream lists, and generate configurations for /etc/hosts, dnsmasq, or zone files.

                                            1. 39

                                              We need a name for this pattern around network protocols: “Embrace, Capture, Break away, Lock-in”

                                              • Embrace a communication standard
                                              • Capture: attract a large user base
                                              • Break away: break backward compatibility and/or provide a worse UX for those outside of your walled garden
                                               • Lock-in: corner the captured user base

                                               Google did this with Google Talk vs XMPP, email (try running your own mailserver), AMP, RSS…

                                              1. 14

                                                 Email is still mostly unmolested if you understand the security and spam context; it’s not that Google made it impossible to run your own SMTP server, but in order to do so and not get flagged as spam, there are a lot of hoops to jump through. IMHO this is a net benefit: you still have small email providers competing against Gmail, but much less spam.

                                                1. 15

                                                   Email is mostly unmolested because it’s decentralized and federated, and a huge amount of communication crosses between the major players in the space. If Google decided they wanted to take their ball and go home, they would be cutting Gmail off from Yahoo mail, from all corporate mail servers, and from many other small domains.

                                                  If we want to make other protocols behave similarly, we need to make sure that federation isn’t just an option, but a feature that’s seamless and actively used, and we need a diverse ecosystem around the protocols.

                                                  To foster a diverse ecosystem, we need protocols that are simple and easy to implement, so that anyone can sit down for a week in front of a computer and produce a compatible version of the protocol from first-enough principles, and build a cooperating tool, to diffuse the power of big players.

                                                  1. 9

                                                    So how do you not get flagged for spam? I want to join you. I run my own e-mail server and have documented the spam issue here:

                                                    https://penguindreams.org/blog/how-google-and-microsoft-made-email-unreliable/

                                                     The only way to combat Google and Microsoft’s spam filters is sending my e-mail, then texting my friend to say, “Hey, I sent you an e-mail. Make sure it’s not in your spam folder.” Usually if they reply, my e-mail will now get through… usually. Sometimes it gets dropped again.

                                                    I have DKIM, DMARC and SPF all set up correctly. Fuck Gmail and fuck outlook and fuck all the god damn spammers that are making it more difficult for e-mail to just fucking work.

                                                    1. 3

                                                      Forgive the basic question: do you have an rDNS entry set for your IP address so a forward-confirmed reverse DNS test passes? I don’t see that mentioned by you in your blog post, though it is mentioned in a quote not specifically referring to your system.

                                                      It’s not clear who your hosting provider (ISP) is, though the question you asked them about subnet-level blocking is one you could answer yourself via third-party blacklist provider (SpamCop, Spamhaus, or many others of varying quality) and as a consequence work with them on demonstrable (empirical) sender reputation issues.

                                                      1. 8

                                                        Yes I’ve been asked that before and haven’t updated the blog post in a while. I do have reverse DNS records for the single IPv4 and 2 IPv6 addresses attached to the mail server. I didn’t originally, although I don’t think it’s made that big a difference.

                                                        I’ve also moved to Vultr, which blocks port 25 by default and requires customers explicitly request to get it unblocked; so hopefully that will avoid the noisy subnet problem so often seen on places like my previous host, Linode.

                                                        I think a big factor is mail volume. Google and Microsoft seem to trust servers that produce large volumes of HAM and I know people at MailChimp that tell me how they gradually spin up newer IP blocks by slowly adding traffic to them. My volume is very small. My mastodon instance and confluence install occasionally send out notifications, but for the most part my output volume is pretty small.

                                                        1. 8

                                                          Email is inherently hard, especially spam filtering; Google and Microsoft just happen to be the largest email providers, so it appears to be a Google or Microsoft problem, but I don’t think it is.

                                                          E-mail was once the pillar of the Internet as a truly distributed, standards-based and non-centralized means to communication with people across the planet.

                                                          I think you’re looking through rose-tinted glasses a bit. Back in the day email was also commonly used to send out spam from hijacked computers, which is why many ISPs now block outgoing port 25, and many email servers disallow emails from residential IPs. Clearly that was suboptimal, too.

                                                          Distributed and non-centralized systems are an exercise in trade-offs; you can’t just accept anything from anyone, because the assholes will abuse it.

                                                          1. 4

                                                            Cheap hosting is very hard to run a mailserver from because the IP you get is almost certainly tainted.

                                                            Having valid rDNS, SPF & DMARC records helps.

                                                      2. 13

                                                        It’s also not really a Google issue; many non-Google servers are similarly strict these days, for good reasons. It’s just that Google/Gmail is now the largest provider, so people blame them for not accepting their badly configured email server and/or wildly invalid emails.

                                                        I’ve worked a lot with email in the last few years, and I genuinely and deeply believe that at least half of the people working on email software should be legally forbidden from ever programming anything related to email whatsoever.

                                                        1. 2

                                                          In other words, Google didn’t have to break email because email has been fundamentally broken since before they launched GMail.

                                                          Worse, newer protocols like Matrix and the various relatives of ActivityPub and OStatus don’t fix this problem.

                                                          1. 7

                                                            Matrix, ActivityPub and OStatus don’t fix Email? Well it’s almost as if they are trying to solve other problems than internet mail.

                                                            1. 3

                                                              You completely and utterly missed the point.

                                                              Mastodon, Synapse, and GNU Social all implement a mixture of blacklists, CAPTCHAs, and heuristics to lock out spambots and shitposters. The more popular they get, the more complex their anti-spam measures will have to get. Even though they’re not identical to internet mail (obviously), they still have the same problem with spambots.

                                                              1. 11

                                                                Those problems are at least partly self-inflicted. There’s nothing about ActivityPub which requires you to rehost all the public content that shows up. You can host your own local public content, and you can send it to other instances so that their users can see it.

                                                                Rehosting publicly gives spammers a very good way to see and measure their reach. They can tell exactly when they’ve been blocked and switch servers. Plus all the legal issues with hosting banned content, etc.

                                                                1. 2

                                                                  You’re acting as if that ONE problem (abusive use) is THE only problem and the rule and guide with which we should judge protocols.

                                                                  While a perfectly reasonable technocratic worldview, I think things like usability are also important :)

                                                                  1. 9

                                                                    In general, you’re right. A well-designed system needs to balance a lot of trade-offs. If we were having a different conversation, I’d be talking about usability, or performance, or having a well-chosen set of features that interact with each other well.

                                                                    But this subthread is about email, and abusive use is the problem that either causes or exacerbates almost every other problem in email. The reason why deploying an email server is such a pain is anti-spam gatekeeping. The reason why email gets delayed and silently swallowed is anti-spam filtering. The reason why email systems are so complicated is that they have to be able to detect spam. Anti-backscatter measures are the reason why email servers are required to synchronously validate the existence of a mailbox for all incoming mail, and this means the sending SMTP server needs to hold open a connection to the recipient while it sifts through its database. The reason ISPs and routers block port 25 by default is an attempt to reduce spam. More than half of all SMTP traffic is spam.

                                                                    If having lots of little servers is your goal, and you don’t want control of your new federated protocol to end up concentrated in a small number of giant servers, then you do need to solve this problem. Replicate email’s federation method, and you get email’s emergent federation behavior.

                                                          2. 5

                                                              XMPP has a lot of legitimate issues. Try setting up an XMPP video chat between a Linux and a macOS client. I’d rather lose my left arm than try doing that again.

                                                            1. 7

                                                              Desktop Jingle clients never really matured because it wasn’t a popular enough feature to get attention.

                                                              These days I expect everyone just uses https://meet.jit.si because it works even with non-XMPP users and no client

                                                              1. 0

                                                                  I just got Jitsi working with docker-compose at meet.dougandkathy.com – not headache-free, but there’s no way I could have built it myself.

                                                              2. 1

                                                                  Audio, video, and file transfer are still very unreliable on most IM platforms. Every time I want to make an audio or video call with someone, we have to try multiple applications/services and use the first one that works.

                                                                1. 0

                                                                    Microsoft Teams does this pretty well, across many platforms. Linux support is (obviously, I guess) still a bit hacky, but apparently it’s possible to get it working as well.

                                                              1. 3

                                                                A pet peeve of mine: Why does Mozilla participate in the WHATWG group? Who benefits from a Living Standard (an oxymoron if ever there was one)? Isn’t this exactly what is happening there too?

                                                                1. 10

                                                                  Why would they leave WHATWG? Mozilla doesn’t have more power to sway standards going alone than within WHATWG.

                                                                  WHATWG mostly focuses on documenting the reality, so it merely reflects the power dynamics between browser vendors. If something exists and is necessary for real-world “web compatibility”, even if that’s Chrome’s fire-and-motion play, it still gets documented, because that’s just what has to be supported to view the web as it exists.

                                                                  1. 6

                                                                    WHATWG mostly focuses on documenting the reality

                                                                        This is how you get to OOXML. Why should Mozilla invest resources in documenting what Chrome does? Further, why should Mozilla legitimize what Chrome does by implementing non-standards-conforming behavior? What was wrong with the original URL standard from the IETF?

                                                                    Also, do read google’s critique of OOXML, especially the “why multiple standards aren’t good” question, which is relevant in this case (because a living standard is no standard at all).

                                                                    1. 3

                                                                      What was wrong with the original URL standard from IETF?

                                                                      It wasn’t reflecting the reality.

                                                                      The old W3C and IETF specs are nice works of fiction, but they’re not useful for building browsers, because they don’t document the actual awful garbage the web is built from.

                                                                      1. 2

                                                                        This is how you get to OOXML.

                                                                            If OOXML had been a reasonably complete standard, good enough to actually allow you to interoperate with Microsoft Office, I for one would have wholeheartedly supported it. It isn’t.

                                                                    2. 5

                                                                        Could you elaborate on what you’d prefer they do instead of continuing to participate in WHATWG, of which they’re one of the 3 founding members?

                                                                      Do you believe they should:

                                                                      • stop adding things to the Web platform and just focus on fixing existing bugs?
                                                                      • keep advancing the Web platform but do outreach primarily through their own open mailing lists and open bug tracker?
                                                                      • do outreach via the W3C? (Note that Mozilla founded WHATWG, along with Apple and Opera—and without Google, which wouldn’t release Chrome for another 4 years—because of their frustrations with the W3C.)
                                                                      • something else entirely?

                                                                      Genuine question, tried to avoid making any option sound unreasonable.

                                                                      Similarly, why is a Living Standard an oxymoron? I believe all stakeholders in the Web platform benefit from the Living Standard:

                                                                      • implementors of user agents have a centralized place to research and discuss interoperability of every API in the Web platform, with significantly more detail than MDN or caniuse.com (the next best places I know of)
                                                                      • web developers both benefit indirectly, from the improved interoperability, and benefit directly by being able to go to the centralized place when they need spec-level detail about the operation and interop of Web APIs
                                                                      • web consumers benefit from the improved compatibility between web pages/webapps and user agents
                                                                      1. 3

                                                                        why is a Living Standard an oxymoron?

                                                                          If you have a ‘living standard’, then there is a very interesting ramification. You can have a browser that correctly implements html5 as of one date, and a webpage that’s written in correct html5 as of another, and yet the webpage will not render correctly, because the standard changed in between. At that point, ‘html5’ doesn’t mean anything anymore.

                                                                        1. 2

                                                                          Browsers add, change, fix and break things all the time. Pages roughly follow what browsers support at the time. The living standard is just realistic about the living nature of the web.

                                                                          1. 2

                                                                            The living standard is just realistic about the living nature of the web

                                                                              But it doesn’t have to be that way. C is standardized, versioned, and while there are extensions they are also standardized. Compilers generally (looking at you, MSVC) advertise support for, and do in fact support, a given version. Most programming languages are like this. Python has no official standard, but it’s still very definitely versioned, and other implementations like, e.g., PyPy say ‘we support Python x.y’ and that means something. What does it mean to say ‘we are written in html5’ or ‘we support html5’? Nothing.

                                                                            1. 2

                                                                                W3C tried for over 15 years the approach of telling everyone they are wrong and should be ashamed of their non-standard markup, and that only led to W3C losing control of the HTML spec.

                                                                                In practice, the spec needs to define that you have to allow exactly 511 unclosed <font> tags, no more, no less, because that’s what IE did, and there are pages that rely on exactly this. Supporting HTML5 means supporting these things, which is much, much more meaningful than what “we support HTML4” meant, where the syntax was hand-waved as “just use sort-of SGML” and was so disconnected from reality that it meant nothing useful for browser vendors.

                                                                              1. 1

                                                                                That’s because HTML5 is too broad a term (and nebulous to boot since often times people mean HTML5 and CSS3 and ES6+). You can still make meaningful statements like “we support such-and-such attributes from HTML5”, or “we are compatible with Firefox 67”, because what you really care about is the set of features that are supported, not the version number.

                                                                                Besides, what’s the proposed alternative? Wait a couple years for the new HTML version to come out and then everybody implements and ships that? That’s been tried before and everyone hated it. That’s the reason evergreen browsers are such a big deal. Of course you can say, well we’ll implement what’s in the working drafts and not wait for the final publication to start implementing, but then you’re back to following a living document. Better to just make it explicit, which is what WHATWG does.

                                                                                1. 3

                                                                                  You can still make meaningful statements like “we support such-and-such attributes from HTML5”

                                                                                  Which version of their behaviour?

                                                                                  “we are compatible with Firefox 67”

                                                                                  Congratulations, you’ve implemented versioning! Now that wasn’t hard, was it?

                                                                                  Wait a couple years for the new HTML version to come out and then everybody implements and ships that? That’s been tried before and everyone hated it

                                                                                  I wasn’t really around when html4 was going on but why, exactly, did people hate that? It works fine in other languages.

                                                                                  Of course you can say, well we’ll implement what’s in the working drafts and not wait for the final publication to start implementing, but then you’re back to following a living document

                                                                                   No you’re not! Because if you do it this way, the browsers have to support multiple versions at once, just like the c compilers do. (Now, c never breaks backwards compat, so for them this is hardly an issue.) But now you can make drastic changes to html, and progress can actually be made more quickly! Html 2017 can break compatibility with html 2014 and be so much better than html 2014. But because there’s a version specifier at the top, browsers can correctly implement html 2014, 2017, and 2020-draft, and websites written in all 3 will still work. Standards will advance more quickly, and websites don’t break unless they choose to use a -draft specification, in which case they’re in no more danger of breaking than they are with the html5/WHATWG.

                                                                                  1. 1

                                                                                    Which version of their behaviour?

                                                                                    “xyz with support for abc advanced feature” is just another feature. If you look at any MDN page that’s how it’s treated; the browser support section will have a “basic support” column with additional columns for more advanced extensions added later.

                                                                                    Congratulations, you’ve implemented versioning! Now that wasn’t hard, was it?

                                                                                    I’m confused as to what the point is here, can you clarify? Might just be that the text is hard to read because of the lack of facial cues, body language, etc.

                                                                                    I wasn’t really around when html4 was going on but why, exactly, did people hate that? It works fine in other languages.

                                                                                     Because people weren’t willing to wait. Everybody was excited about the new shiny, but no one wants to hear about a technology that will be super exciting to use and then wait a full two or three years to use it. There’s also the problem of real-world implementations. If you design the standard in an ivory tower and then everybody implements it, how do you know it won’t be garbage to use in practice? Speaking as someone who worked (a little) on ActivityPub, it’s very easy to get in your head and think you have a great design that works, and then someone implements it and runs into all sorts of corner cases or problems or whatever that you didn’t consider. People actually using the technologies is still the best way we as a field know of to flush out design issues.

                                                                                     Now that I’m thinking about it, I think a big part of this is that in a language like C, you can already do most to all of what you want to do without the new version (I’m not familiar enough with C to comment in detail on how much easier or more expressive the newer extensions make it). But the web is so sandboxed that a lot of what gets added adds fundamental expressive power or otherwise fundamentally changes the platform. You can’t polyfill <audio>. You can’t polyfill <video> or <canvas> either.

                                                                                    Standards will advance more quickly, and websites don’t break unless they choose to use a -draft specification, in which case they’re in no more danger of breaking than they are with the html5/WHATWG.

                                                                                    That’s not true - they’ll be in far more danger because with the current HTML Living Standard one of the golden rules is to not break backwards compatibility. That’s why the HTML Living Standard has so many gross hacks in it (as discussed elsewhere in this thread); it’s all to preserve compatibility. Compatibility is broken very sparingly, and it requires lots of discussion, browser telemetry to see how much would break in the wild by the change, etc.

                                                                                    1. 1

                                                                                      Congratulations, you’ve implemented versioning! Now that wasn’t hard, was it?

                                                                                      I’m confused as to what the point is here

                                                                                      If you say “we’re pegged to firefox 67,” then ‘firefox 67’ is as much a version as ‘html 4’.

                                                                                      they’ll be in far more danger because with the current HTML Living Standard one of the golden rules is to not break backwards compatibility. That’s why the HTML Living Standard has so many gross hacks in it (as discussed elsewhere in this thread); it’s all to preserve compatibility. Compatibility is broken very sparingly, and it requires lots of discussion, browser telemetry to see how much would break in the wild by the change, etc.

                                                                                      I think you are missing my point. If a webpage specifies which version it wants, then versions are allowed to break compatibility and don’t need gross hacks or long discussion.

                                                                          2. 1

                                                                            I think that working within the standard is precisely the right thing here. I believe that WHATWG was the wrong approach (as seems evident from this chain of events mentioned here).

                                                                        1. 9

                                                                           In case someone didn’t read the entire article: uBlock and uBlock Origin are two different extensions. If you are using uBlock Origin then you are safe. In fact, uBlock Origin is the one you should be using; for more info, look up the difference between the two extensions.

                                                                          1. 1

                                                                            Just for context: uBlock Origin does not allow arbitrary filter lists to inject scripts, but instead provides a blessed library of scripts that a filter list can inject into a page.

                                                                            1. 1

                                                                               I would bet that several of these scripts are easily abused by an offensive filter list. Didn’t test it, though.

                                                                              1. 3

                                                                                 At the end of the day, you gotta trust somebody. There’s not enough time to only trust yourself.

                                                                          1. 17

                                                                            Yes, but there’s two traps to know about.

                                                                            1. Companies will often treat “doing good” as an employee benefit that substitutes for other benefits, like salary or good health plans. This may or may not be a price you’re okay paying.

                                                                               A price you should NOT be okay paying is trading “doing good” for “doing well”. Sometimes people will work at dysfunctional or abusive companies because they believe in the mission. That’s a quick route to burnout.

                                                                             2. Often the most meaningful work you could be doing is not the most interesting work you could be doing. In fact, the most helpful stuff is probably gonna be boring. You can probably do a huge amount of good by going to nonprofits and cleaning all the bloatware off their laptops. Or teaching them how to use Office templates. Or adding content to their WordPress.

                                                                            1. 2

                                                                              Companies will often treat “doing good” as an employee benefit that substitutes for other benefits, like salary or good health plans. This may or may not be a price you’re okay paying.

                                                                               Everything has a price, especially in business – “doing good” included. “Doing good” at the very least requires that the management/owners want to do that – and I believe having management/owners like that has a price. And sometimes merely the acts of doing good cost actual money as well.

                                                                              Now it might be that the business is doing so overwhelmingly well that the price is peanuts. That’s just not true for most businesses – and in fact you might say that there are market failures (not enough competition, cartels, corruption, whatever) if the business can throw money away like that.

                                                                              1. 2

                                                                                By business owners wanting to do good having a price, are you referring to the fact that they probably aren’t playing the game of business as optimally as their opponents?

                                                                                1. 1

                                                                                  Yes, and also that if they are both good and optimal, they will be either very rare or expensive.

                                                                              2. -1

                                                                                 You do exactly the same amount of good by going to a for-profit and cleaning all the bloatware off their laptops. You will probably do more good because for-profits need to satisfy customers to exist while nonprofits don’t actually have to achieve anything.

                                                                                1. 13

                                                                                  That’s an annoying half-truth.

                                                                                  It’s true that non-profits might not actually be improving the world. It’s false to assume that, by making a profit, a for-profit venture must actually be improving the world. Some for-profits produce things that are actively bad for their customers, like nuclear bombs and methamphetamine. Others serve their clients at the expense of others, like click fraud and fake IDs.

                                                                                  1. 11

                                                                                     It doesn’t have to be so extreme: any data-extraction platform like Facebook, Twitter or Google Search is actively impacting its users in a negative way. Even more so if it tries to condition the behavior of its users to maximize the time they spend on the platform.

                                                                                    1. 4

                                                                                       I intentionally picked extreme examples so that I wouldn’t have to argue about whether something is on the positive or negative side of a trade-off. For-profit companies exist that do not do good; arguing against that premise means arguing that meth either doesn’t sell or isn’t bad for you. Axe-grinding about ad-supported websites would have undermined the point by starting an argument about whether ads are evil or not (a controversial topic) rather than making the point more conclusively by using a less controversial example (meth is literally a neurotoxin).

                                                                                      1. 2

                                                                                        Is it still controversial though? I mean, ads in themselves are debatable but the exploitation in the attention-economy seems to be a mostly non-controversial problem. I mean, even Facebook had to tone down some of their most exploitative activities and admit they are having a negative impact on the social fabric.

                                                                                        Anyway, I get this is not the point and you’re right.

                                                                                    2. -5

                                                                                      a nuclear bomb is a very effective deterrent against invasion. It is clearly not ‘actively bad’ for their customers.

                                                                                      Methamphetamine makes you feel really good. Depending on your philosophical viewpoint, this might be a net good.

                                                                                      Or do you think you know better than the person who has decided to take meth into his own body?

                                                                                      1. 5

                                                                                        And… Of course someone wants to argue the benefits of methamphetamine.

                                                                                        Source: a Rebel & Divine volunteer

                                                                                        Yes, that stuff makes you feel good. It makes you feel so good, that when you run out of money, you’ll steal to get your fix. It’s so potent, that when you go into withdrawal and start sweating it out, the volunteer that hugs you gets high off your sweat. It’s a neurotoxin, as in taking it literally makes you dumber.

                                                                                         I do, in fact, believe that I know better than someone who takes meth. That stuff impairs your ability to rationally evaluate the trade-off surrounding the use of it.

                                                                                         The worst part about parents’ groups that lied about cannabis is that, in a classic “boy who cried wolf” scenario, they caused people to dismiss accurate warnings about drugs that really are that bad.

                                                                                        No, actually, the worst part is that a lot of people take street drugs to self-medicate for other, serious problems. The actual medical system failed them, and now they’re self-administering psychiatric drugs without a psychiatrist to try to keep them grounded.

                                                                                        1. 3

                                                                                          LibertarianLlama is a troll. I’m sure they’ll be making comments about social darwinism next – that happened before. Don’t waste your time!

                                                                                          1. 1

                                                                                            If you actually believe that please report to mods, etc. We shouldn’t have to have such mudslinging comments in an invite-only moderated community.

                                                                                            1. 2

                                                                                              Since when did factual comments become mudslinging? I’m just trying to save people some time and pointing out that a discussion with LibertarianLlama is unlikely to be productive. Here is LibertarianLlama’s previous comment from a discussion on the same topic (and that whole thread didn’t go anywhere constructive): https://lobste.rs/s/orbiuh/ask_is_your_work_meaningful#c_pekyua

                                                                                              1. 2

                                                                                                That comment has enough dark, factual truth in it mixed with the ideological beliefs that it doesn’t seem like trolling to me. Easy guess when I first showed up. Time passes with more uncertainty if aiming for accuracy. I keep thinking the Llama is doing some kind of long-term act that ties libertarian beliefs to every situation, even what most wouldn’t talk about, to see how it plays out from ridiculous to smart. It might be to push it, mock/attack it, or just the fun of it. Whether ideological or an act, I agree it’s non-productive to debate the person if it’s about just changing their ideology. They’re too vested into it.

                                                                                                 Whereas, discussions or debates might be productive if they are presented within their viewpoint with evidence. That’s happened before, too. It’s also a general principle that applies to discussions with anyone about strongly-held views. I’m not going with pure troll, though. Could be true, but this person’s comments fall into about three categories simultaneously that I know of. Also, it’s worth noting that the Llama, in the middle of all this crazy or ideological shit, occasionally drops some damned good points from the same perspective. So, I read them for either a good eye roll that refines my ability to combat libertarian falsehoods or what truth they teach/reinforce.

                                                                                          2. -1

                                                                                            Thanks for this perspective.

                                                                                            1. -2

                                                                                              that when you run out of money, you’ll steal to get your fix.

                                                                                              Same can be said of food and water.

                                                                                              I do, in fact, believe that I know better than someone who takes meth.

                                                                                               Why then can’t I just take all your money and claim that I know better than you what’s good for you? That having all that money impairs your judgement and you will be better off without it.

                                                                                              Alcohol is also a neurotoxin. Have you ever taken any?

                                                                                              1. 2

                                                                                                Same can be said of food and water.

                                                                                                The thing about alcohol and money kind of makes sense, but if you actually think the human dependency on food and water is analogous to meth addiction, then you are a moron.

                                                                                                1. -1

                                                                                                  Why?

                                                                                                  1. 2

                                                                                                    Because one provides essential nutrients to survive, the other you can survive without.

                                                                                                    1. 1

                                                                                                      The inherent assumption here then is that survival is good enough to justify thievery, but pleasure is not.

                                                                                                       This assumption is based on your genetic programming. Just because you want to be a slave to your own genes doesn’t mean others should.

                                                                                        2. 4

                                                                                          Many of the so-called needs that exist in the business sphere are idiotic and artificial, like “no one is raising my brand awareness on Twitter.”

                                                                                          1. 1

                                                                                            Yeah, a lot of what people work on is merely the result of capital’s imperative to increase value volume and velocity–essentially, to reproduce itself. That people are surprised time and time again that the result is not congruent with their (or any individual person’s, and many groups’) well-being or values always befuddles me. And under optimizations, misalignment gets amplified.

                                                                                            1. 0

                                                                                              Wow. It’s interesting to see something straight out of Marx expressed in a technical blog in 2019.

                                                                                              1. 2

                                                                                                Welcome to Lobsters, we are either Communists or Capitalists, worthy of our own rap battle.

                                                                                                1. 2

                                                                                                  Took me a couple of days to remember what I was reminded of from https://www.marxists.org/archive/marx/works/download/pdf/Economic-Philosophic-Manuscripts-1844.pdf

                                                                                                 Thus political economy – despite its worldly and voluptuous appearance – is a true moral science, the most moral of all the sciences. Self-renunciation, the renunciation of life and of all human needs, is its principal thesis. The less you eat, drink and buy books; the less you go to the theater, the dance hall, the public house; the less you think, love, theorize, sing, paint, fence, etc., the more you save – the greater becomes your treasure which neither moths nor rust will devour – your capital. The less you are, the less you express your own life, the more you have, i.e., the greater is your alienated life, the greater is the store of your estranged being. Everything which the political economist takes from you in life and in humanity, he replaces for you in money and in wealth; and all the things which you cannot do, your money can do. It can eat and drink, go to the dance hall and the theater; it can travel, it can appropriate art, learning, the treasures of the past, political power – all this it can appropriate for you – it can buy all this: it is true endowment. Yet being all this, it wants to do nothing but create itself, buy itself; for everything else is after all its servant, and when I have the master I have the servant and do not need his servant. All passions and all activity must therefore be submerged in avarice.

                                                                                                  1. 2

                                                                                                    Where is that quote from?

                                                                                        1. 1

                                                                                          A character is a fairly fuzzy concept. Letters and numbers and punctuation are characters. But so are Braille and frogs and halves of flags. Basically a thing in the Unicode table somewhere.

                                                                                          That’s not a “character.” That’s a “Unicode scalar.” https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/
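
                                                                                           To make the distinction concrete, here’s a minimal C sketch (the flag emoji and the byte-counting are illustrative, not from the linked article): what reads as one “character” is two Unicode scalars and eight UTF-8 bytes.

                                                                                               #include <stdio.h>
                                                                                               #include <string.h>
                                                                                               
                                                                                               /* Count Unicode scalar values in a UTF-8 string by skipping
                                                                                                * continuation bytes (those of the form 10xxxxxx). */
                                                                                               static size_t count_scalars(const char *s) {
                                                                                                   size_t n = 0;
                                                                                                   for (; *s; s++)
                                                                                                       if (((unsigned char)*s & 0xC0) != 0x80)
                                                                                                           n++;
                                                                                                   return n;
                                                                                               }
                                                                                               
                                                                                               int main(void) {
                                                                                                   /* U+1F1FA U+1F1F8: two regional-indicator scalars that render as one flag */
                                                                                                   const char *flag = "\xF0\x9F\x87\xBA\xF0\x9F\x87\xB8";
                                                                                                   printf("bytes: %zu, scalars: %zu\n", strlen(flag), count_scalars(flag));
                                                                                                   /* prints "bytes: 8, scalars: 2" */
                                                                                                   return 0;
                                                                                               }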

                                                                                          1. 2

                                                                                            Anecdotally I’ve heard that the NSA and GCHQ are not currently particularly concerned with fundamentally breaking the Tor network.

                                                                                            What a remarkably convenient thing for the NSA that people believe that!

                                                                                             I’ve long speculated that TLAs aren’t concerned with fundamental breaks because Tor is rotten to the core: it exists for pro-Western assets in foreign countries to communicate without local surveillance, over a network predominantly operated by those TLAs.

                                                                                            We know the NSA has supplied the DEA with information which they laundered into warrants via parallel construction. I wonder how many of those “poor opsec moments” that were described in court records were bootstrapped from intelligence gathered elsewhere.

                                                                                            1. 2

                                                                                              In an interview Keith Alexander said that he “wished all the bad guys were on one part of the internet”

                                                                                               Either a brilliant, subtle reference to using Tor, or reverse psychology regarding using Tor; impossible to tell which.

                                                                                              1. 3

                                                                                                I like to speculate what the NSA would do if it did have control of Tor.

                                                                                                The fundamentals behind onion routing are logical - Tor isn’t designed against Five Eyes running the damn network. I just looked on the Tor project stats page and there are around 6k nodes, which seems feasible for one of those agencies to attack on their own. But against a country we want to destabilize? Probably holds up well enough.

                                                                                                But you want some real inbound traffic to your honeypots to ensure your agents can hide in the noise. So you allow the nasty stuff to come in; that’ll ensure a consistent base of traffic. And the presence of long lived criminal elements plus support from the tech world makes non-criminals interested in privacy consider it — surely if it wasn’t totally secure it’d have been shut down by now!

                                                                                                And when the open air drug and gun markets get a little too hairy, you feed a name to your colleagues in the FBI/ATF/DEA. Oops, looks like that hidden site that hosted illegal pornography had a handy PHPBB vulnerability we just happened to find. And maybe you burn a TorBrowser 0-day to sell the ruse and launder the evidence you need in court against the users. From the outside looking in it reinforces confidence in the network — if the FBI has resorted to burning 0-days, that means the network has to be safe!

                                                                                              2. 2

                                                                                                I think people forget sometimes that the US Government and specifically the US Military (Office of Naval Research) was responsible for funding the creation of TOR and is still responsible for quite a bit of its funding. Yes, I’m sure TOR is inconvenient for parts of the USG occasionally, but on the balance, they prefer it exists.

                                                                                                1. 1

                                                                                                  Anecdotally I’ve heard that the NSA and GCHQ are not currently particularly concerned with fundamentally breaking the Tor network.

                                                                                                  What a remarkably convenient thing for the NSA that people believe that!

                                                                                                  If I wanted to convince people that I hadn’t already compromised something, or wasn’t actively trying to, I would go around telling them that it is completely uninteresting to me.

                                                                                                  1. 4

                                                                                                    I’d just as soon believe that 9/11 was a false flag.

                                                                                                    It’s easy enough to spell out why it would be in the their best interests to fake it, and it’s easy enough to figure out how it could have been faked. But that’s it. Nobody has actually disproven the mainstream narrative, only attempted to prove that the alternative narrative is not completely impossible.

                                                                                                1. 23

                                                                                                   I spent close to 8 weeks over winter researching an article exactly along this line, and so it was with some panic that I read this attempt at characterizing the issue. Fortunately for that effort, this paper omits the majority of inseparably related considerations and relies too heavily on narrative for its conclusion.

                                                                                                   Fork is a nuanced beast – it most certainly is a filthy hack, and I set out to write (as they have) about how much I hated it, but the truth isn’t quite as absolute as presented here. Fork is deeply baked into UNIX, much in the same way processed beach dirt is baked into all our CPUs. Straight removal from any particular application isn’t possible for many reasons, not least since it regularly plays a dual role as a communication mechanism (including in brand new designs like systemd), a role that requires deep rearchitecting to remove.

                                                                                                  It has also deeply influenced surrounding UNIX APIs. For example, it is not possible to fully transition into a Linux container namespace without a double fork, or fully detach from a parent process on any UNIX (that I know of) without the same. Consider just that alone – how the classic daemonize() would be implemented without fork(), and what new and exotic interfaces / bending of process management rules would be needed to provide an equivalent interface. No suggestion of ‘removing’ fork is worth taking seriously without addressing problems like these.
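
                                                                                                   For reference, a minimal sketch of the classic double-fork daemonize() just mentioned (error handling omitted; this is the textbook shape rather than any particular implementation):

                                                                                                       #include <stdlib.h>
                                                                                                       #include <unistd.h>
                                                                                                       #include <fcntl.h>
                                                                                                       #include <sys/stat.h>
                                                                                                       
                                                                                                       static void daemonize(void) {
                                                                                                           if (fork() > 0) exit(0);   /* first fork: parent exits, child is orphaned */
                                                                                                           setsid();                  /* child becomes session leader, drops the controlling tty */
                                                                                                           if (fork() > 0) exit(0);   /* second fork: the grandchild is not a session leader,
                                                                                                                                         so it can never reacquire a controlling terminal */
                                                                                                           umask(0);
                                                                                                           chdir("/");
                                                                                                       
                                                                                                           /* redirect stdio to /dev/null */
                                                                                                           int fd = open("/dev/null", O_RDWR);
                                                                                                           dup2(fd, STDIN_FILENO);
                                                                                                           dup2(fd, STDOUT_FILENO);
                                                                                                           dup2(fd, STDERR_FILENO);
                                                                                                           if (fd > STDERR_FILENO) close(fd);
                                                                                                       }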

                                                                                                  The UNIX incantation of fork was a hack that borrowed its name from an only slightly more formalized concept (through a series of Chinese whispers), all of which predated the actualization of, and were hopelessly inadequate at addressing the problem they set out to solve – Conway’s fork predates the arrival of contemporary SMP architecture by more than 20 years.

                                                                                                  1. 7

                                                                                                    I think it is reasonable to say, for example, all uses of fork that can be replaced by posix_spawn, should be.
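
                                                                                                     For anyone who hasn’t used it, a minimal sketch of what that replacement looks like (the spawned command and the error handling are illustrative only):

                                                                                                         #include <spawn.h>
                                                                                                         #include <stdio.h>
                                                                                                         #include <sys/types.h>
                                                                                                         #include <sys/wait.h>
                                                                                                         
                                                                                                         extern char **environ;
                                                                                                         
                                                                                                         int main(void) {
                                                                                                             pid_t pid;
                                                                                                             char *argv[] = { "ls", "-l", NULL };
                                                                                                         
                                                                                                             /* No fork(): the parent's address space is never duplicated. */
                                                                                                             int rc = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
                                                                                                             if (rc != 0) {
                                                                                                                 fprintf(stderr, "posix_spawnp: error %d\n", rc);
                                                                                                                 return 1;
                                                                                                             }
                                                                                                             int status;
                                                                                                             waitpid(pid, &status, 0);
                                                                                                             return 0;
                                                                                                         }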

                                                                                                    1. 7

                                                                                                       That doesn’t really address many of the listed complaints. The problem, as stated, is that fork requires duplicating state, which makes microkernels impossible, etc. How is posix_spawn defined to work?

                                                                                                      All process attributes, other than those influenced by the attributes set in the object referenced by attrp as specified above or by the file descriptor manipulations specified in file_actions, shall appear in the new process image as though fork() had been called to create a child process and then a member of the exec family of functions had been called by the child process to execute the new process image.

                                                                                                      1. 2

                                                                                                         For sure! That’s already pretty mandatory in many kinds of apps with huge heaps, but this paper advocated removing the OS interface entirely, i.e. resorting to slow and racy emulation like Cygwin’s.

                                                                                                        1. 2

                                                                                                           Well, the paper also pointed out that in 2016, 1304 Ubuntu packages used fork, but only 41 used posix_spawn. This is indeed an extremely sad state of affairs, and I sympathize with the author that education is to blame.

                                                                                                          1. 4

                                                                                                            I suspect this is because for many classes of application fork() is actually fine, and is also a much simpler interface than posix_spawn.

                                                                                                            1. 2

                                                                                                              A lot of the criticism of “fork” is justified, but a lot also reminds me of what I saw in Linux kernel code years ago - many efforts to “clean up” code by people who did not understand the tradeoffs and had nothing valuable to contribute.

                                                                                                            2. 2

                                                                                                              glibc should just add __attribute__((deprecated)) to the declaration of fork() and we can revisit in 10 years.
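
                                                                                                               Roughly what that would look like in the header (a sketch only, not actual glibc source; the message string is made up):

                                                                                                                   #include <sys/types.h>
                                                                                                                   
                                                                                                                   /* Hypothetical declaration: every caller of fork() would then get a
                                                                                                                    * -Wdeprecated-declarations warning at compile time. */
                                                                                                                   __attribute__((deprecated("consider posix_spawn() instead")))
                                                                                                                   pid_t fork(void);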

                                                                                                              1. 3

                                                                                                                and we can revisit in 10 years.

                                                                                                                And see that not much has changed….

                                                                                                        2. 4

                                                                                                           Back then, there was fork on UNIX vs spawn on VMS vs the language-based or thread-like approaches Brinch Hansen was doing. My research showed spawn did things like set privileges, allow CPU/RAM metering, etc. Basically, the kind of stuff clouds added to Linux decades later. So, at least they’d argue spawn was better. Language-based methods are coming back with less overhead and more flexibility than the UNIX or VMS process models. Worse isolation, but hardware attacks are making me wonder how much it matters. fork is objectively worse than many ancient and new designs. It’s legacy cruft now.

                                                                                                           Compared to systems then and shortly after, it does look like fork was a hacked-together design driven primarily by the hardware considerations of their underpowered machine. Maybe combined with their preference for minimalism. We know that quite a few things are in there solely due to them being on a PDP-11, though. So, that should be a strong consideration for any other design element that seems unnecessarily limited or prone to problems. My favorites being the lack of prefix strings and incoming data flowing toward the stack pointer rather than away from it (e.g., MULTICS).

                                                                                                          1. 6

                                                                                                            were hopelessly inadequate at addressing the problem they set out to solve

                                                                                                            And yet, 40 years later, it is the basis for 90% of the infrastructure software in the world. I’d like to create something that “hopelessly inadequate”.

                                                                                                            1. 3

                                                                                                               It’s fair to say most infrastructure software uses fork to start up, but the majority of modern client/server infrastructure thankfully does not rely on it for data-plane processing – any software due to be exposed to Internet-sized traffic must, by necessity, use event-driven IO or some form of threading (or a mixture of both); these are competing multiprocessing paradigms, and are required to achieve anything close to sensible performance on modern hardware.

                                                                                                               The UNIX model was conceived at a time when the intended notion of a ‘client’ was literally an electronic typewriter attached to one of a fixed number of serial lines. I hope you can imagine the horror on the faces of its designers were you to explain that this model was expected to cope with any subset of humans, anywhere on the planet, at any random moment demanding one process dedicated to each of them.
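
                                                                                                               To illustrate the contrast with one process per client, here is a compressed sketch of the event-driven shape: one process multiplexing every connection with poll(). Error handling, partial writes and the port number are all hand-waved.

                                                                                                                   #include <poll.h>
                                                                                                                   #include <unistd.h>
                                                                                                                   #include <netinet/in.h>
                                                                                                                   #include <sys/socket.h>
                                                                                                                   
                                                                                                                   int main(void) {
                                                                                                                       int lfd = socket(AF_INET, SOCK_STREAM, 0);
                                                                                                                       struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(8080) };
                                                                                                                       bind(lfd, (struct sockaddr *)&addr, sizeof addr);
                                                                                                                       listen(lfd, 128);
                                                                                                                   
                                                                                                                       struct pollfd fds[1024] = { { .fd = lfd, .events = POLLIN } };
                                                                                                                       int nfds = 1;
                                                                                                                   
                                                                                                                       for (;;) {
                                                                                                                           poll(fds, nfds, -1);
                                                                                                                           for (int i = 0; i < nfds; i++) {
                                                                                                                               if (!(fds[i].revents & POLLIN))
                                                                                                                                   continue;
                                                                                                                               if (fds[i].fd == lfd && nfds < 1024) {
                                                                                                                                   /* New client: no fork(), just one more entry in the poll set. */
                                                                                                                                   fds[nfds++] = (struct pollfd){ .fd = accept(lfd, NULL, NULL),
                                                                                                                                                                  .events = POLLIN };
                                                                                                                               } else if (fds[i].fd != lfd) {
                                                                                                                                   char buf[512];
                                                                                                                                   ssize_t n = read(fds[i].fd, buf, sizeof buf);
                                                                                                                                   if (n <= 0) { close(fds[i].fd); fds[i] = fds[--nfds]; i--; }
                                                                                                                                   else        { write(fds[i].fd, buf, (size_t)n); /* echo back */ }
                                                                                                                               }
                                                                                                                           }
                                                                                                                       }
                                                                                                                   }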

                                                                                                              1. 3

                                                                                                                I just wanted to point out that the “hopelessly inadequate” UNIX fork/exec design was a hugely successful design. Not only is it still the basis for a wide range of widely used software, but it was designed in a way that evolution to things like rfork and clone was practical.

                                                                                                                BTW: “event driven” software seems to me to have been motivated by the shortage of programmers able to understand multi-process/multi-thread design more than anything else.

                                                                                                                1. 5

                                                                                                                  That’s what I call the COBOL and VB6 argument. A lot of people used it. It’s still around. So, it was a great design. Alternatively, they spread for business, social, and ecosystem reasons as much as for design. Then, design can’t take credit for popularity. Popularity can’t excuse the bad or outdated elements of design.

                                                                                                                  Same with fork. In this case, UNIX got wildly popular with a huge ecosystem. It trains people to use available features like fork. It doesn’t tell them about lots of old and new ways to do the job better. So, they use the thing that they were taught to which is there and works well enough. When running into its problems, they use some other non-UNIX-like approach such as monolithic, multi-threaded apps or event-driven design.

                                                                                                                  1. 4

                                                                                                                     But those were very successful designs. If they have become outdated, it’s good to try to understand what made them work so well for so long. Programmers have a really bad attitude about this. You are not going to find civil engineers who sneer at Roman aqueducts or aviation engineers who think the DC-3 was the product of stupid people who just didn’t get it. This is why programmers keep making the same errors.

                                                                                                                    1. 4

                                                                                                                       Civil engineers would behave the same way if the difference between Roman aqueducts and modern aqueducts were as invisible as the difference between pieces of software is. It’s not that programmers are a different kind of people: it’s that noticing the differences, and hence needing explanations for the differences, requires much more effort.

                                                                                                                       Seeing the ancient aqueduct immediately reminds you of the materials they had to work with, minor general historical knowledge tells you of the inferior technology/tools they had to work with, and minor specific historical subject knowledge (of science, physics, civil engineering) tells you of the lack of knowledge they had to work with. That makes the primitive nature of the Roman aqueduct comprehensible and even impressive.

                                                                                                                      Seeing a COBOL program doesn’t demonstrate any reason why it should seem so primitive. Knowledge of general history tells you the world was pretty much the same 40 years ago, which doesn’t explain anything. Even with knowledge of the history of computing it remains easy to underappreciate the vast lack of knowledge, experiences and experiments (such as COBOL itself) that they had to work with and that we’ve been able to learn from.

                                                                                                                      So I don’t think blaming ‘programmers’ as lacking historical sensitivity is helpful or fair. Which doesn’t mean that we shouldn’t try to find ways of increasing the historical sensitivity of our craftsfolk, because it would indeed be really helpful if more of them learned from mistakes of the past or perhaps even restored something accidentally lost.

                                                                                                                  2. 3

                                                                                                                    Tell that to nginx, which does both.

                                                                                                                    1. 1

                                                                                                                       Well, event programming is embedded in web programming just as fork/exec is embedded in unixes. If you want a widely used web server, you don’t have much choice. But I take back what I wrote, which was unintentionally snotty about event-driven programming. Sorry.

                                                                                                              2. 3

                                                                                                                Are the problems with fork() not mostly implementation issues? I get what you’re saying, but the existence of e.g. spawn() suggests that the concept is sound.

                                                                                                              1. 7

                                                                                                                 I couldn’t not post something when I read the title. I’m currently waiting for a test which takes about 15 minutes. Thing is, I have to compare the output to values defined in a CSV, and the biggest chunk of the time is waiting for the input to be loaded. Usually, there are a lot of errors immediately, but I only see this after 15 minutes. Then, I have to open the Excel file which computes the values used for checking (takes ~10 minutes), and find the corresponding computation in both the program and the Excel file. When I find the error, I have to fix it either in the code or in the Excel file. When it’s in the Excel file (it usually is), I have to run it (takes about 20 minutes), update the CSV in some specific location, and run a program to put the new values in the database. This, again, takes about 20 minutes.

                                                                                                                So all in all, I spend about an hour for every error I find. This can be (and often is) something as simple as a typo in the Excel file. I have complained about this process before, but I don’t have the time/authority to change this (there are about 15-20 people working on this project, so I can’t just change the workflow if it’s not a task that is assigned to me). I don’t know why others think this is acceptable. Maybe because I’m usually the one ending up doing this tedious task, because I’m the ‘technical guy’ in my team.

                                                                                                                This all used to really drag me down to the point of taking my work home and having a bad mood because of it. Now I’m a bit apathetic. If they don’t fix this, they’ll just pay me to do dumber work and be less productive. Their loss.

                                                                                                                1. 16

                                                                                                                  And people wondered why I favored unit tests over integration tests in Working Effectively with Legacy Code.

                                                                                                                  1. 2

                                                                                                                     My favorite was a discussion about a review of your book the other day, where someone complained that unit tests cause tests to be too fine-grained, and that the logic of your solution creeps into your tests. Seems like that’s a good litmus test for complexity getting out of hand.

                                                                                                                     My work code base mingles what are clearly integration tests with unit tests, and as a result no one runs the full suite before pushing new code, which means master has a 50/50 chance of being broken at any given time. It’s terrible.

                                                                                                                  2. 3

                                                                                                                    Consider yourself lucky – the test suite on a project I worked on 10 years ago took over 6 hours to run (the developers didn’t consider the speed of the tests when writing them – since the tests are “not in the fast path”).

                                                                                                                    One thing we did do well is make sure that each test in the suite tested one and only one thing. If the suite failed, it was possible to re-run just one of the failing tests (which would take 10-20 seconds), rather than re-running the entire suite. In your case, it sounds like the developers of the tests might benefit from AAA (Arrange, Act, Assert) – which applies mostly to unit tests, but can also be used in integration tests.
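
                                                                                                                     For readers who haven’t met the acronym, a tiny illustrative test in that shape (the function under test and all of the names are made up):

                                                                                                                         #include <assert.h>
                                                                                                                         
                                                                                                                         /* Hypothetical function under test. */
                                                                                                                         static int clamp(int value, int lo, int hi) {
                                                                                                                             return value < lo ? lo : (value > hi ? hi : value);
                                                                                                                         }
                                                                                                                         
                                                                                                                         static void test_clamp_caps_value_at_upper_bound(void) {
                                                                                                                             /* Arrange: set up exactly the inputs this one case needs. */
                                                                                                                             int value = 12, lo = 0, hi = 10;
                                                                                                                         
                                                                                                                             /* Act: perform the single action under test. */
                                                                                                                             int result = clamp(value, lo, hi);
                                                                                                                         
                                                                                                                             /* Assert: one observable outcome, so a failure points at one thing. */
                                                                                                                             assert(result == 10);
                                                                                                                         }
                                                                                                                         
                                                                                                                         int main(void) {
                                                                                                                             test_clamp_caps_value_at_upper_bound();
                                                                                                                             return 0;
                                                                                                                         }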

                                                                                                                    1. 2

                                                                                                                      I’m talking about an individual test, which takes about 15 minutes (we have some which take up to half an hour). If you run all tests, it takes like 4 hours, I guess. So it often happens that you run tests before pushing to develop, and while the tests are running, someone pushes to develop, so you have to merge and run the tests again (and hope that no one pushes this time). Or, you just run some important tests and push. Then, if you break develop, you’ll know after 4 hours (and of course, people will have pulled from and pushed to develop).

                                                                                                                      1. 3

                                                                                                                        That’s solved by having a good integration flow, like bors-ng or zuul, in place. If your patch breaks the test suite, your patch should not land.

                                                                                                                    2. 3

                                                                                                                       I feel like there has to be a way to take advantage of everyone’s apathy and disinterest in that task: artificially increase its cost over time (pretend the spreadsheet takes an hour to load, say), and then try to do something about it in the newly created dead time. It’ll be slow work, but incremental improvements do lead places.

                                                                                                                      1. 3

                                                                                                                         Start getting paid by the hour, detach yourself from the process, and see your joy and happiness rise dramatically.

                                                                                                                        You’ll be absolutely delighted to know that tests have slowed down, as it allows you an extra cup of coffee/tea and another round of play with your doggo (or catto) :P

                                                                                                                        1. 1

                                                                                                                          Usually, there are a lot of errors immediately, but I only see this after 15 minutes.

                                                                                                                          Surely there’s a way to make your test framework fail fast? If it’s a case of loading everything into memory (which I doubt), then again surely there’s some streaming library for your language.

                                                                                                                          1. 1

                                                                                                                             Jup, it’s not even very hard. It’s just not a priority, so it’s never fixed. Also, most team leads don’t usually do the job of running and fixing these tests, so they don’t really feel the pain of having slow tests.

                                                                                                                            I estimate that many steps in the process do about 10 to 1000 times the strictly necessary work:

                                                                                                                            • If you want to compare tests results you have to obtain output from a big excel file, that is about 60 MB big (even though you just need one sheet).
                                                                                                                            • If you load new input, you have to reload all the output (100s of MBs)
                                                                                                                            • If you do a test, you first load all the values into memory
                                                                                                                        1. 1

                                                                                                                          Some thoughts:

                                                                                                                          Git is also incredibly opinionated about how you should work with e-mail: one mail per commit, nicely threaded, patches inline.

                                                                                                                          This is because it was made for the kernel project. You can easily modify this behavior with scripting.

                                                                                                                          Collaboration includes receiving feedback, incorporating it, change, iteration

                                                                                                                           With e-mail this is, well, via e-mail, which anyone can set up as they want (and likely already have).

                                                                                                                          I wasn’t subscribed at the time, and the mailing list software silently dropped every mail

This is incredibly bad. I really think mailing list software and mail servers are to blame: if you aren’t subscribed, you should get a confirmation of success or failure. If your mail is classified as spam, it should not be delivered, and you should get a notice back saying so.

                                                                                                                          Having to subscribe to a list to meaningfully contribute is also a big barrier: not everyone’s versed in efficiently handling higher volumes of e-mail (nor do people need to be).

                                                                                                                          Again I think this is the mailing list software’s fault. Most are ancient and are not designed with this workflow in mind. Ideally you should have subscription options (only get updates to your threads, or threads you are active in, etc) with sane defaults.

                                                                                                                          In all of these cases, I had no control. I didn’t set the mailing lists up, I didn’t configure their SMTP servers. I did everything by the book, and yet…

                                                                                                                          Not that you are wrong at all, but I feel the need to add that it is also true about forges!

                                                                                                                          Patches never arrive out of order and with delays

If patches are prefixed with [PATCH v1 0/7], the mailing list should queue them up to guarantee in-order delivery, without delays between them.
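
As a rough illustration (hypothetical list-side code in Python; only the subject-prefix convention is real), the queueing could look something like this:

    import re
    from collections import defaultdict

    # "[PATCH v1 3/7] ..." -> (version, index, total); index 0 is the cover letter
    SERIES_RE = re.compile(r"\[PATCH(?: v(\d+))? (\d+)/(\d+)\]")

    pending = defaultdict(dict)   # (sender, version, total) -> {index: message}

    def on_incoming(sender, subject, message):
        m = SERIES_RE.search(subject)
        if not m:
            deliver([message])                      # not part of a series
            return
        version, index, total = m.group(1), int(m.group(2)), int(m.group(3))
        series = pending[(sender, version, total)]
        series[index] = message
        # assumes a 0/N cover letter is always sent, so a complete series is N+1 mails
        if len(series) == total + 1:
            deliver([series[i] for i in sorted(series)])
            del pending[(sender, version, total)]

    def deliver(messages):
        ...                                         # hand off to the list, in order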

                                                                                                                          have easy access to all discussions, all the commits, all the trees, from the comfort of your IDE

                                                                                                                          I agree this is better than looking through your list archives or online ones. But there could be a tooling solution for that.

                                                                                                                          No need to care about SMTP, formatting patches, switching between applications and all that nonsense.

                                                                                                                          Forges make the boring, tedious things invisible.

                                                                                                                          Tooling can be distributed within the repository if the project chooses this workflow.

I think, all in all, the problems you describe so well are the fault of (1) a lack of appropriate software to assist the workflow, and (2) a lack of effort to support the workflow on the maintainers’ side. Ideally I would like a forge/mailing-list hybrid that supports both workflows with the features I suggested above, because not everyone is a power user, but most people eventually turn into one.

                                                                                                                          1. 4

                                                                                                                            This is because it was made for the kernel project. You can easily modify this behavior with scripting.

                                                                                                                            Tell that to a newbie, and see them run away never to be seen again.

                                                                                                                            If you are classified as spam, the mail should not be delivered and thus get a notice back.

Alas, that’s not how many systems work in practice. It would be nice if they did, but they don’t, and I have zero control over that.

                                                                                                                            Not that you are wrong at all, but I feel the need to add that it is also true about forges!

The difference is that I have not yet come across a forge that let me submit an invalid PR or issue, sat on it for hours, and (at best) mailed me a response much later. I have had e-mail issues countless times, on the other hand.

If patches are prefixed with [PATCH v1 0/7], the mailing list should queue them up to guarantee in-order delivery, without delays between them.

                                                                                                                            No list does that currently, as far as I’m aware. Probably because it would be so very easy to abuse it: make sure some of the series is spammy, so it gets held up, or even rejected, and never send a good replacement. The mailing list keeps the rest queued, wasting resources. You can, of course, combat this by expiring stuff after a while, or building various abuse prevention methods into the mailing list software, but then we’re building workarounds again, instead of using purpose-built tools which don’t even have this problem.

Sending patches as attachments would solve 90% of the problems listed in the blog post, yet core git doesn’t support that.

                                                                                                                            But there could be a tooling solution for that.

Could be, but there isn’t, even though e-mail and the desire to use it for All Kinds Of Stuff have existed for far longer than the Forges. Yet no one built anything like the forges on top of e-mail until Sourcehut, very recently.

                                                                                                                            Tooling can be distributed within the repository if the project chooses this workflow.

Uhm, yeah, no. You’ll never be able to support IDE integration at the level of magit/forge (and many other integrations) that way. You may be able to support an IDE or two. With a Forge that has an API, IDEs can support the Forge, and have it work for all projects using that forge. Resources far better spent.

                                                                                                                            1. 1

Uhm, yeah, no. You’ll never be able to support IDE integration at the level of magit/forge (and many other integrations) that way.

                                                                                                                              Yes I know, I was talking about e-mail. :)

Your other points can be summed up as “no software that does this exists”, and this was exactly my point: it’s basically the lack of appropriate and user-friendly tooling that makes e-mail version management so difficult for unaccustomed users. This is a problem the community should solve. I’m not sure if the git maintainers would accept such patches, but it’s worth trying.

                                                                                                                            2. 3

                                                                                                                              What you are describing is not a bad thing, but you’re not really doing email at that point. You’re designing a Code Forge Peering Protocol, and you just happen to have built it on top of SMTP.

                                                                                                                              I’ve been horribly tempted to do just that, by taking git-am/sr.ht’s email behavior and designing a forge that speaks the same protocol but bridges it with a pull request based web UI. You could fork a repository from any git:// URL, submit a “pull request” by sending email to whichever email list the project uses, and we would provide an email address for you that’s only used for code review (and, thus, only accepts replies, not unsolicited messages, because we don’t want to do anti-spam heuristics if we can avoid it). All of this could be done entirely through your browser, though you could clone your fork and do local work exactly like GitHub does. The only part where I’d have to do even remotely novel work would be in providing a good UI for git rebase -i, so that you could appropriately convert your in-progress commits into a good patchset that the ever-picky project owner would accept, again without forcing you to touch the git CLI.

                                                                                                                              And since we directly know that all incoming unsolicited emails must be patchsets, we could both do the queueing work that you describe to ensure that patchset submission is atomic, and we could reject 99% of incoming spam instantly.
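
Not a finished design, just a rough sketch (Python standard library; the markers checked are only examples) of the kind of patchset test that becomes cheap once every unsolicited mail is known to be a patch:

    from email import message_from_bytes
    from email.policy import default

    def looks_like_patchset(raw: bytes) -> bool:
        msg = message_from_bytes(raw, policy=default)
        body = msg.get_body(preferencelist=("plain",))
        text = body.get_content() if body else ""
        # git format-patch output carries a diff and the usual markers
        has_diff = "diff --git " in text or "\n--- a/" in text
        has_marker = msg.get("Subject", "").startswith("[PATCH") or "Signed-off-by:" in text
        return has_diff and has_marker

    # Unsolicited mail to a review-only address that fails this check could be
    # rejected outright, which is where the "99% of spam" figure comes from.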

                                                                                                                              Alas, not nearly enough hours in the day to build everything that I’m horribly tempted to build.

                                                                                                                              1. 1

                                                                                                                                providing a good UI for git rebase -i, so that you could appropriately convert your in-progress commits into a good patchset that the ever-picky project owner would accept, again without forcing you to touch the git CLI.

                                                                                                                                Wait, do existing web-based git management things (git{lab,hub}, sr.ht, etc?) provide a way to interactively rebase? I think having everything else you proposed would be amazing.

                                                                                                                                1. 1

Hm, I don’t think that’s possible on the web, since you have no way to resolve conflicts unless you start up a web editor. However, you can do it with git “clients”, i.e. wrappers around the command line.
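
For example, a wrapper can drive the rebase without any terminal interaction by pointing GIT_SEQUENCE_EDITOR at a script. A minimal sketch (Python, with a GNU sed one-liner standing in for the real UI; the squash-everything policy is only an example):

    import os, subprocess

    def squash_branch(upstream="origin/main"):
        # git rebase -i normally opens an editor on the todo list; pointing
        # GIT_SEQUENCE_EDITOR at a command edits it non-interactively instead.
        # Here every pick after the first becomes a squash.
        env = dict(os.environ,
                   GIT_SEQUENCE_EDITOR="sed -i '2,$ s/^pick/squash/'")
        subprocess.run(["git", "rebase", "-i", upstream], env=env, check=True)

    # A real UI would generate the todo list from the user's choices instead of a
    # fixed sed script, and would still need a story for conflict resolution.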

                                                                                                                                  1. 1

Yeah, I didn’t think so, which is why I was a bit confused to see the GP basically say “I could implement everything, but I cannot implement this one thing that no one else has, so I won’t implement any of it” (my rough paraphrasing).

                                                                                                                                2. 1

                                                                                                                                  You’re designing a Code Forge Peering Protocol

                                                                                                                                  Yeah, just like mailing lists are a discussion forum on top of SMTP.

                                                                                                                                  If you ever want to start this thing together send me a message :).

(By the way, I have a tool like your ammonia project. I use it as a filter for the HTML view in a CLI MUA I’m working on.)

                                                                                                                              1. 2

What’s been the appeal of Medium? I don’t understand why so many people switched over to it. I must be missing something.

                                                                                                                                1. 5

The reading experience didn’t use to suck. Also, it has a built-in monetization system.

                                                                                                                                  1. 1

                                                                                                                                    “Built-in monetization system” aka you work for them now.

                                                                                                                                1. 1

Can’t provide a screenshot, but the text is truncated in Mobile Safari, so we can’t read the end of the article.

                                                                                                                                  1. 1

                                                                                                                                    @pims @dstaley

                                                                                                                                    After fiddling around with it in Android Chrome, I have a version that works in there (as well as Firefox and Edge). Does it act correctly in Mobile Safari now?

                                                                                                                                    1. 1

                                                                                                                                      It did fix it for me on mobile safari. Thanks!

                                                                                                                                  1. 11

I like the argument about the intuitiveness of hashtagging and linking as a layer on top of plaintext, but couldn’t the same be done with bolding, for example **bold**, i.e. with the markup not stripped?

                                                                                                                                    1. 9

                                                                                                                                      Good idea. You could do this with other formatting too, such as making # headers bigger while still showing the #s.

                                                                                                                                      You can see an example of this style of formatting in the screenshot of the Mou Markdown editor on its home page. The editor pane on the left shows some of this hybrid formatting. (To try it yourself, download the open-source MacDown editor.)

                                                                                                                                      Taken to its conclusion, this style of formatting is just really good syntax highlighting.
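
A toy sketch of what that highlighting could look like, assuming Python emitting HTML (the class name is made up): the style is rendered, but the characters are never removed, so copy/paste still round-trips.

    import html, re

    def highlight(text):
        # Escape first, then wrap the markup without removing it.
        out = html.escape(text)
        # **bold**: wrap the whole thing, asterisks included
        out = re.sub(r"\*\*(.+?)\*\*", r"<strong>**\1**</strong>", out)
        # # heading: make the line bigger but keep the leading #s visible
        out = re.sub(r"^(#{1,6} .*)$", r'<span class="md-heading">\1</span>',
                     out, flags=re.M)
        return out

    print(highlight("# Notes\nThis is **important**."))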

                                                                                                                                      1. 5

                                                                                                                                        and a really nice thing about it is you can copy and paste it just fine!

                                                                                                                                      2. 4

That’s possible, but it still has the problem that you can accidentally invoke the markdown syntax. Let’s say, for example, that you want to place a shruggie onto a platform like this:

                                                                                                                                        If you do not implement escape characters at all, then you wind up with ¯\_(ツ)_/¯ where the anatomy is all visible, but the face is tilted sideways.

                                                                                                                                        If you do implement escape characters, and you hide the escape characters, then you wind up with ¯_(ツ)_/¯ the classic “missing arm” broken shruggie.

If you implement escape characters, and show them, then you wind up with a correct ¯\_(ツ)_/¯. This is the outcome we want, but that policy is NOT generally good. I had to use escape characters extensively on this site, and I would not have benefited from such a policy.
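
To make the three policies concrete, here is a toy _emphasis_ formatter (Python; not any real platform’s rules) that reproduces all three outcomes:

    import re

    SHRUGGIE = r"¯\_(ツ)_/¯"        # as typed, with the first underscore escaped

    def render(text, escapes=True, show_escapes=False):
        # toy formatter: _..._ becomes emphasis
        placeholder = "\0"
        if escapes:
            text = text.replace(r"\_", placeholder)      # protect escaped underscores
        text = re.sub(r"_(.+?)_", r"<em>\1</em>", text)
        if escapes:
            text = text.replace(placeholder, r"\_" if show_escapes else "_")
        return text

    print(render(SHRUGGIE, escapes=False))       # ¯\<em>(ツ)</em>/¯   tilted face
    print(render(SHRUGGIE))                      # ¯_(ツ)_/¯           missing arm
    print(render(SHRUGGIE, show_escapes=True))   # ¯\_(ツ)_/¯          intact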

                                                                                                                                        1. 3

                                                                                                                                          You can accidentally invoke the hashtag syntax on Twitter, by using the number sign in front of a number as per normal (see post #1). All the rest of this is just a matter of degree.

                                                                                                                                          1. 3

                                                                                                                                            You can accidentally write an unintentional url, too. Twitter also makes it hard to write a url that will appear as text without mangling and truncation, so even knowing what will happen may not help.

                                                                                                                                          2. 2

                                                                                                                                            How about (sic)? I imagine it as an escape mechanism which nullifies the special formatting of whatever directly precedes it. And, it is already considered “transparent, not part of the text” by most English readers. (Does something like it exist for non-English readers?)

                                                                                                                                            So:

                                                                                                                                            The robot prefers to be addressed as 345ART0.

                                                                                                                                            vs.

                                                                                                                                            The robot prefers to be addressed as 345*ART*0 (sic).

                                                                                                                                            I will abstain from speculating about how to handle edge cases, like, a sentence that contains the names of ten robots…

                                                                                                                                            1. 1

                                                                                                                                              You could have it only format whole words. Then users can invent their own ad-hoc ._escaping_, or prepend with any other character to disable formatting. Sure, they then can’t format within words (I almost never do), but it avoids the inadvertent formatting problem.

                                                                                                                                              Or have [formatting _not apply_ within square brackets] or some other infrequent character sequence.
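
A quick sketch of the whole-words-only rule (Python; the regex is only an illustration):

    import re

    # Only treat _..._ as emphasis when the underscores sit at word boundaries,
    # so snake_case_names and similar are left alone.
    WHOLE_WORD_EM = re.compile(r"(?<!\w)_([^_]+?)_(?!\w)")

    def emphasize(text):
        return WHOLE_WORD_EM.sub(r"<em>\1</em>", text)

    print(emphasize("compare file_name_one and file_name_two"))   # unchanged
    print(emphasize("this is _important_ though"))                # this is <em>important</em> though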

                                                                                                                                              I really like this whole concept, by the way.

                                                                                                                                              1. 2

                                                                                                                                                Or have [formatting not apply within square brackets] or some other infrequent character sequence.

Sadly, that would go against markdown (or at least CommonMark, to be exact), which I think is worth preserving, especially since it has proliferated to become the default limited markup for websites. And compared to alternatives like BBCode or the markup wikis use, I find it a lot more comfortable.

                                                                                                                                            2. 3

                                                                                                                                              Yeah, that’s generally how Emacs handles styling (‘font-locking’) for Org-mode markup.

                                                                                                                                              1. 2

If you have a look at the Ulysses app on macOS, it does this. It’s a writing app that lets you write in Markdown. For headings, for instance, it has a gutter on the left, like a code editor, which shows the heading level while still retaining the Markdown syntax in the view, styling it based on the heading level. It also extends certain items, like images, with inline controls, etc.

You can see an example on their features page: https://ulysses.app/features/. It’s a paid product, though. I bought it years ago; then it went to a subscription, and the version I bought was left to rot, constantly crashing and getting worse on subsequent versions of macOS.