1. 5

    This is really cool! I’ve watched the video linked at the bottom and find the reversing-process itself to be very interesting.

    Some reversers go to the effort of trying to generate byte-by-byte-matching binaries, but that doesn’t seem to be the goal here. I wonder if it would make sense to reproduce an exactly matching binary first and then apply the bugfixes as a set of patches. Granted, this makes the general process harder, but it would provide more insight from the outside into which bugs were originally present in the engine, and it would make it possible for people to do “speed runs” on confirmed vanilla engines, just to give one idea.

    Let’s hope Take-Two appreciates the fan-driven effort instead of trying to shut it down. I don’t think there’s anything in the engine one would still consider a trade secret or bleeding-edge development.

    Reverse-engineering is the only way to keep old games alive, because you can’t compile or execute “intellectual property” on your computer once the old binaries stop working. Thus, in my opinion, IP shouldn’t be valued so highly in such obvious cases, comparable to how I think patents shouldn’t be retainable by those who don’t make use of them.

    1. 5

      Game companies sometimes don’t want to keep old games alive, because if people are playing the old games they aren’t playing/buying the new ones, or they lose the opportunity to sell them again as cheap ports to new platforms (see Super Mario 3D All-Stars). I really hope that logic will not apply here!

      1. 3

        Since you need to buy the assets anyway, I absolutely can’t see how that logic may apply, ever. Every alternative platform engine is a net gain from this perspective, since it gives motivation to buy the game for its assets to people who would never buy it otherwise.

        1. 3

          Totally agree. Now please explain that to Nintendo, Take-Two, Activision Blizzard and friends, with their mountains of dead fan ports in the backyard ;_;

          In the past decade even major strides of the modding community were killed off, even though modding is an “obvious” net benefit to a game’s value as well…

          1. 3

            Someone buying the PC port for $10 (or, more likely, pirating it and throwing away the DRM’d executable) to play is probably unlikely to spend $30 or more on an official port to a next-generation system. id/Bethesda/Microsoft isn’t likely to see me buy their new port of Doom 1 or 2, because I’ve spent so much time with free source ports and the asset files I bought for $3 a decade ago.

            1. 2

              Let’s take Mario 64 for instance. You already own it on N64, but your N64 has been sitting unplugged on a shelf, since it won’t even work on your new TV and your controller is broken. The N64 decompilation project comes along; you paid for the assets 25 years ago, so you can happily play the game on any platform you want. Why would you pay Nintendo again for the emulated Switch version? You already own the game, and you can now play it comfortably at 4K. Or you might be less inclined to buy a newly released game or console, since you are already busy playing these old games, with their new mods and all.

              In a sense, video game sales compete not only against other new games but also against all past games, unless those past games are unavailable due to system obsolescence.

            2. 2

              It’s a bit more complicated than that.

              A successful game title is valuable intellectual property. It would be irresponsible towards the IP’s once and future owners to let part of it “get away” and maybe be used in a way that’s harmful to the parent company. Obviously it’s not a big deal for Rockstar if someone makes a lewd version of GTA (but see the Hot Coffee mod!), but for companies like Nintendo it’s unthinkable.

              1. 1

                I think that’s why GP wrote

                IP shouldn’t be valued so highly

                because, yeah, perhaps people shouldn’t have to care what Nintendo thinks after all.

                1. 1

                  I was simply replying to the statement that “companies don’t free old games because they’d sell less new ones”.

                  I’m all for comprehensive IP reform personally, and hopefully efforts like this will work towards that.

          1. 15

            There’s no such thing as a free lunch!

            Anyway, what’s the purpose of Cloudflare anyway? Rent a server in a good datacenter and pay for a DDoS-plan if you’re so inclined. Too many websites use Cloudflare and give it too much power over what content can be seen on the internet. Using Tor? Blocked. Coming from an IP we don’t like? Blocked. Javascript disabled? Sorry, but you really need to fill out this Captcha.

            On top of that, it’s one giant MITM, and I am seriously shocked this hasn’t been discussed much more intensely. It would be trivial (if it hasn’t happened already, or wasn’t the whole purpose of this shebang) for a Five Eyes agency to wiretap it.

            The NSA et al. don’t like that more and more traffic is being encrypted. It would be a great tactic for them to spread the idea that Cloudflare is almost essential, or at least “good to have”, for every pet project. “Everybody loves free DDoS protection, and Google has it too!”

            1. 19

              Anyway, what’s the purpose of Cloudflare anyway?

              The purpose is that they’re a CDN

              Rent a server in a good datacenter and pay for a DDoS-plan if you’re so inclined.

              This doesn’t replicate a CDN

              On top of that, it’s one giant MITM, and I am seriously shocked this hasn’t been discussed much more intensely. It would be trivial (if it hasn’t happened already, or wasn’t the whole purpose of this shebang) for a Five Eyes agency to wiretap it.

              I don’t know about you, but the threat model for my personal website (or indeed a professional website) does not include defending against the intelligence services of my own government (“Five Eyes”). That is a nihilistic security scenario and not one I can really take seriously.

              For my money, I think the author of TFA has (wildly) unrealistic expectations of a free service. I’m only sorry that Cloudflare have to put up with free tier customers loudly complaining that they had a problem and needed to make at least a notional contribution in order to get it resolved.

              1. 9

                Sure, it doesn’t have to fit your threat model but by using Cloudflare you’re actively enabling the centralization of the web.

                1. 10

                  In my defense I must say that I am merely passively enabling The Centralisation of The Web, at most, as I have formed no opinion of it and am taking no special action either to accelerate it or reverse it, whatever it is.

                  1. 3

                    What’s a good, existing, decentralized solution to DDoS protection?

                    1. 1

                      Not necessarily good, but very much existing and decentralized, is IPFS. It comprises quite a bit more of the stack than your standard CDN; nevertheless, it has many of the same benefits, at least as far as I understand it. There’s even a sort of IPFS dashboard (it’s FOSS!) that abstracts over most of the lower-level steps in the process.

                      If you are at all dismayed that the current answer to your question is “nothing”, then IPFS is definitely one project to keep an eye on.

                      1. 1

                        Ironically, one of the first results when googling about how to set up IPFS is hosted on… Cloudflare:

                        https://developers.cloudflare.com/distributed-web/ipfs-gateway

                2. 18

                  Cloudflare’s S1 filing explains how it makes money from free users. Traffic from free users gives Cloudflare scale needed to negotiate better peering deals, and more cached sites save ISPs more money (ISPs prefer to get these free sites from a local Cloudflare pop, instead of across the world from aws-us-east-1).

                  1. 7

                    I’m digging for the blog post that references this, but Cloudflare in a past RCA has said that their free tier is, essentially, the canary for their deployments: changes land there first because it is better to break someone who isn’t paying for your service than someone who is.

                    (FWIW, I don’t think this is a bad thing; I’m more than happy to let some of my sites be someone else’s guinea pig in exchange for the value Cloudflare adds.)

                    E: Found it!

                    https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/

                    If the DOG test passes successfully code goes to PIG (as in “Guinea Pig”). This is a Cloudflare PoP where a small subset of customer traffic from non-paying customers passes through the new code.

                    1. 4

                      Yes, free users sometimes get releases earlier. However, the PIG set is not all free customers, but only a small fraction. In this case “non-paying” meant “owes money”.

                  2. 3

                    Have to agree. Besides, the loading page they put in front of websites is really annoying, and I wouldn’t use it for the sake of UX. Each time I hit one, I just bounce instead of waiting 5 seconds.

                  1. 2

                    That’s … not idiomatic C code there. I would encode his TIFF example as:

                    #include <stddef.h>
                    #include <stdint.h>
                    
                    typedef enum
                    {
                      TIFFEntryValueTagSmall,
                      TIFFEntryValueTagBlock,
                    } TIFFType__e;
                    
                    typedef struct
                    {
                      TIFFType__e type;
                      uint32_t    value;
                    } TIFFEntryValueTagSmall__s;
                    
                    typedef struct
                    {
                      TIFFType__e type;
                      size_t      size;
                      void const *data;
                    } TIFFEntryValueTagBlock__s;
                    
                    typedef union
                    {
                      TIFFType__e               type; /* tag is the first member of every variant */
                      TIFFEntryValueTagSmall__s small;
                      TIFFEntryValueTagBlock__s block;
                    } TIFFEntry__u;
                    
                    inline static TIFFEntry__u TIFFEntryValueSmall(uint32_t value)
                    {
                      return (TIFFEntry__u) {
                        .small = { .type = TIFFEntryValueTagSmall , .value = value }
                      };
                    }
                    
                    inline static TIFFEntry__u TIFFEntryValueBlock(size_t size, void const *ptr)
                    {
                      return (TIFFEntry__u) {
                        .block = { .type = TIFFEntryValueTagBlock , .size = size , .data = ptr }
                      };
                    }
                    

                    That way, a value tagged TIFFEntryValueTagSmall does not have access to the size or data fields of TIFFEntryValueTagBlock__s. And it does not involve adding new syntax with C macros.
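                    To show how the tag end of this is consumed, here is a condensed, self-contained sketch of dispatching on the shared type member (print_entry is a made-up helper for illustration, not from the article):

                    ```c
                    #include <stddef.h>
                    #include <stdint.h>
                    #include <stdio.h>

                    /* condensed re-statement of the types above, to stay self-contained */
                    typedef enum { TIFFEntryValueTagSmall, TIFFEntryValueTagBlock } TIFFType__e;
                    typedef struct { TIFFType__e type; uint32_t value; } TIFFEntryValueTagSmall__s;
                    typedef struct { TIFFType__e type; size_t size; void const *data; } TIFFEntryValueTagBlock__s;
                    typedef union {
                      TIFFType__e               type;  /* shared first member acts as the tag */
                      TIFFEntryValueTagSmall__s small;
                      TIFFEntryValueTagBlock__s block;
                    } TIFFEntry__u;

                    /* hypothetical consumer: dispatch on the tag */
                    static void print_entry(TIFFEntry__u e)
                    {
                      switch (e.type) {
                      case TIFFEntryValueTagSmall:
                        printf("small: %u\n", (unsigned)e.small.value);
                        break;
                      case TIFFEntryValueTagBlock:
                        printf("block: %zu bytes\n", e.block.size);
                        break;
                      }
                    }

                    int main(void)
                    {
                      TIFFEntry__u e = {
                        .small = { .type = TIFFEntryValueTagSmall, .value = 42 }
                      };
                      print_entry(e); /* prints "small: 42" */
                      return 0;
                    }
                    ```

                    Reading the tag through the union’s first member is well-defined here because every variant starts with the same TIFFType__e field.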

                    1. 1

                      That’s … not idiomatic C code there

                      That’s how I would write it. (Assuming you’re talking about the original code with the anonymous union; not the macro soup, which is admittedly impressive.) Your code takes advantage of aliasing and, though not technically undefined behaviour (a struct is allowed to alias its first member), doesn’t sit well and is much easier to mess up.

                      1. 1

                        I totally agree with that. As I said in my other reply, it’s also just superfluous to define so much boilerplate for what effectively amounts to controlled initialization anyway.

                        1. 1

                          Perhaps I’ve done too much Xlib programming and dealing with XEvents (which is done in the manner I presented), but I prefer that approach, as it makes clear which TIFF objects have which fields (add about half a dozen more TIFF types and see how clear the original remains), and allows one to write functions that take particular TIFF types (if that makes any sense). It’s not always about initialization, but usage as well.

                          1. 3

                            it makes clear which TIFF objects have which fields (add about half a dozen more TIFF types and see how clear the original remains), and allows one to write functions that take particular TIFF types

                            It depends highly on the specific situation and how large the types are, but I frequently define intermediate structure types. I just don’t like aliasing the first member. Something like this:

                            #include <stddef.h>
                            #include <stdint.h>
                            
                            typedef enum {
                            	TiffEntry_Small,
                            	TiffEntry_Block,
                            } TiffEntryType;
                            
                            typedef struct {
                            	const void *ptr;
                            	size_t len;
                            } TiffEntryBlock;
                            
                            typedef struct {
                            	TiffEntryType type;
                            	union {
                            		uint32_t small;
                            		TiffEntryBlock block;
                            	};
                            } TiffEntry;
                            
                            TiffEntry tiff_entry_make_small(uint32_t value) {
                            	return (TiffEntry){.type=TiffEntry_Small, .small=value};
                            }
                            
                            TiffEntry tiff_entry_make_block(void *ptr, size_t len) {
                            	return (TiffEntry){.type=TiffEntry_Block, .block={.ptr=ptr, .len=len}};
                            }
                            

                            (I probably wouldn’t define a whole new structure type for something with only 2 members; but that example is still illustrative, and I would almost certainly do it for one that had ≥3 or 4.)


                            Ultimately, the problem I have with structures like TIFFEntryValueTagSmall__s is that it’s possible to construct a TIFFEntryValueTagSmall__s which is not tagged with TIFFEntryValueTagSmall. That makes the whole structure very brittle. Whereas something like my TiffEntryBlock is always correct on its own, and can trivially be built up into a TiffEntry when the generic structure is needed.
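                            A minimal, self-contained illustration of that failure mode (types condensed from the sibling comment; the point is only that the mis-tagged value compiles and runs without complaint):

                            ```c
                            #include <assert.h>
                            #include <stdint.h>

                            typedef enum { TIFFEntryValueTagSmall, TIFFEntryValueTagBlock } TIFFType__e;
                            typedef struct { TIFFType__e type; uint32_t value; } TIFFEntryValueTagSmall__s;

                            int main(void)
                            {
                              /* nothing stops the "small" variant from carrying the wrong tag */
                              TIFFEntryValueTagSmall__s bad = {
                                .type  = TIFFEntryValueTagBlock, /* mis-tagged, yet perfectly legal C */
                                .value = 42,
                              };
                              assert(bad.type == TIFFEntryValueTagBlock);
                              return 0;
                            }
                            ```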

                      2. 1

                        Talking about idiomatic code (of which there is no one true definition): I personally don’t like your __-suffixes indicating the kind of type while typedeffing. We can surely have a debate on that, but I think you should only typedef when dealing with an opaque type (and I mean opaque as in FILE or something; the type field in tiff_entry below is still useful for the normal “consumer”, and thus the type is not opaque). Given that you otherwise prefix everything with “struct”, “enum” or “union” anyway (by language norm), the typedefs are solving a problem that you introduced in the first place. The direct struct return is a bit iffy as well, and I’d rather define a function that is passed a pointer and a value to fill.

                        In total, I’d go with the following approach. There’s no need to gobble up your namespace with so many types, especially because it doesn’t enforce anything anyway. The less code you write the better. Here’s the approach I would take:

                        #include <stddef.h>
                        #include <stdint.h>
                        
                        enum tiff_entry_type {
                           TIFF_ENTRY_SMALL,
                           TIFF_ENTRY_BLOCK,
                        };
                        
                        struct tiff_entry {
                           enum tiff_entry_type type;
                           union {
                              uint32_t small;
                              struct {
                                 const void *ptr;
                                 size_t len;
                              } block;
                           } data;
                        };
                        
                        void
                        tiff_entry_set_small(struct tiff_entry *e, uint32_t value)
                        {
                           e->type = TIFF_ENTRY_SMALL;
                           e->data.small = value;
                        }
                        
                        void
                        tiff_entry_set_block(struct tiff_entry *e, void *ptr, size_t len)
                        {
                           e->type = TIFF_ENTRY_BLOCK;
                           e->data.block.ptr = ptr;
                           e->data.block.len = len;
                        }
                        

                        I’ll let anybody be the judge on what is more readable and more maintainable.

                        1. 1

                          Fair point about the suffixes and when to typedef. But your code is about the same as the original code, about which the author complained that a TIFF_ENTRY_SMALL has access to the TIFF_ENTRY_BLOCK fields. At least with the code I presented, you can’t mistakenly set small on a TIFF_ENTRY_BLOCK. I also followed the structure return of the original code. I wouldn’t necessarily reject a directly returned structure, but for me it depends upon the size of the structure and the context it’s used in.

                        2. 1

                          Considering the replies you got, I don’t think there is such a thing as idiomatic C. The language doesn’t specify much, and I doubt a style is included. It seems to be what platform and project you’re used to. I’m sure if you asked a Win32 weenie you’d get plenty of struct tagTIFFDATA { LPVOID lpvData; ...}.

                        1. 4

                          Typewriters are incredibly complex and precise pieces of machinery. At their peak in the decades around World War II, we built them so well that, today, we don’t need to build any typewriters anymore.

                          Cool, now it’s time to find all the many typewriters to help me type in my native script. Oh wait a minute, they don’t really exist. This is also a great solution for CJK languages which have multiple scripts and large ideographic orthographies.

                          A heavier and well-designed object feels different. You don’t have it always with you just in case. You don’t throw it in your bag without thinking about it. It is not there to relieve you from your boredom. Instead, moving the object is a commitment. A conscious act that you need it. You feel it in your hands, you feel the weight. You are telling the object: « I need you. You have a purpose. »

                          Right, so I should tell my partner, who has a lot less upper body strength than I do, that she needs to carry a metal weight in her bag to help her feel connection with her writing device, and also give her back pain? Come on. Portability is a huge leveller. It helps folks ride around on bicycles or walk instead of driving with their goods. It helps women, who have less upper body strength, carry things around. It lets kids, the elderly, and anyone who has issues hauling things be enabled to use the device. This feels like an anti-accessibility measure to me.

                          Instead of being mass-produced in China, ForeverComputers could be built locally, from open source blueprints.

                          Nice we got some xenophobia here as well.

                          Geeks and programmers know the benefit of keyboard oriented workflows. They are efficient but hard to learn.

                          With the way my RSI is going, I’m really hoping we as a society can move away from keyboard oriented workflows, but okay. I’m glad our vision of the future only has people with full range of motion with their 10 digits as writers.


                          While I like some of the ideas here, I really want to question these choices that the author has made. Who actually wants to use these devices? Certainly not my parents, my partner, nor I. These are things that a certain subset of the software community values, but far from universal. There’s also a lot of implicit eurocentrism in the typewriter. Modern computers have dramatically increased the accessibility of reading and writing to folks with poor vision or dexterity, and we don’t remember the typewriters that did break or jam frequently. Let’s not throw away accessibility due to some nostalgia that a programmer has.

                          1. 5

                            Instead of being mass-produced in China, ForeverComputers could be built locally, from open source blueprints.

                            Nice we got some xenophobia here as well.

                            What’s wrong with localized production instead of long-distance mass production in typical mass-producing nations like China? His remark neither meant China specifically (he rather used it as a device), nor did it address the Chinese people, but rather the nature of China’s economy. Or are you going to argue that China does not primarily focus on mass production?

                            Apart from that, I agree with your statements.

                            1. 2

                              Because it’s naive if the point is about mass production and shipping. If you’re trying to argue that your device is made externally, then I’m pretty sure the entire thing is not assembled in China. The chip may have been fabricated in Taiwan (through TSMC), other electronic parts in China, with small parts from, say, Indonesia. Modern supply chains are complex, and assuming something comes solely from China feels disingenuous.

                              It would have been simpler to say “Instead of being assembled and shipped over large distances, ForeverComputers could be built locally”

                              1. 3

                                It may very well be naive, yes. But I still don’t see how it’s “Xenophobic” to say what they’ve said.

                                1. 1

                                  Sure, I can go either way on it. I wasn’t inclined to give the piece the benefit of the doubt when the rest of it seemed so out of touch, but I can see it being either naïveté/figure of speech or mild xenophobia.

                          1. 18

                            Just use federated systems like Matrix or Tox. Signal is just yet another silo and not a long-term solution amid increasing government censorship. The same applies to Threema, Telegram and others.

                            1. 13

                              Now I’m feeling like a broken record, but…

                              Domain-name-based federation is a half-assed solution to data portability. It gives special privileges to people who can run always-on nodes, which not everyone can or should be doing. It’s also tied to the domain name system, which is neither practical nor ideal.

                              Either do real P2P, or don’t bother pretending.

                              1. 10

                                I really want to disagree with you but I have come to think the same way over the last few months, having myself run a matrix home server and an XMPP server.

                                • If my server gets taken down, people have to regroup and find a new one somehow.
                                • If Signal goes down, people have to regroup and find a new service somehow.

                                There’s not much difference here to the average user. If they can’t talk they can’t talk, regardless of everyone else.

                                Sure, the first option is better because the rest of network stays up, but it’s not enough of an advantage compared to the benefits of a centralised system.

                                If anything, the ease of moving from WhatsApp to Signal highlighted just how easy it can be to go from silo to silo. It doesn’t even feel like you have an ‘account’ in the traditional sense.

                                There are lots of big problems to solve with P2P, most of them to do with mobile and multiple devices, but until someone gets there I’m just glad that people are looking at Signal over WhatsApp.

                                1. 4

                                  Who says you need to use matrix.org? There are many, many options…

                                  Far more than just the 1 single option you get with moxiechat.

                                2. 3

                                  I tend to believe that federated networks, while obviously being harder to block than centralized ones, are also no panacea against government censorship. As a censor, you now have to block not a single entity but multiple, which is also doable. And the nodes in a federated network are inflexible, as they have a unique name that identifies them. That is nice for users, but it also helps the censor track them. As soon as a node is on the censor’s list, its only option is to reappear under a different name (which is bad for users).

                                  Not sure if this applies to all federated networks, but probably to most. If you have counterexamples, then please share and explain how they avoid these, IMHO inherent, properties of federated networks.

                                  1. 1

                                    How do federated systems approach problems that require hardware solutions (e.g. Signal’s use of SGX)? Is there a way to guarantee that whatever server is running for a particular federated node is using the correct hardware?

                                    1. 3

                                      That’s exactly what SGX does - it guarantees that the in-enclave code matches the recorded signature (or that Intel has been compromised). Every federated node would need an SGX-compatible Intel CPU, but there are no other issues.

                                      In the case of Signal, the server sends a blob signed by Intel (very difficult to forge) which confirms:

                                      • The hash of the server code
                                      • The hash of Signal’s public key
                                      • The version of the Intel CPU / enclave
                                      • Arbitrary data sent by the Signal server

                                      One approach from there would be: the “arbitrary data” bit contains a public key, which your Signal client can use to encrypt messages to the server. The corresponding private key never leaves the enclave (and you can verify that by comparing the open-source implementation with the hash of the server code).
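                                      To make the client-side check concrete, here is a deliberately simplified C sketch. The struct layout, field names, and the elided quote-signature verification are all my own simplifications for illustration - this is not the real SGX quote format or Signal’s code:

                                      ```c
                                      #include <assert.h>
                                      #include <string.h>

                                      /* hypothetical, heavily simplified attestation report */
                                      struct quote {
                                        unsigned char mrenclave[32];   /* measurement: hash of the enclave code */
                                        unsigned char report_data[64]; /* arbitrary data chosen by the server,
                                                                          e.g. an ephemeral public key */
                                      };

                                      /* client-side policy: only trust the server if the reported measurement
                                         matches the hash of the open-source build we reproduced ourselves.
                                         (In reality you would first verify Intel's signature over the quote.) */
                                      static int measurement_ok(const struct quote *q,
                                                                const unsigned char expected[32])
                                      {
                                        return memcmp(q->mrenclave, expected, 32) == 0;
                                      }

                                      int main(void)
                                      {
                                        unsigned char expected[32] = { 0xab };      /* stand-in for a real hash */
                                        struct quote q = { .mrenclave = { 0xab } }; /* matching measurement */
                                        assert(measurement_ok(&q, expected));

                                        q.mrenclave[0] = 0xcd;                      /* "tampered" server code */
                                        assert(!measurement_ok(&q, expected));
                                        return 0;
                                      }
                                      ```

                                      The interesting property is that the expected measurement can be computed by anyone from the published source, so trusting the server reduces to trusting Intel’s signing key plus a reproducible build.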

                                  1. 1

                                    There is nomenclature for this; you just have to apply a “higher-level” concept, namely the material conditional. The expression ¬(a→b) is equivalent to a∧¬b (a not implying b is equivalent to a being true and b not being true). In your case, we can translate your expression as follows:

                                    a∧(b∨c) ⇔ a∧¬(¬b∧¬c) ⇔ ¬(a→(¬b∧¬c))
                                    

                                    For that to be true, the expression a→(¬b∧¬c) must be false, which can only happen if a is true and (¬b∧¬c) is false (i.e. b or c is true) (given the expression (t→f) is the only true-false-combination in the material conditional which yields false).

                                    Your second expression is even simpler (counterintuitively), using the fact that a→b is equivalent to ¬a∨b:

                                      (a∧b) ∨ {c ∧ [(a∧d)∨(a∧e)] }
                                    ⇔ (a∧b) ∨ {c ∧ [a ∧ (d∨e) ] }
                                    ⇔ (a∧b) ∨ {c ∧ a ∧ (d∨e) }
                                    ⇔ ¬(¬a∨¬b) ∨ { c ∧ a ∧ (d∨e) }
                                    ⇔ (¬a∨¬b) → (c ∧ a ∧ (d∨e))
                                    

                                    For that to be true, a being true is a necessary condition, otherwise the left side would always be true and the right side would always be false, and the expression (t→f) is always false, yielding your result formally. If you don’t like going a level up, you can also write down a truth table for all involved parameters. It’s a valid form of proof.
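                                    That closing remark about truth tables is easy to mechanize, too. As a sanity check, here is a small C program (mine, not from the thread) that brute-forces every assignment of the five variables and asserts both equivalences:

                                    ```c
                                    #include <assert.h>
                                    #include <stdbool.h>

                                    /* material conditional: a -> b is, by definition, !a || b */
                                    static bool imp(bool a, bool b) { return !a || b; }

                                    int main(void)
                                    {
                                      /* enumerate every assignment of a, b, c, d, e */
                                      for (int bits = 0; bits < 32; bits++) {
                                        bool a = bits & 1, b = bits & 2, c = bits & 4,
                                             d = bits & 8, e = bits & 16;

                                        /* first expression: a ∧ (b ∨ c)  <=>  ¬(a → (¬b ∧ ¬c)) */
                                        assert((a && (b || c)) == !imp(a, !b && !c));

                                        /* second expression:
                                           (a ∧ b) ∨ (c ∧ ((a ∧ d) ∨ (a ∧ e)))
                                             <=>  (¬a ∨ ¬b) → (c ∧ a ∧ (d ∨ e)) */
                                        assert(((a && b) || (c && ((a && d) || (a && e))))
                                               == imp(!a || !b, c && a && (d || e)));
                                      }
                                      return 0;
                                    }
                                    ```

                                    Thirty-two cases is nothing for a machine, and the asserts passing is exactly the truth-table proof written out by loop instead of by hand.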

                                    I hope this helped. :)

                                    1. 22

                                      This and many many other events in the last few months have shown everybody that we must get out of the silos (Google, WhatsApp, Facebook, Twitter, Reddit, Amazon, etc.). I’m glad that I’m using a Google-free Android (LineageOS) with F-Droid, and even though it’s sometimes more work, freedom is never free, and there are many other great federated services around. You should give it a try.

                                      If I may give advice to those promoting alternative silos (Signal, Threema, Telegram, etc.): It won’t take long until legislators, companies, etc. double down on them as well. The only way out is federation and you should definitely give Matrix a try.

                                      1. 1

                                        I can install any android app I want while also making use of the Play Store.

                                        For now there is no major functional difference for most users.

                                        1. 10

                                          Being able to use the Play Store is a functional difference.

                                      1. 3

                                        Just try compiling a program on a Raspberry Pi that uses Autohell. The feature checks take much more time than the compilation itself, while being completely unnecessary. It’s true that it’s often the author’s fault, because it’s such a mess that most people just end up copying one and the same 10k-LOC autohell macro. Even worse if you suddenly have multiple autohell runs. Worse still, autohell often gets in my way when its feature-test macros have regressions (which happen quite often when I upgrade gcc or something else) that are thrown as errors, preventing me from continuing the compilation even though I know it’s just autohell’s stupidity at play.

                                        I don’t want to know how much time and energy has been and is being wasted on this superfluous mess, but it could’ve been spent on much more pressing issues in the free software ecosystem.

                                        For portability, we at suckless usually have a config.mk that is included in the Makefile, and it contains all the portability-related aspects (library locations, man paths, etc.). I have yet to see a distribution where “porting” a suckless program involves more than changing a few of those lines, if it doesn’t work out of the box; the Makefile always remains untouched. Admittedly, there are more complex programs around, but even there you can most likely work it out with a good Makefile system (possibly using a configuration include), and an impossibility probably indicates a fault in the program’s design.

                                        For what it’s worth, I can’t wait for the autohell to disappear. It had its purpose in the ’90s and early 2000s, but now it’s just a waste of time and space, and other much simpler and more streamlined build systems easily replace it.
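                                        For readers who haven’t seen the pattern, here is a minimal sketch of the config.mk split. Paths, flags and file names are illustrative, not taken from any particular suckless project:

                                        ```
                                        # config.mk - the only file a porter should need to touch
                                        PREFIX    = /usr/local
                                        MANPREFIX = $(PREFIX)/share/man

                                        X11INC = /usr/X11R6/include
                                        X11LIB = /usr/X11R6/lib

                                        CC      = cc
                                        CFLAGS  = -std=c99 -Os -I$(X11INC)
                                        LDFLAGS = -L$(X11LIB) -lX11

                                        # Makefile - stays untouched across platforms
                                        include config.mk

                                        prog: prog.o
                                        	$(CC) -o $@ prog.o $(LDFLAGS)

                                        clean:
                                        	rm -f prog prog.o
                                        ```

                                        Porting then means editing a handful of variables in config.mk (say, pointing X11INC and X11LIB somewhere else), while the Makefile itself stays identical.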

                                        1. 3

                                          Can anyone suggest an xscreensaver alternative that doesn’t pull in a bunch of dependencies?

                                          resolving dependencies...
                                          looking for conflicting packages...
                                          
                                          Packages (21) gdk-pixbuf-xlib-2.40.2-1  glu-9.0.1-2  libglade-2.6.4-7  perl-clone-0.45-2  perl-encode-locale-1.05-7  perl-file-listing-6.14-1  perl-html-parser-3.75-1
                                                        perl-html-tagset-3.20-10  perl-http-cookies-6.10-1  perl-http-daemon-6.06-2  perl-http-date-6.05-3  perl-http-message-6.27-1  perl-http-negotiate-6.01-8
                                                        perl-io-html-1.004-1  perl-libwww-6.52-1  perl-lwp-mediatypes-6.02-8  perl-net-http-6.20-1  perl-try-tiny-0.30-5  perl-www-robotrules-6.02-8
                                                        xorg-appres-1.0.5-2  xscreensaver-5.44-3
                                          

                                          I mean, is this reasonable for everyone?

                                          1. 10

                                            I use i3lock. Its direct dependencies look reasonable, although I don’t know what they recursively expand to.

                                            With that said, I don’t know whether it is “secure” or not because my threat model doesn’t really care if it is or not. I only use it to prevent cats and children from messing around on the keyboard. And for that, it works well.

                                            1. 4

                                              Try slock, which has no dependencies except X11 itself.

                                              1. 2

                                                Build from source and disable the savers/hacks that require the dependencies you aren’t happy about.

                                                1. 1

                                                  I don’t want any screensaver, just want my screen to lock reliably. I guess I’ll try that.

                                                    1. 2

                                                      It’s a great compromise when using X11, but the whole concept of screen savers on X11 is just so fragile. Actually suspending the session even if the screensaver crashes would be much cleaner (which is how every other platform, and also Wayland, handles it).

                                                      What I’m even more surprised about is that you said this compromise is possible with 25yo tech - why did no distro actually do any of this before?

                                                    2. 0

                                                      What about physlock?

                                                      1. 5

                                                        No idea about physlock or any other alternative; I am asking because this sentence kind of makes me think:

                                                        If you are not running XScreenSaver on Linux, then it is safe to assume that your screen does not lock.

                                                        This person’s attitude kind of bothers me, though. If you run ./configure on xscreensaver, you read stuff like:

                                                        configure: error: Your system doesn't have "bc", which has been a standard
                                                                          part of Unix since the 1970s.  Come back when your vendor
                                                                          has grown a clue.
                                                        

                                                        hm. Ok? I guess I don’t have to like it, I just don’t see the need for that.

                                                        1. 19

                                                          jwz ragequit the software industry some 20 years ago and has been trolling the industry ever since. Just some context. He’s pretty funny but can be a bit of an ass at times 🤷

                                                          1. 18

                                                            He’s also pretty reliably 100% correct about software. This may or may not correlate with the ragequitting.

                                                            1. 3

                                                              While ragequitting may not correlate with being correct about software, being correct about software is absolutely no excuse for being an ass.

                                                              1. 7

                                                                It’s not his job to put on a customer support demeanor while he says what he wants.

                                                                He gets to do as he likes. There are worse crimes than being an ass, such as, perhaps, being an ass to undeserving people. The configure script above is being an ass at the right people, even if it does editorialize (again, not a problem or a crime; really, software could use some attitude!)

                                                                1. 4

                                                                  Lots of people in our industry seem to think that being a good developer entitles you to behave like a 5-year-old. That’s sad.

                                                                  1. 4

                                                                    Especially in creative fields, you may choose to portray yourself any way you choose. You don’t owe anybody a pleasant attitude, unless of course you want to be pleasant to someone or everybody.

                                                                    For some people, being pleasant takes a lot of work. I’m not paying those people, let alone to be pleasant, so why do I demand a specific attitude?

                                                                    1. 2

                                                                      Being pleasant may take work, but being an asshole requires some effort too. Unless you are one to begin with and then it comes naturally of course. :D

                                                                  2. 3

                                                                    How is the bc comment being an ass at the right people? Plenty of distros don’t ship with bc by default, you can just install it. What is a “standard part of unix” anyway?

                                                                    1. 9

                                                                      bc is part of POSIX. Those distros are being POSIX-incompatible.

                                                                      1. 8

                                                                        As a developer for Unix(-like) systems, you should be able to rely on POSIX tools (sh, awk, bc etc.) being installed.

                                                                    2. 2

                                                                      It sounds like you view software as an occupation. It is not. It’s a product.

                                                                2. 2

                                                                  Physlock runs as root and locks the screen at the console level. AFAIK the problems affecting x-server screenlockers aren’t relevant to physlock.

                                                        1. 8

                                                          How does slock compare to xscreensaver?

                                                          1. 9

                                                            I dare you to find a bug in it. ;)

                                                            Keep in mind the remarks on its manual page though, i.e. that you have to disable VT-switching and the X11-kill-switch. Apart from that, when the mouse or keyboard is grabbed by another application, it will wait for them to be released. Until then though, the screen won’t turn black, so you at least know that your screen is not locked. We can’t fix that limitation in X11, but apart from that, once your screen is black, you should be good to go.

                                                            More in-depth testing is appreciated, though; we discussed more or less all aspects of slock and X11’s limitations in depth at the suckless conference in 2016.

                                                            1. 3

                                                              I wonder if you can do some sort of fuzz testing on the input with these kinds of applications; that probably would have caught the “my kids are randomly smashing my keyboard” case.

                                                              I don’t really care enough about the security of these kinds of applications to work on it (I just use slock to prevent the “opportunist passer-by” scenario), but this is probably the best way to test these kinds of things.
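                                                              As a rough illustration of the idea (a toy model, not slock’s actual code; the state machine and its rules are invented for the example), randomized keyboard-smashing against a lock-screen model looks something like this:

```python
import random
import string

# Hypothetical model of a lock screen's keyboard handler: characters
# accumulate in a buffer, Enter checks the password, Escape clears it.
class LockModel:
    def __init__(self, password):
        self.password = password
        self.buf = ""
        self.locked = True

    def key(self, ch):
        if ch == "\n":                     # Enter: try to unlock
            if self.buf == self.password:
                self.locked = False
            self.buf = ""
        elif ch == "\x1b":                 # Escape: clear input
            self.buf = ""
        else:
            self.buf += ch

# Fuzz: smash the keyboard with random input and check the invariant
# that the screen stays locked (random input should essentially never
# spell out the password).
def fuzz(model_cls, password, rounds=10000, seed=0):
    rng = random.Random(seed)
    keys = string.printable + "\x1b"
    for _ in range(rounds):
        m = model_cls(password)
        for _ in range(rng.randint(1, 50)):
            m.key(rng.choice(keys))
        assert m.locked, "unlocked by random keyboard smashing!"

fuzz(LockModel, "hunter2")
print("no invariant violations after fuzzing")
```

                                                              Against a real lock screen you’d drive the same loop through something like synthetic X events instead of a model, but the invariant being checked is the same.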

                                                          1. 2

                                                            Really nice website with lots of information, but it wouldn’t have hurt to include some pictures.

                                                            1. 3

                                                              There are pictures, they are just not inline, eg.: https://www.hpmuseum.org/3qs/803q.jpg

                                                              1. 1

                                                                The site is delightfully old-school. “Image thumbnails?! What dark magic is that!”

                                                            1. 24

                                                              I don’t agree with the author’s opinion, because he paints all bug reports with the same brush. In reality, you can have really bad bug reports that merely state the problem with zero effort on the reporter’s part, or really good ones with logs, extensive background, and maybe even a patch, where you can see that the person spent their time investigating so that the maintainer/developer saves as much time as possible on the fix (and bugs need to be fixed at some point, imho).

                                                              We do it differently at suckless: unless we see that someone is overwhelmed by the complexity of the problem, we ask them to send a patch. This person can then either spend the time writing one or ask/pay somebody else to do it. People don’t have the right to demand “customer support”, but if they do the work for you (and bug-hunting is mostly testing and a little coding), you miss a huge opportunity by ignoring them. This insistence on patches is even stricter for quality-of-life change requests, and looser if someone reports an actual bug or security flaw. Overall, the system works very well.

                                                              Even though there is no fundamental flaw in the author’s reasoning, I wouldn’t want him to face a situation where a 0-day or major security issue is found in his software. I’d be very pissed to find out about his pay-per-bug model only after finishing a bug hunt. On the other hand, maybe the author doesn’t have much experience with projects that have actual security relevance.

                                                              1. 30

                                                                From what I can gather, the type of software this user writes mostly deals with non-technical users; it’s kind of a different thing than suckless, where people usually know at least roughly what they’re doing (well, mostly anyway … I’ve seen some folk on /r/suckless…) Suckless is a rather special project as it’s explicitly aimed at expert users; if you don’t know at least the basics of C then you probably won’t have a good time using suckless tools (which, IMHO, is probably its biggest selling point, rather than the minimalism).

                                                                This APK downloading thingy he makes sits in this kind of weird space where it’s a little bit technical but not too much, and in my experience this attracts a certain type of person. Let’s call them, ehm … “wannabe power users”? That is, someone who wants to do all sorts of stuff with their computer (cool!) but also doesn’t really want to spend any time on learning anything (less cool) and crowdsources the actual legwork to various strangers on the internet (very uncool).

                                                                One of the big problems is that it’s very hard to distinguish the “interested user who wants to learn” and the “wannabe power user” from a single data point. I read a bit through his issue tracker when he posted his blog last time, there doesn’t seem to be that much activity there[1], but in one of the issues he mentioned that he just removed the CLI because he was tired of teaching people CLI fundamentals. Even if all these people were “interested user who wants to learn” (they almost certainly weren’t) it gets rather tedious quickly.


                                                                I’d expect the author will fix serious bugs and 0-days, but there’s a large class of bugs of the order “will only happen on Windows 10 patch 6316, if Firefox and Chrome are both installed, if it rains, and if Beyoncé released a new single in the last 4 months”. Fixing those tends to be where the time-consuming work is, with comparatively little benefit for the vast majority of users.

                                                                At the end of the day it’s all about expectations: I have a few projects where I work on in my spare time for my own purposes, and I’ve very clearly said “Thanks for the bug report, I’ll fix it when I feel like it” on a number of occasions (in pretty much those words, with a little bit of context). Sometimes I feel like it the next day, sometimes much later. Sometimes never. On other projects expectations are a bit higher: one of my projects is paying the bills right now.

                                                                I’ve recently come to think that we just need new terms for this; “Open Source” or “Free Software” (whichever you prefer) just doesn’t capture any of this. Most projects fall into three categories: a business that open-sources something; someone who makes something in their spare time and puts it online “because why not?”; and people who work on something in their spare time and really like being maintainers of that kind of project (rather than just working on it for their own purposes).

                                                                There’s some overlap: I tend to write READMEs and docs and worry about backwards compatibility even if I work on it on my own purposes, and the “business open sources something” can be subdivided in various categories, but those seem to be the rough outlines.


                                                                [1]: Which is normal for non-tech open source projects, by the way. The most “popular” project I made for years was a little video downloader for the Dutch public broadcast service; it has very few GitHub issues/stars, but I got quite a lot of emails with questions, bug reports, or just “thank you”s.

                                                                1. 7

                                                                  Even though there is no fundamental flaw in the author’s reasoning, I wouldn’t want him to face a situation where a 0-day or major security issue is found in his software. I’d be very pissed to find out about his pay-per-bug model only after finishing a bug hunt. On the other hand, maybe the author doesn’t have much experience with projects that have actual security relevance.

                                                                  This is speculative. The author’s commits suggest that they treat minor security issues as standard bugs and fix them promptly. There do not appear to be any outstanding CVEs for their project; in fact, there don’t appear to have been any CVEs ever filed for their project. It is premature to sneer at a project for failed security practices when they have not had an opportunity to fail, and misleading to sneer at a project which has no evidence of failed security practices.

                                                                  More importantly, you’ve confused the emotions on two sides of a multi-agent interaction. As the user, you might be pissed, but as the developer, why are you required to establish empathy with folks who you’ve already reasoned to be behaving detrimentally to you and the software being maintained? And on the other side, nobody should want to face a situation where their software has a 0-day exploit; it’s a hassle even if there are existing processes for developing and testing and deploying the fix.

                                                                  On the gripping hand, isn’t it kind of asking for security vulnerabilities when you’ve chosen C as your implementation language, rather than the memory-safe Java chosen by the author? Your approach to security is going to be different, because your implementation choices inherently lead to more security problems in supposedly-bug-free code.

                                                                  I think that your final paragraph could have been written more simply as:

                                                                  there is no fundamental flaw in the author’s reasoning

                                                                  1. 1

                                                                    It’s kind of an “inverse effort law”. The more effort the code owner has to go through to make sense of a submitted PR/bug/issue, the less likely they are to address it in a timely fashion (if at all).

                                                                  1. 4

                                                                    The only way to protest these harmful developments is to quit these websites altogether and use alternatives. If the information on reddit is so important, just read passively using teddit, but stop feeding your valuable data into a system you don’t support. Rather, help expand communities that actually respect their users with your knowledge. Someone has to start it.

                                                                    1. 5

                                                                      I recently bought a Logitech Streamcam (1080p60) with a good f/2.0 aperture and very good image and sound quality. While browsing the market, it seemed to be the only sensible choice (who needs 4K when no service transmits at that resolution?). Much more important to me than image quality is a high framerate (i.e. 60 FPS). The image quality of the Streamcam is really good, and it works natively with the basic UVC drivers in Linux, even though there still seems to exist a race condition in its firmware that will probably be fixed soon.

                                                                      Anyway, I’d pick that if anyone asked me for a recommendation. The next step basically seems to be to use a DSLR and a USB-HDMI-streamer, but this is just overkill for meetings. I’d think about it if I was a high-ranking executive in a company being in meetings all day or something.

                                                                      1. 3

                                                                        Anyway, I’d pick that if anyone asked me for a recommendation. The next step basically seems to be to use a DSLR and a USB-HDMI-streamer, but this is just overkill for meetings. I’d think about it if I was a high-ranking executive in a company being in meetings all day or something.

                                                                        Some Canon cameras have a webcam driver, and I’ve been using it on and off (with a Rebel T6) since it came out earlier this year. The trade-off is that you get great optics (I love how I look in a wide-angle lens, and a big lens makes the lighting vs. sensitivity & shutter speed trade-off less pronounced), but the weight of those optics (450g body + 385g lens) makes it less flexible to position than a normal webcam (a Logitech C920 is 162g) on an articulated arm. I have done a few calls that really benefit from a big studio-type setup where I can spend time setting it up, but the daily or weekly meetings don’t really need that.

                                                                        1. 2

                                                                          I use a 2012 vintage Canon EOS M (which I picked up off Ebay for ~$150) and a cheap 1080p USB HDMI capture card. I was drawn to the EOS M because of its great support for Magic Lantern (an Open Source camera firmware) which allows for clean HDMI output.

                                                                          1. 1

                                                                            Oh wow, that’s a very cute camera :3

                                                                            Mine sadly doesn’t have Magic Lantern yet, and no clean HDMI either.

                                                                          2. 1

                                                                            Is that by using the EOS Webcam Utility, or do the newer models come with webcam support built in?

                                                                            I’ve previously tried the webcam utility beta version with a camera when it came out (can’t remember if it was the 6D or 550D/Rebel T2i I used) and it wasn’t really usable since there was a very noticeable delay in the video input which wasn’t the case for the audio (it came directly into the laptop).

                                                                            I didn’t look into ways of delaying the audio to sync it up with the video, but I guess it could have been usable if I got that worked out.

                                                                            1. 1

                                                                              It’s the webcam utility, yeah. I didn’t notice any difference in latency between the camera and a USB mic or Bluetooth headphones.

                                                                        1. 20

                                                                          You don’t need a blockchain for a verifiable voting process. Even though I’m a big fan of mathematically provable accuracy, much simpler changes in the US-voting process would already provide a huge overall improvement.

                                                                          I am content with the voting procedures in Germany, where you don’t have huge counting facilities but break everything down into small stations where only 200-400 votes each are counted, and every citizen allowed to vote is assigned to exactly one station. The process (voting and counting) is completely public, and you can witness it as long as you don’t interfere with it. When you cast your vote, your name is checked off on the list, greatly reducing the risk of double voting. Mail-in voting is also possible, but then you are not allowed to cast a vote in person, and you must have applied for the mail-in ballot weeks before the election.

                                                                          Germany has 80 million inhabitants, but given these circumstances, you can easily validate an election with a relatively small number of poll watchers. I don’t know why the US is using counting machines, has no voter-ID and accumulates everything into huge counting facilities. If it’s not malfeasance, it’s incompetence at best.

                                                                          1. 2

                                                                            If you haven’t watched this documentary about how US elections work, I truly and strongly recommend that you do so. Such a system doesn’t need “simple changes”; it needs to stop being used at all. And it’s not about a partisan discussion over company A or B favouring party X or Y; it’s a huge global concern.

                                                                            The process (voting and counting) is completely public and you can witness it

                                                                            That’s exactly what a blockchain allows you to do, in real time and also globally. Any external observer can audit and monitor the whole process, and not just one polling station.

                                                                            Mail-in-voting is also possible

                                                                            By using blockchain tools you don’t need mail voting in the first place, basically because you can vote from wherever you want. The key difference is that you can verify inclusion of your vote for yourself. In mail voting, you simply hope that it will reach the polling station on time (if at all), that it won’t be opened, disclosed, tampered with, dismissed, etc.

                                                                            Blockchain will not be the solution for everything, but payments, notarization, contracts and governance are among the best use cases that you can find for it, as of today.
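                                                                            For what it’s worth, the “verify inclusion of your vote” property doesn’t even need a full blockchain; it usually comes down to a Merkle tree: the authority publishes a root hash, and each voter gets a short proof that their (encrypted) ballot is a leaf under that root. A minimal sketch in Python, with hypothetical ballot contents:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single published root hash."""
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    proof, level = [], leaves
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1                     # sibling sits next to us
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """What the voter runs at home: hash up the path and compare to the root."""
    node = leaf
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# hypothetical encrypted ballots; in reality these would be ciphertexts
ballots = [h(b"ballot-%d" % i) for i in range(5)]
root = merkle_root(ballots)
proof = merkle_proof(ballots, 3)
assert verify(ballots[3], proof, root)      # my vote is under the root
assert not verify(h(b"forged"), proof, root)
```

                                                                            Note that inclusion proofs address only this one narrow property; voter eligibility, ballot secrecy, and coercion resistance are separate problems, and they are where most of the criticism of blockchain voting concentrates.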

                                                                            1. 26

                                                                              That’s exactly what a blockchain allows you to do, in real time and also globally.

                                                                              No it does not. It allows people with a certain depth of understanding of a complex technology to do that. Not the general public, which is what /u/FRIGN is talking about. As somebody who has counted votes in German elections a few times, I can tell you the system works and is understandable by anyone. Blockchains, or even computers, definitely are not.

                                                                              1. 5

                                                                                Exactly. I’ve thought along similar lines (we could have mathematically secure elections, asset transfers, etc.), except that the layperson barely understands computers, let alone cryptographic keychains, private keys, the necessity to keep said private key safe and accessible, etc. All of these solutions take for granted that there are ubiquitous, secure cryptographic identities for everyone, which alone is a logistical nightmare. My parents certainly couldn’t understand it, and would certainly lose their keypair or have it stolen.

                                                                                Paper is simple, universally understood, and difficult to make fraudulent at scale.

                                                                                1. 4

                                                                                  This highlights the problems with pushing electronic voting now. Even if you have a perfectly accurate system, it can only be trusted by a limited number of specialized individuals, not by the majority of the populace. For e-voting to be viable, you have to grow a new society that can understand how such processes work. That will take a very long time, I’d say 100 years at least. That doesn’t mean it isn’t worth exploring e-voting now (on the contrary, I think research in this area is very important for future trust), but I think that pushing for adoption now is useless and even harmful to the cause.

                                                                                  1. 3

                                                                                    Right. Chase the magic out of the system.

                                                                                    I’ve daydreamed about whether there are some wins to be had by grafting cryptographic processes in on top of paper ballots (themselves, but also the whole chain of custody) in a way that actually helps chase more magic out of the system… (but taking care to avoid some pitfalls around making coercion easier)

                                                                                    Regardless of the crypto, though, at least one problem is designing a process that is resilient to the kinds of low-probability events that are likely to happen somewhere during larger elections.

                                                                                    1. 1

                                                                                      Personally, I don’t really understand this “everyone must understand the process” argument. There are plenty of elements of modern life where we need to act based on faith in the system.

                                                                                      I refuse to acknowledge that elections can’t be one of them, if an electoral process would be at least as convenient and safe as voting by mail.

                                                                                      The biggest issue by far, in my opinion, in today’s politics is that the public at large is mostly uninterested in casting their ballot, and that anything we can do to increase the number of people participating should be done without thinking twice. When presidents are being elected with the vote of one third of the population, that is a serious problem. When laws get passed by people elected by a legislative group that reached barely 20% of the population, that’s really, really bad.

                                                                                      An electronic system could even supplant a whole electoral college, in my opinion. Having every citizen able to cast their ballot in 5 minutes would lead to a truly democratic process, where no representative needs to be voted into office to squander public money.

                                                                                      It’s baffling to me that in today’s age people are clinging to these antiquated methods, and instead of trying to find viable solutions, the large majority hides behind the “vast amount” of literature speaking against using blockchain technology for voting.

                                                                                      1. 2

                                                                                        Personally, I don’t really understand this “everyone must understand the process” argument. There are plenty of elements of modern life where we need to act based on faith in the system. I refuse to acknowledge that elections can’t be one of them, if an electoral process would be at least as convenient and safe as voting by mail.

                                                                                        You are entitled to that opinion, but it is a very weak argument. You are basically saying that we should replace a working and trusted system with one that nobody can really understand just because we can and because other systems are opaque too? That makes no sense, sorry.

                                                                                        The biggest issue by far, in my opinion, in today’s politics is that the public at large is mostly uninterested in casting their ballot, and that anything we can do to increase the number of people participating should be done without thinking twice. When presidents are being elected with the vote of one third of the population, that is a serious problem. When laws get passed by people elected by a legislative group that reached barely 20% of the population, that’s really, really bad.

                                                                                        Sure, blockchains are not going to do any of that. Not one bit of it. These problems are completely orthogonal to the discussion of using blockchains for political elections. Do you think even one person from the non-voting part of the general public is going to the election because they use a blockchain? I am sorry, but you are living in a bubble if you honestly believe that.

                                                                                        It’s baffling to me that in today’s age people are clinging to these antiquated methods, and instead of trying to find viable solutions, the large majority hides behind the “vast amount” of literature speaking against using blockchain technology for voting.

                                                                                        You have not come up with one example where blockchains solve anything that you list in the paragraph above this. Just because something is old does not mean it is bad. The new and shiny normally comes with new problems. Understandability is one of them in this case, and you just gloss over it. Why you do that is unclear, but I guess you just want blockchain b/c it is cool.

                                                                                        1. 1

                                                                                          I’m not saying we should replace a working system, like, tomorrow. What I’m saying is that we should be working on finding alternatives that ensure wider participation in the electoral process, and once they are proven foolproof then maybe, yes, we can replace the good old paper ballots.

                                                                                          Also I’m not saying that the blockchain is the way to do it, like I mentioned in a different message in the thread, I don’t have any knowledge in that area. What I’m saying is that people hide behind the excuse that blockchain is not viable instead of trying to find alternatives.

                                                                                          And the way you’re misinterpreting my words through your own bias is painful and offensive.

                                                                                1. 3

                                                                                  I currently have a similar issue with my webcam (Logitech StreamCam). When I use it, my system freezes or peripherals stop working (please let me know if and how I can be of help for debugging and troubleshooting, or if somebody else here is also experiencing this problem).

                                                                                  I know from experience that it’s most likely due to a race condition in the firmware, and I’ll have to wait for a firmware update (or an update in the Linux UVC driver), but I find it fascinating that a single misbehaving peripheral can destabilize the entire system.

                                                                                  I know there are good reasons against this, but one can only dream of an ideal world where a kernel is more resilient and self-segregational (i.e. a microkernel) with strictly mandated IPC mechanisms.

                                                                                  I’d gladly trade a bit of performance for the advantages it would provide.

                                                                                  1. 4

                                                                                    I don’t think a microkernel would actually help though, right? The problem wasn’t the kernel, it was that the hardware was bad in certain situations. If you put your WiFi/Ethernet card driver in a userspace process, that’s not going to change the fact that the physical networking card is getting lots of interference from the USB and HDMI cards, and is dropping or corrupting packets like nobody’s business.

                                                                                  1. 33

                                                                                    Great story, but why do so many people think it’s a good idea to write entire articles on Twitter? It’s even worse than publishing them on Medium, and that says a lot!

                                                                                      1. 15

                                                                                        Because that’s where the readers are. (Same reason Willie Sutton robbed banks.)

                                                                                        I hate Twitter so, so much, and that’s one of the reasons. Even with the new supersize 280-char limit, it’s still such a choked, impoverished writing medium. Constraints can be good, but when they’re a choice, not when you’re forced into the constraint because it’s baked into the only platform that meets your needs.

                                                                                        1. 12

                                                                                          Writing is also pretty easy for them. Each thought can be composed piecemeal and worked into a larger thread. It’s compatible with shorter attention spans for /writing/.

                                                                                          Maintaining a blog is ceremony/effort if you’re not actively committed to it. The next lowest effort/easiest distribution is Medium, and we all know what we think of that.

                                                                                          Constraints can be good, but when they’re a choice, not when you’re forced into the constraint because it’s baked into the only platform that meets your needs.

                                                                                          Many constraints were because of forced limitations. That was the point of many of them.

                                                                                          1. 3

                                                                                            Yeah, you’re right about forced constraints. I guess it’s that any one constraint is good for some things but bad for others. Twitter has been a great boon to standup comedians and haiku poets, I’m sure.

                                                                                            WordPress.com is pretty low-effort; it has issues but not so much as Medium. If it or something like it were more popular, people could write their tweet-threads there. Unless, as you say, they’re ADD enough that they’d get blank-page fright and never write anything.

                                                                                            (I’m trying hard not to start bemoaning the demise of LiveJournal again. It coulda been a contendah…)

                                                                                            1. 12

                                                                                              Another thing that might be interesting is that people can reply to the individual atomic units of thought easily too. It’s really more like structured/permanent IRC than it is a blog.

                                                                                              And yes, from what the people who DO write mega tweet storms tell me, blank page fright is huge.

                                                                                              1. 9

                                                                                                WordPress.com is pretty low-effort; it has issues but not so much as Medium. If it or something like it were more popular, people could write their tweet-threads there. Unless, as you say, they’re ADD enough that they’d get blank-page fright and never write anything.

                                                                                                I don’t know; I suspect it’s more of a barrier-of-entry thing. Twitter is kind of ephemeral and “write and forget”, whereas writing on your personal WordPress site takes more effort, as it’s less ephemeral.

                                                                                                The same with comments on e.g. Lobsters: I usually just write them, read over them a little bit, and post. Whereas on my website I tend to take a lot longer to write more or less the same stuff. If something’s on my website, I want to make sure it’s reasonably accurate, comprehensive, and written as well as I can. Usually this entire process takes up quite a lot of time for me. For some Lobster comment or Twitter remark, it’s a bit different.

                                                                                                It’s really difficult to put my feelings on this in words; so I hope this makes sense 😅 But publishing something on my (or any) website just comes with a lot higher barrier of entry for me, and I’m probably not so special that I’m the only one.

                                                                                                @calvin mentioned “blank page fright”, which is more or less the same thing in a way, just expressed differently, I think(?)


                                                                                                At any rate, Twitter is hardly my favourite platform for these kind of things, but if the choice is between “it would never be published at all” and “it’s published on a platform I don’t like”, then the second option is clearly the better one.

                                                                                            2. 4

                                                                                              Because that’s where the readers are.

                                                                                              Then Tweet a link.

                                                                                              Might be a great story, but I’m not reading it in 20 parts on Twitter.

                                                                                              1. 4

                                                                                                And many people will not click a link.

                                                                                                1. 2

                                                                                                  Plus clearly many, many people are. Writers go where readers are, and though you may not like reading things in this way on Twitter, there are enough people who do to make a market for this sort of material.

                                                                                                2. 1

                                                                                                  The choked, impoverished writing medium is what makes it so much fun!

                                                                                                3. 10

                                                                                                  For some people this is the answer.

                                                                                                  It’s easier to just write a set of tweets. When you publish a wall of text you gotta format it, you feel like proof-reading, etc.

                                                                                                  A tweetstorm is like…. whatever, just get it out there. Hell, type it in drafts and it’ll post the tweetstorm for you.

                                                                                                  This is like Instagram stories: a way to reduce the barrier to sharing content. And some stuff is low effort, but some stuff is just high quality. It’s also, like others said, a way to share to people who are following you.

                                                                                                  1. 2

                                                                                                    you gotta format it, you feel like proof-reading, etc.

                                                                                                    I think there might be a reason why people do this.

                                                                                                    Ironically, this ‘article’ is more of a ‘wall of text’ than most blog posts, in that it’s just a collection of ‘text bricks’ stacked on top of each other, with no real structure. As a result, it’s practically unreadable.

                                                                                                    1. -1

                                                                                                      Thanks for pointing this tweet out, but I don’t buy that for a minute. If you have so much ADHD that you can’t do it any other way, you could still tweet your story and then copy-paste the sentences into a blog post. No one could be so debilitated by ADHD that they wouldn’t be able to do this basic thing.

                                                                                                      Also, a blog post is written once and read many times (ideally). It’s disrespectful to your readers to force this horrible format on them. If I were in this situation, I’d ask a friend to help me format a “tweetstorm” into a nice blog article. Even long texts wouldn’t take that much time.

                                                                                                      1. 10

                                                                                                        Uh, hey, maybe don’t claim that people with ADHD could simply do something when the evidence and statements of actual people with ADHD say they can’t. One of the key experiences of ADHD is executive dysfunction, meaning mental challenges around planning, problem-solving, organization, and time management. People with executive dysfunction (which isn’t solely experienced by people with ADHD) describe it in a number of ways that can be illuminating.

                                                                                                        Mental differences like this aren’t something you push through. Maybe sometimes you can (people with disabilities often describe experiencing fluidity in the severity of their challenges), but maybe sometimes you can’t. The experience of others demanding that they push through, or judging them for failing to push through, is one of the main challenges faced by disabled people. If you spend time listening to disability advocates, you’ll hear them talk about how they’re not disabled because something is wrong with them, they’re disabled because of limitations in the systems we all operate within, and the expectations and demands of our collective culture.

                                                                                                        So please, don’t toss out comments about how disabled people ought to function. They’re doing their best, and the expectations you’re putting out there are a core part of the challenges they face.

                                                                                                        1. 1

                                                                                                          Did you even read my comment before pasting your pasta here? Even disabled people ought to be able to ask for help, and in this case, I see no reason why someone with ADHD and executive dysfunction shouldn’t be able to ask someone for help in this regard.

                                                                                                          1. 6

                                                                                                            I did read your comment.

                                                                                                            I’m also flattered you think my post is a copypasta.

                                                                                                            Seems unlikely you’ll be convinced, but to hammer it home: saying “disabled people ought to be able” or even “disabled people ought,” is the problem. If you do not have executive dysfunction, you do not know what it’s like to live with, and should defer to people who do live with it when they talk about what is reasonably doable for them.

                                                                                                            1. 3

                                                                                                              I’m also flattered you think my post is a copypasta.

                                                                                                              Not taking sides here, but just wanna say, that is the best kind of rhetoric.

                                                                                                            2. 6

                                                                                                              Let me describe how I post on Lobsters. First, I think about what I want to post. Then, usually I don’t post it.

                                                                                                              If I do decide to post, then I commit myself to keeping a browser tab open for about half an hour while I write my post. I try to get my evidence lined up, opening additional tabs with each consideratum so that I won’t forget what I’m writing about.

                                                                                                              Paragraphs are usually written out of order. Entire sentences are written, rewritten, discarded, and written again. Phrases become semantically satiated and read wrong in my mind. I worry that I have used too many words. I worry that I haven’t used enough.

                                                                                                              I constantly feel disconnected from myself and also from my audience. I don’t understand how to relate to people, or how to ensure that my meanings are preserved. In fact, I am used to being horribly and hilariously misinterpreted.

                                                                                                              The help that I would ask from you is for you to reread the parent post and reconsider your stance. There is no universal way in which humans are supposed to interact with computers.

                                                                                                              Alternatively, take a programmer’s point of view: A module is not merely a collection of code snippets, and it is disingenuous to suggest that folks can simply collate code snippets into meaningful modules.

                                                                                                          2. 7

                                                                                                            Also, a blog post is written once and read many times (ideally). It’s disrespectful to your readers to force this horrible format on them. If I were in this situation, I’d ask a friend to help me format a “tweetstorm” into a nice blog article. Even long texts wouldn’t take that much time.

                                                                                                            But you’re not in this situation.

                                                                                                            1. 4

                                                                                                              You may not realise it, but this is what your post looks like from the outside:

                                                                                                              • You’re mistaking your personal dislike for a universal dislike.
                                                                                                              • You’re laying your personal preferences on other people as responsibilities.
                                                                                                              • You’re presuming you know what other people can or can’t do, or how they should or shouldn’t spend their energy and friend-favours.

                                                                                                              That is not how you reason your way to correct conclusions, and it is not how you win friends and influence people.

                                                                                                          3. 8

                                                                                                            No constraints, no glory!

                                                                                                            But really the real reason is that I put weeks of research and editing into my blog posts, in some case months… while I can hammer a tweetstorm out in five minutes.

                                                                                                            1. 4

                                                                                                              As much as I hate Twitter ‘articles’, I think they’re actually better than Medium articles, which is… impressive.

                                                                                                              1. 1

                                                                                                                Agreed. This would be a pretty lengthy blog post, and this format is just awful. Really good war story though.

                                                                                                              1. 14

                                                                                                                The growing bloat on the web really kills websites for me. One recent example is the new reddit design, which made me quit it altogether (among other reasons).

                                                                                                                Why does it always need to be lazyloading Ajax-crap? JS-generated transitions are always horrible and clunky. Let’s hope there will be a move towards a more sustainable and suckless direction at some point in the future.

                                                                                                                You don’t need Javascript in many many cases, and if you do, a few kB will do just fine. And you don’t especially need it for overloaded UI-orchestration past the UI-model the browser provides and is optimized for.

                                                                                                                1. 6

                                                                                                                  old.reddit.com still works fine; say about the new UI what you want, but at least they’re not forcing it upon you.

                                                                                                                  1. 3

                                                                                                                    The new Reddit redesign is such a disaster. After clicking on a post, I frequently find myself scrolling the background (the list of posts) rather than scrolling through comments in the post itself. It frequently takes many seconds for the website to respond to clicks on ridiculously powerful hardware. Scrolling through subreddits with many image posts bogs down the site completely, probably because infinite scrolling + lots of images and gifs + no technical competence is a predictable disaster. Searching in subreddits literally doesn’t work; when I search something, more often than not, the site will just say ‘No results found for “”’.

                                                                                                                    I obviously mostly use the old website (which is also a design disaster in many ways, but at least it works). I just don’t understand how a team could see the result that is the redesigned website and be happy with it.

                                                                                                                    1. 6

                                                                                                                      You bring up really good points and explained the problem well! It’s especially shocking when you browse the modern web with an older computer. I booted my old Mac mini from 2008 and was really sad to see that it was impossible to browse the web without massive lag and problems (text-only and light sites were just fine). Do we really want to waste all our advances by just keeping up with more and more useless cruft that brings essentially zero benefit to the end-user?

                                                                                                                      A good case is YouTube: They’ve stuffed their video pages with megabytes of Javascript, Canvases, AJAX-magic and whatnot, and even though it’s probably 4 orders of magnitude heavier than the video page from 10-13 years ago, it essentially does the same thing, while actually being worse at it, because it’s often unbearably sluggish and clunky. I often press the “back” button in my browser to return to the previous video, only to find out that their “history-emulation” in Javascript failed to keep up.

                                                                                                                      1. 6

                                                                                                                        I just don’t understand how a team could see the result that is the redesigned website and be happy with it.

                                                                                                                        There are millions of web programmers in the world. I doubt 95–99% of them would ever have become engineers if not for the good career prospects the current job market offers; they are not engineers at heart.

                                                                                                                        The rush and the satisfaction of doing a good piece of engineering just doesn’t resonate with these people. Working at a company with a hip factor and perks, following the js-trend-du-jour because it’s trendy rather than for its technical merits, and collecting a paycheck: this is what the majority of developers (especially web developers, due to the lower entry barrier) care about. Never do they stop for 5 minutes and think: “why are we doing this? What value does this provide to society? What are the advantages and disadvantages of replacing a legacy product with a new flashy one with a material design UI, even if it is 1000 times slower?”. These essential questions don’t matter for the bulk of web developers. What matters is a paycheck and a quasi-religious sense of belonging to the users of this or that stack, preferably one generously handing out t-shirts (see the Hacktoberfest fiasco) and/or stickers.

                                                                                                                        Why write a clean, elegant piece of software in C or Pascal, well thought out with strong theoretical foundations, if you can hack together a buggy yet flashy version with deno and get thousands of GitHub stars? Who cares about code elegance… pfff… GitHub stars, man! That’s where it’s at!

                                                                                                                        1. 9

                                                                                                                          I can replace the word “web” with “C” and substitute the relevant misgivings with those from a 80’s die-hard assembler/ALGOL/Lisp programmer to make it sound like it was from 1991. Your comment would still be just as ridiculous as it is now.

                                                                                                                          1. 5

                                                                                                                            s/ridiculous/true/

                                                                                                                            :-P

                                                                                                                            1. 2

                                                                                                                              I strongly disagree with you there. C vs assembly actually provides a lot of benefits with minimal losses in regard to performance, and back in the 90’s, we didn’t have this hipster-culture around it that we see today with web-development. @pm is spot-on with his analysis, in my opinion.

                                                                                                                      1. 4

                                                                                                                        While the performance of static pages tends to be dominated by render-blocking network requests

                                                                                                                        In (maybe) the majority of cases I’d say static sites aren’t render blocking. If you go to your devtools and set throttling to 56kbps, then click a link on the orange website with >100 comments, you will see a useful section of the page load well before all of the HTML is downloaded; browsers doing this kind of trickery outweighs almost all JS performance magic in my experience. (I’d be interested in a rundown of these kinds of tricks and how to optimise for them if anyone has a link to hand.)

                                                                                                                        1. 2

                                                                                                                          The core of it is simple, but not easy: deliver enough information to start rendering. Practically, that means a short block of inline styles, followed by content, with no style or script tags that reference separate resources (put those after the main content).
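                                                                                                                          A minimal sketch of that ordering, with purely illustrative file names and content:

                                                                                                                          ```html
                                                                                                                          <!DOCTYPE html>
                                                                                                                          <html>
                                                                                                                          <head>
                                                                                                                            <title>Example</title>
                                                                                                                            <!-- short inline block: just enough CSS to render the first screen -->
                                                                                                                            <style>
                                                                                                                              body { margin: 0 auto; max-width: 40em; font: 16px/1.5 sans-serif; }
                                                                                                                            </style>
                                                                                                                          </head>
                                                                                                                          <body>
                                                                                                                            <main>
                                                                                                                              <h1>Article title</h1>
                                                                                                                              <p>The content can render before any external resource arrives.</p>
                                                                                                                            </main>
                                                                                                                            <!-- non-critical styles and scripts referenced after the main content -->
                                                                                                                            <link rel="stylesheet" href="extra.css">
                                                                                                                            <script src="widgets.js" defer></script>
                                                                                                                          </body>
                                                                                                                          </html>
                                                                                                                          ```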

                                                                                                                          1. 3

                                                                                                                            I wouldn’t overthink this. Just serve a simple HTML document and refer to an external stylesheet using “<link>” or “<?xml-stylesheet>” (rarely known but really cool) that is shared across the site. Once it has been loaded just once, every subsequent access of other pages yields instant styling, because the browser has cached it.

                                                                                                                            If you overthink it with inline styles and orderings, the browser won’t be able to leverage caching, and when you put style declarations at the end the browser has no chance to “invoke” the cached style until the HTML is fully loaded, which might make a difference if you have a really slow connection or a very large HTML-document (e.g. a large table).

                                                                                                                            And even the first load, which I mentioned earlier, won’t be harmed too much by the missing external CSS, given that browsers are optimized enough to immediately send a request (on an open keep-alive connection, which is the default) for the external CSS as soon as they “read” the “<link>”-tag. The CSS data will start streaming in after just one RTT at best, which is just a few ms, making it comparably fast to inline CSS, with the added bonus of the aforementioned possibility for caching.
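                                                                                                                            As a sketch (the stylesheet path and content are just examples): the same URL referenced from every page is what lets the browser serve it from cache on each subsequent load.

                                                                                                                            ```html
                                                                                                                            <!DOCTYPE html>
                                                                                                                            <html>
                                                                                                                            <head>
                                                                                                                              <title>Some page</title>
                                                                                                                              <!-- one shared stylesheet; requested as soon as the parser reads this
                                                                                                                                   tag, and served from cache on every later page load -->
                                                                                                                              <link rel="stylesheet" href="/styles.css">
                                                                                                                            </head>
                                                                                                                            <body>
                                                                                                                              <h1>Hello</h1>
                                                                                                                              <p>Styled by the cached, site-wide stylesheet.</p>
                                                                                                                            </body>
                                                                                                                            </html>
                                                                                                                            ```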