1. 2

    I wonder if the FCC’s ruling on one-touch make-ready would’ve been helpful for the author. I’d imagine that it’d be far more cost effective, and a quick Google Street View tour of his area looked like there are plenty of utility poles to use.

    1. 1

      It appears that one-touch wouldn’t be available in PA.

      1. 4

        That Wikipedia article is outdated after the FCC’s announcement in August.

        In this map the blue states are ones that could make their own regulations - the white ones would fall under the new federal regulations clearing the way for OTMR, I believe.

        It’s also an interesting question whether the language the FCC used would have an impact on those 20 self-regulating states as well:

        in a Declaratory Ruling, the FCC made clear that blanket state and local moratoria on telecommunications services and facilities deployment are barred by the Communications Act because they, in the language of Section 253(a), “prohibit or have the effect of prohibiting the ability of any entity to provide any interstate or intrastate telecommunications service.”

        1. 1

          Thank goodness, about time!

      1. 1

        Figured I’d link the more recent one, as neither appears to have been posted here before.

        1. 1

          All good. I just posted it for Lobsters’ convenience.

      1. 30

        LOL @ 100 MB being considered small. Just yesterday, I was working on my gui lib (which, granted, has zero mobile support, so totally unfair comparison.. but still) and I was like “eh, that 2.4 MB RAM usage is a little fat”.

        How low have standards fallen in the mobile world?

        1. 11

          This is the RAM their replacement for the Android simulator needs. So that’s including debugging/hot code swapping support, I suppose. I’d be interested in the non-debugging case.

          1. 8

            I’m skeptical: 2.4 MB of RAM is barely enough to contain the pixels for a single 640x480 window with double buffering (640 × 480 pixels × 4 bytes is about 1.2 MB per buffer, so roughly 2.4 MB for two). While the Flutter example could almost certainly use less memory, I think you’ve got some sort of accounting error here.

            1. 1

              I have a buffer for 500x500 there, since that was the window size in my test program, so that accounts for 1 MB… but that isn’t actually strictly necessary (and the operating system will have another for the desktop compositor, and the screen framebuffer itself, but those are on the OS side and thus are apples-to-apples for any user framework). You can redraw in response to the WM_PAINT messages too and avoid keeping the additional buffer. This isn’t an arcane technique - it is the way native desktop apps have basically always worked!

              My library is 150 KB, and actually uses native stuff so… that’s about all there is to it.

              1. 6

                You can redraw in response to the WM_PAINT messages too and avoid keeping the additional buffer. This isn’t an arcane technique - it is the way native desktop apps have basically always worked!

                You seem to be on Windows, so I want to clarify things a little bit:

                Yes, since Windows 7/Server 2008, Windows automatically keeps an off-screen buffer for you, and that’s what you’re actually drawing to when you respond to WM_PAINT. But (at least as of Windows 8) that buffer is subject to the same drawing rules that used to apply for direct screen rendering—specifically, if you’re drawing “directly” to the screen, then your double-buffered image can still shear, giving you exactly the same artifacts you used to get on Windows 3.1 if you didn’t double-buffer.

                (If that has changed, I’m curious how. My guess is they would implicitly do buffer-swaps at the conclusion of handling WM_PAINT.)

                1. 1

                  Well, what I meant there was you can just respond to the message and get a working program, but yeah, I use a bitmap buffer too and just blit that over to paint (and in the window size I had at the time, it was a ~1MB buffer).
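
                  For anyone curious, a minimal sketch of that blit (Win32, C); g_backbuffer, g_w and g_h are placeholders for whatever bitmap and size the app actually keeps:

                  #include <windows.h>

                  HBITMAP g_backbuffer;  /* the app's kept bitmap (placeholder) */
                  int g_w, g_h;          /* its size (placeholder) */

                  void on_paint(HWND hwnd)
                  {
                      PAINTSTRUCT ps;
                      HDC screen = BeginPaint(hwnd, &ps);   /* also validates the dirty region */
                      HDC mem = CreateCompatibleDC(screen);
                      HBITMAP old = (HBITMAP)SelectObject(mem, g_backbuffer);
                      BitBlt(screen, 0, 0, g_w, g_h, mem, 0, 0, SRCCOPY);  /* one blit per paint */
                      SelectObject(mem, old);
                      DeleteDC(mem);
                      EndPaint(hwnd, &ps);
                  }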

                  But this is a fraction of what flutter (and electron) devour.

            2. 5

              One big point of desktop apps is to not eat an imperial shit-ton of RAM, so this struck me too.

              Oh well, memory is relatively expensive so computers don’t ship with a lot of it, but people seem content with blowing it all on a browser and a browser-based app or two :(

              1. 1

                Well, RAM is plentiful… until you have 80 tabs. 80 * 1 MB is doable. 80 * 100 MB is suddenly memory pressure. And that isn’t that uncommon!

                1. 4

                  Err… my ancient desktop, which is so old it barely has USB3, got 32G without breaking financial sweat. This was back in the day, not current discount offers. But if I wanted to upgrade my current workstation from 8G I’d not only pay myself sick, but get in line to even buy anything.

                  Thanks, mobile phones, for eating up the world’s DDR4 supply, I guess.

                  Despite that, I’m a heavy user of tabs, famously sticking to an old and insecure browser for Tab Mix Plus + Tab Groups, until Mozilla unfucks its product.

                  I don’t have nearly all tabs loaded and occasionally I shut down the browser.

                  Not just to conserve memory but to keep me from being distracted while working with proper desktop apps.

                  1. 4

                    RAM isn’t exactly cheap today, however. I recently bought some more RAM for my PC, and it cost double what I paid for the same amount two years ago. It’s not just that it’s expensive; often other parts get prioritized over more RAM.

                    1. 1

                      Err… my ancient desktop, which is so old it barely has USB3, got 32G without breaking financial sweat.

                      On Desktops you can upgrade, but laptops are a different story. I have 32GB in my ThinkPad X220, but I can’t just upgrade the 16GB MBP that I am typing on. Similar to my phone, where upgrading RAM is not possible either. I guess I need to throw it away now and buy a new one…

              1. 5

                Using the assignment operator = will instead override CC and LDD values from the environment; it means that we choose the default compiler and it cannot be changed without editing the Makefile.

                This is true, but really, if you want to ensure variables are set in a Makefile, pass them as overrides (make ... CC=blah ...), don’t set them in the environment. The environment is a notoriously fragile and confusing way to specify all these things. (They certainly don’t work with the GCC and binutils stuff I work with on a regular basis!)

                My advice for Makefiles is: be explicit. It’s tedious and boring, but so much easier to debug.

                1. 4

                  The reason that setting things in the environment is fragile is because people follow advice to ignore the environment. It’s very useful for cross compilation to simply set the appropriate environment and go.
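
                  A minimal sketch of that (GNU make; the cross compiler name is just an example). With ?= the Makefile only supplies defaults, so whatever is in the environment wins:

                  % cat Makefile
                  # ?= assigns only if the variable isn't already set (e.g. from the environment)
                  CC ?= cc
                  CFLAGS ?= -O2

                  hello: hello.c
                          $(CC) $(CFLAGS) -o hello hello.c
                  % CC=arm-linux-gnueabihf-gcc make
                  arm-linux-gnueabihf-gcc -O2 -o hello hello.c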

                  1. 1

                    There’s also no easy way to pass arguments and options to a makefile except through environmental variables. You can also play games with the target, but there’s only so much you can do with that.

                    1. 2

                      I don’t believe that’s true. You can also pass macro definitions as arguments to make; e.g., make install PREFIX=/opt/tools

                      1. 1

                        Yes, overrides passed on the command line can be arbitrary expansions.

                        % cat Makefile
                        STUFF = 1 2 3 4
                        
                        all:
                                @echo $(FOO)
                        % make 'FOO=$(firstword $(STUFF))'
                        1
                        
                        1. 0

                          Yeah, but environment variables are turned into make variables in much the same way as variables passed after the make command. The only difference is that they also get placed in the environment of subcommands.

                          1. 2

                            I’m reasonably sure that is not true either. From my reading of the manual, an explicit assignment of a macro within the Makefile will override a value obtained from the environment unless you pass the -e flag to make. The manual suggests the use of this flag is not recommended. In contrast, a macro assignment passed on the command line as an argument will override a regular assignment within the Makefile.

                            Additionally, some macros receive special handling; e.g., $(SHELL), which it seems is never read from the environment as it would conflict with the common usage of that environment variable by user shells.

                            1. 2

                              As far as I can tell, they both get placed in the environment of subcommands. The manual is (as per many GNU manuals) unclear on the matter: “When make runs a recipe, variables defined in the makefile are placed into the environment of each shell.” My reading is that anything set in Make should be passed through, but this does not appear to be the case.

                              % cat Makefile
                              FOO = set-from-make
                              
                              all:
                                      @sh ./t.sh
                              % cat t.sh
                              echo "FOO is '${FOO}'"
                              % make
                              FOO is ''
                              % FOO=from-env make
                              FOO is 'set-from-make'
                              % make FOO=from-override
                              FOO is 'from-override'
                              
                              1. 1

                                IMO the GNU make manual is pretty clear on this.

                                https://www.gnu.org/software/make/manual/html_node/Values.html

                                Variables can get values in several different ways:

                                • You can specify an overriding value when you run make. See Overriding Variables.
                                • You can specify a value in the makefile, either with an assignment (see Setting Variables) or with a verbatim definition (see Defining Multi-Line Variables).
                                • Variables in the environment become make variables. See Variables from the Environment.
                                • Several automatic variables are given new values for each rule. Each of these has a single conventional use. See Automatic Variables.
                                • Several variables have constant initial values. See Variables Used by Implicit Rules.

                                https://www.gnu.org/software/make/manual/html_node/Overriding.html

                                An argument that contains ‘=’ specifies the value of a variable: ‘v=x’ sets the value of the variable v to x. If you specify a value in this way, all ordinary assignments of the same variable in the makefile are ignored; we say they have been overridden by the command line argument.

                                https://www.gnu.org/software/make/manual/html_node/Environment.html

                                Variables in make can come from the environment in which make is run. Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the ‘-e’ flag is specified, then values from the environment override assignments in the makefile. See Summary of Options. But this is not recommended practice.)

                                1. 1

                                  Yes, I don’t disagree with any of this and it’s consistent with usage. My point was about variables getting into the environment of shell commands in recipes. The wording suggests all variables are put into the environment, but based on the first result in the example that’s clearly not the case.

                                  1. 1

                                    Oh I see. The manual is less clear on that point:

                                    By default, only variables that came from the environment or the command line are passed to recursive invocations. You can use the export directive to pass other variables.

                                    It should probably say “passed to child processes through the environment” or something similar.

                                    $ cat Makefile
                                    VAR1='hi'
                                    export VAR2='hi'
                                    test:
                                            echo $$VAR1
                                            echo $$VAR2
                                    $ make
                                    echo $VAR1
                                    
                                    echo $VAR2
                                    'hi'
                                    
                                    
                    1. 4

                      Some of the things in the blog post like := or ?= don’t appear in the POSIX spec for make. Are they GNU-isms?

                      1. 7

                        Yes, along with $(shell ... ). The author should have mentioned he was using GNU Make.

                        1. 1

                          := is almost mandatory for makefiles. If you have a shell expansion it will get run every time unless you use :=. Many of the extensions in GNU make are simply unreproducible in POSIX make.
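
                          A quick sketch of the difference (GNU make, assuming uuidgen is on the PATH):

                          % cat Makefile
                          LAZY   = $(shell uuidgen)
                          EAGER := $(shell uuidgen)

                          all:
                                  @echo $(LAZY) vs $(LAZY)
                                  @echo $(EAGER) vs $(EAGER)

                          The LAZY line runs uuidgen once per expansion, printing two different values; EAGER ran it exactly once, at assignment, so both expansions print the same value.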

                        1. 1

                          What’s “n-gate”?

                          1. 9

                            I like how everyone dunks on C but like, we have four or five C alternatives right now that are viable, and numerous compilers and static analyzers and such that are fully capable of catching the bulk of problems that you have in C.

                            Not just that, but the problems they list are relatively simple to solve code-wise. Let’s take a look:

                            “use of uninitialized memory” -> just use calloc and memset on shit before you use it, and assign all of your variables default values. sparse or cppcheck are quite capable of catching these and will point them out for you.

                            “type confusion” -> Literally any sane static analyser will catch this (and I think GCC will too a lot of the time with -pedantic -Wall), but really you shouldn’t assign between variables of different types anyway, unless you’re using a bona-fide conversion function (like lrint and friends – which will point out bugs for you). Personally speaking I take this further and use large integer sizes and the appropriate size_t and friends wherever possible; anything else is just premature optimization TBH. Besides, the entire point of typedef is to guard against this sort of thing, though. Don’t use void *, use typedef XYZ * typename, and you will very rarely have this bug.

                            “use after free” -> This goes under “use of uninitialized memory”. Anything that can be NULL/-1, check it. Set to NULL/-1 when you free. Clang’s scan-build catches a lot of these and sparse and cppcheck are capable of catching the rest.
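
                            A small sketch of that discipline (names invented):

                            #include <stdlib.h>
                            #include <string.h>

                            int main(void)
                            {
                                char *buf = calloc(1, 256);   /* zeroed: no uninitialized reads */
                                if (!buf) return 1;
                                strcpy(buf, "hello");
                                free(buf);
                                buf = NULL;                   /* poison the pointer at free time */
                                if (buf)                      /* later uses check for NULL first */
                                    strcpy(buf, "use after free");
                                return 0;
                            }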

                            Also, from what I’ve read of security literature, most of the vulnerabilities come from things like, not sanitizing your input, allowing larger inputs than you have space for, etc. Those are programming problems that you can have in any language, including Python. While C does give you room to fuck up, it also gives you the tools to NOT do that. Use them.

                            Javascript is quite literally a bigger danger with regards to proliferation, pure language idiocy, and fuckup potential, because you actually cannot avoid the parts that are broken (some of which are generally considered to include Unicode handling and Arrays). People regard C as a loaded shotgun, and then go program in Javascript, which has almost an equivalent number of flaws, and which is beyond broken.

                            Not just that, but C had and continues to serve a (somewhat debatable) purpose in the embedded world, in kernel development, in drivers, and some other places. Javascript was arguably superseded when Scheme was invented, 20 years before Javascript was born.

                            1. 8

                              Good points on mitigations being available. On the last paragraph, Ada has been around a long time, too, with stuff written in it having fewer defects. Same with Modula-3 at one point. Newer stuff like Rust can do embedded. There was even a C/C++ implementation done in Scheme for its benefits. D and Nim are contenders with advantages, too.

                              C’s time has passed on technical grounds. Best replacement should still integrate well with its ecosystem, though, for practical reasons.

                              1. 3

                                On the last paragraph, Ada has been around a long time, too, with stuff written in it having fewer defects. Same with Modula-3 at one point. […]

                                Oh, indeed! However the main benefit to C as it is, is the lack of linguistic complexity. It’s easy to pick the Right Way To Do Things, there’s very little room for debate, except perhaps architecturally – i.e. where it matters. But in addition to that, the best feature of that linguistic simplicity is that a) it’s an easy language to remember, and b) it’s an easy language to hold in your head. It doesn’t require a ridiculously huge parser and it’s ‘easy’ to port (at least, it was, heh).

                                C’s time has passed on technical grounds.

                                I disagree :)

                                The main contender, Rust, not only has a ridiculously bloated stdlib (on par with Common Lisp’s in how lost you can get in it), but AFAIK still produces pretty large binaries. In addition it pushes a specific method of building on you, which really isn’t favourable to me.

                                Personally I’d like to see a systems-level language with the syntax of Lisp or Scheme and the philosophy of C, just with a more robust (but hackable) type system.

                                1. 3

                                  re linguistic complexity. There are many of you who say that. I think you all pick a subset you need with a coding style you can work with. That might keep it simple for you. The language itself isn’t simple as I said in the counter to vyodaiken. The people that were modeling and analyzing other languages took something like 40 years to do the same for subsets of C. Even experts like vyodaiken argue with other experts about the language details here on threads about pretty basic stuff. That doesn’t seem simple.

                                  re Rust criticisms. Thanks for sharing them. I know Rust isn’t the end all. If anything, there’s still room for stuff that’s more C-like and flexible to help folks that don’t like Rust. I keep mentioning languages like Cyclone and Clay to give them ideas.

                                  re Lisp/Scheme. There’s two I know of in that direction: PreScheme and ZL. ZL had the most potential if combined with C tooling. Website here. If it doesn’t already, its own implementation probably could be updated to better tie-into popular forms of Scheme like Racket and HtDP.

                                  1. 3

                                    re Lisp/Scheme. There’s two I know of in that direction: PreScheme and ZL

                                    I think bitc was quite promising too (from afar, I’ve never actually played with it). I don’t know what happened to it, its website seems down.

                                    1. 4

                                      There was Retrospective Thoughts on BitC in 2012. Mail archive is down too, but you can use Internet Archive.

                                      1. 1

                                        Thanks for that! I’ll have to take out some time to read it, it’s quite long.

                                      2. 3

                                        He was on a roll with EROS, COYOTOS, and BitC. Then, Microsoft poached him. (sighs) The best development in high-assurance on the language side in recent times is COGENT. They did two filesystems in it. Although the paper talks general purpose, O’ Connor showed up on Reddit and HN saying it’s only for stuff like filesystems. He wouldn’t answer follow-up questions about that. So, it’s interesting but of unknown usefulness.

                                      3. 1

                                        The language itself isn’t simple as I said in the counter to vyodaiken. The people that were modeling and analyzing other languages took something like 40 years to do the same for subsets of C.

                                        As I said, syntactically. The entire C grammar can fit in three pages. The base C library is like 20 pages and fits into the end of K&R next to the grammar. If you want more functions there’s POSIX, 99% of which is part of the base operating system.

                                        You’re right that C-the-implementation isn’t simple. But at that level there are very few simple things, anyway. There are lots of approaches to choose from implementation-wise, for threads, etc. And not to mention the reality of the machine underneath, which doesn’t give a crap about what you think about it.

                                        With regards to program verification, you are indeed correct, but I’d argue the main problem with that was that C ended up being subjected to the mutagens of almost every single platform of the 1970s to 1990s, very few of which were standardised. The standards committee ended up having to backwards-support everything. That’s ignoring the fact that in certain cases they make it deliberately more difficult to standardize for the sake of improving optimization, or allowing optimizations that already exist in the wild.

                                        I was mulling it over after I wrote the above, and I think it’s useful to adopt a view of C as being forged by two pressures. One: being quick to write a compiler for, and therefore relatively simple to understand how something was implemented (see: the macro system and the standard library – indeed, 99% of K&R is just teaching you C by reimplementing the standard library in tiny snippets of C). Two: being close enough to the machine that it’s easy to make optimization choices – you can generally (although it’s got harder with more advanced processors) figure out the machine code produced just by looking at the C source. That’s where C’s power lies, and it’s something that other languages really do not know how to capture.

                                        Like, it’s one thing to be able to say X is better than Y, but C lends itself really well to showing you why, I guess. And I don’t think we can find a replacement for C until we figure out a language that captures both of those features.

                                      4. 1

                                        In addition it pushes a specific method of building on you, which really isn’t favourable to me.

                                        tbh this is my major objection to rust as well. For all C’s build “process” gets maligned, it is very easy to swap in different tools.

                                        1. 1

                                          However the main benefit to C as it is, is the lack of linguistic complexity

                                          Say what now?

                                          https://hackernoon.com/so-you-think-you-know-c-8d4e2cd6f6a6

                                          1. 2

                                            I believe you are replying to the wrong person.

                                            1. 1

                                              Oops.

                                        2. -1

                                          However the main benefit to C as it is, is the lack of linguistic complexity. It’s easy to pick the Right Way To Do Things, there’s very little room for debate, except perhaps architecturally – i.e. where it matters.

                                          Excellent point and exactly why the Crappy Pascal initiative also known as the ISO C Standard has been so detrimental to C.

                                          1. 4

                                            I don’t know if you’ve ever worked with pre-ANSI C code, but given the choice between that and ANSI C, ANSI C wins if only for function prototypes.
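
                                            For anyone who hasn’t seen pre-ANSI code, a tiny before/after sketch of what prototypes bought us:

                                            /* K&R (pre-ANSI): parameter types aren't checked at call sites,
                                               so add("oops", 2) would historically compile without complaint */
                                            int add(a, b)
                                            int a, b;
                                            {
                                                return a + b;
                                            }

                                            /* ANSI prototype: mismatched arguments are now a compile-time error */
                                            int add_proto(int a, int b)
                                            {
                                                return a + b;
                                            }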

                                            What I don’t like about the standard is the tortured language so that everything from signed-magnitude to 2’s-complement, 8-bit to 66-bit, can be supported. That may have been a valid compromise for C89, but is less and less so as time goes on [1]. The major problem with C now is the compiler writers trying to exploit undefined behavior to increase speed, to the point that formerly valid C code now breaks.

                                            [1] Byte addressable, 2’s-complement won. Get over it C Standards Committee!
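
                                            And to make the undefined-behavior complaint concrete, the canonical sketch (what actually happens depends on compiler and flags): signed overflow is undefined, so the optimizer may assume it can’t happen and delete checks that rely on wrapping.

                                            /* formerly a common overflow check; gcc/clang at -O2 may fold
                                               this to 0, because signed overflow "cannot happen" */
                                            int will_wrap(int x)
                                            {
                                                return x + 1 < x;
                                            }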

                                            1. 1

                                              The major problem with C now is the compiler writers trying to exploit undefined behavior to increase speed, to the point that formerly valid C code now breaks.

                                              I think the major problem is people compiling with -O3 and then complaining that compilers are trying to make their broken code fast.

                                              1. 0

                                                The standard is full of contradictions and obscurities. Compiler writers treating the standard as if it were some shrink wrap contract that they could exploit to evade every obligation to their users is simply wrong. The code you complain is “broken” is not even broken according to the standard and is common in things like K&R2. It’s ridiculous to claim that exploiting loopholes in murky standard written by a committee that seems to have no idea what they are doing is somehow justifiable.

                                              2. 1

                                                You may get your wish soon. From https://herbsutter.com/2018/11/13/trip-report-fall-iso-c-standards-meeting-san-diego/:

                                                … all known modern computers are two’s complement machines, and, no, we don’t really care about using C++ (or C) on the ones that aren’t. The C standard is likely to adopt the same change.

                                                1. 0

                                                  I think K&R2 is basically C at its best and that is ANSI. I even like restrict, although God knows the description of it in the standard reads like the authors were typing it on their phones while working at the Motor Vehicle Bureau. But there are programs in K&R2 that are not valid programs according to the current interpretation of the current incarnation of the standard.

                                          2. 3

                                          Besides, the entire point of typedef is to guard against this sort of thing, though.

                                          The big flaw in typedef is that there is implicit type conversion between e.g. metric x and english y (given typedef int metric; typedef int english;) that permits x = y etc. There should be a flag in C to give warnings on all implicit type conversions, and a strong typedef (although the struct method works well too).
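
                                          A sketch of both halves of that (plain typedefs convert silently; the struct trick makes the compiler object):

                                          typedef int metric;
                                          typedef int english;

                                          typedef struct { int v; } metric_t;    /* distinct struct types */
                                          typedef struct { int v; } english_t;

                                          int main(void)
                                          {
                                              metric x = 1;
                                              english y = 2;
                                              x = y;            /* accepted: both are just int underneath */

                                              metric_t mx = {1};
                                              english_t ey = {2};
                                              /* mx = ey; */    /* error: incompatible types */
                                              mx.v = ey.v;      /* any conversion must now be spelled out */
                                              return x + mx.v;
                                          }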

                                            1. 3

                                              To solve this thing, just do that thing!

                                              If it’s so simple and yet so easy to forget… why don’t we just automate it? 😉

                                              1. 0

                                                why don’t we just automate it?

                                                If you read what I have written, that is what I said. scan-build+sparse+cppcheck+valgrind will catch 99% of the errors mentioned in the article, and they take only about 5 seconds to run.

                                                1. 7

                                                  Sounds to me like you’re choosing a language that requires boilerplate, then installing tools to scan for missing boilerplate.

                                                  For something as important as memory safety, it seems shortsighted to arrive at such a solution. But to each their own: if a certain workflow helps you produce safe code, then I won’t complain it puts the cart before the horse.

                                                  1. 3

                                                    Valgrind requires you to exhaustively test your application. That’s not a five-second job.

                                                2. 1

                                                  Nulling things only works if you have a single pointer to the memory, and pass that pointer by reference when you need it.

                                                1. 1

                                                  Why use easyrsa over, for example, GnuPG, which is probably already installed?

                                                  1. 1

                                                      I use easyrsa because it is the tool recommended by OpenVPN and developed by them. It is also really easy to use, even if it does not provide the best algorithms available.

                                                  1. 2

                                                    I actually had no idea Linus had worked on a Quake 3 port.

                                                    EDIT: The dates don’t make sense. It can’t have been Quake 3.

                                                    1. 9

                                                      Reusing a joke from last year’s Reddit thread: hur hur, he meant Quake 2 but was using -ffast-math

                                                      1. 3

                                                        The Linux release of quake 3 came out in 1999, which fits the timeline…

                                                        1. 2

                                                          Me neither. But thinking about it: it’s circa 2001 and you’re after a Linux expert, who are you going to call? :D

                                                          I’m trying to come up with a good linux LKML and quake 3 crossover joke, but I’m worried I’ll get gibbed.

                                                        1. 2

                                                          Been working on a sketch of a pen-and-paper rpg. Still in dire need of balance due to the amount of playtesting (none) which has been done.

                                                                Also finished Brandon Sanderson’s Stormlight Archive series.

                                                          On a more programming related note, I’ve been using Haskell to model Factorio’s circuit network.

                                                          1. 2

                                                            screenshot

                                                            Using xfce (with metabox), as I have for most of the time I have been using linux. One panel up top with window buttons and various dials and gizmos. Front and center is Firefox. I recently upgraded my ram, and have been spending most of it on even more open tabs. I have a fairly standard array of addons (at least for this site), the notable exceptions being FoxyProxy for when I occasionally need to use a proxy, and RES for my favorite non-lobsters link aggregation site. On my second monitor is KVIrc, which I have recently gone back to after years of using different irc clients. I haven’t configured the theme to my liking yet, and it is honestly a bit more colorful than I prefer. In my other workspace I have my terminal, which is relatively unmodified from the default xfce4 terminal, except for the colorscheme (solarized dark). At some point I may move to a tiling wm, but I haven’t had the impetus to make the jump yet.

                                                            1. 1

                                                              Thunderbird on desktop, K9Mail on phone.

                                                              1. 4

                                                                That’s really cool! PXE boot seemed like a very neat concept but the setup instructions (like four different daemons all written by different people that have to be configured to cooperate perfectly) always looked way too daunting.

                                                                Might be better to link to the README file?

                                                                I’m very impressed by the API server part, makes it much easier to script things.

                                                                1. 1

                                                                  What are your four daemons? Normally it’s 3 (dhcp, tftp, and nfs), but dnsmasq reduces that to two.

                                                                  1. 1

                                                                          I wasn’t being precise with the count. :p

                                                                    Edit: though if you want anything to actually work nicely, the fourth is going to be a httpd because TFTP is not very good at moving data.

                                                                    1. 1

                                                                      I guess Forty-Bot was using NFS for that

                                                                      1. 1

                                                                        The way I had it set up, the computer boots, gets an IP, and transfers a kernel image over tftp. That kernel boots and mounts root via NFS. You could conceivably use any networked filesystem in place of that.
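
                                                                              With dnsmasq the server side can be a handful of config lines; a rough sketch (addresses and paths invented):

                                                                              # /etc/dnsmasq.conf
                                                                              dhcp-range=192.168.0.50,192.168.0.150,12h
                                                                              dhcp-boot=pxelinux.0          # what the client fetches over TFTP
                                                                              enable-tftp
                                                                              tftp-root=/srv/tftp

                                                                              Then the kernel mounts its root over NFS via command-line options like root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot ip=dhcp.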

                                                                1. 20

                                                                  “(For the record, I’m pretty close to just biting the bullet and dropping $1800 on a Purism laptop, which meets all my requirements except the fact that I’m a frugal guy…)”

                                                                                One more thing to consider: vote with your wallet for ethical companies. One of the reasons all the laptop manufacturers are scheming companies pulling all kinds of bloatware, quality, and security crap is that most people buy their stuff. I try where possible to buy from suppliers that act ethically to customers and/or employees, even if it costs a reasonable premium. A recent example was getting a good printer at Costco instead of Amazon where the price was similar. I only know of two suppliers of laptops that try to ensure user freedom and/or security: MiniFree and Purism. For desktops, there’s Raptor but that’s not x86.

                                                                  Just tossing the philosophy angle out there in case anyone forgets we as consumers contribute a bit to what kind of hardware and practices we’ll see in the future every time we buy things. The user-controllable and privacy-focused suppliers often disappear without enough buyers.

                                                                  1. 10

                                                                    One more thing to consider: vote with your wallet for ethical companies

                                                                    Don’t forget the ethics of the manufacturing and supply chain of the hardware itself. I would imagine that the less well-known a Chinese-manufactured brand is the more likely it is to be a complete black box/hole in terms of the working conditions of the people who put the thing together, who made the parts that got assembled, back to the people who dug the original minerals out of the ground.

                                                                    I honestly don’t know who (if anyone) is doing well here - or even if there’s enough information to make a judgement or comparison. I think a while back there was some attention to Apple’s supply chain, I think mostly in the context of the iPhone and suicides at Foxconn, but I don’t know where that stands now - no idea if it got better, or worse.

                                                                    1. 6

                                                                      Apple has been doing a lot of work lately on supplier transparency and working conditions, including this year publishing a complete list of their suppliers, which is pretty unusual. https://www.apple.com/supplier-responsibility/

                                                                      1. 1

                                                                        Technically their list of suppliers covers the top 98% of their suppliers, so not a complete list, but still a very good thing to have.

                                                                        1. 1

                                                                          Most other large public companies do that too, just not getting the pat on the back as much as Apple.

                                                                          http://h20195.www2.hp.com/v2/getpdf.aspx/c03728062.pdf

                                                                        2. 2

                                                                                    You both brought up a good concern and followed up with the reason I didn’t include it. I have no idea who would be doing well on those metrics. I think cheap, non-CPU components, boards, assembly and so on are typically done in factories of low-wage workers in China, Malaysia, Singapore, etc. When looking at this, the advice I gave was to just move more stuff to Singapore or Malaysia to counter the Chinese threat. Then, just make the wages and working conditions a bit better than they are. If both are already minimal, the workers would probably appreciate their job if they got a little more money, air conditioning, some ergonomic stuff, breaks, vacations, etc. At their wages and high volume, I doubt it would add a lot of cost to the parts.

                                                                        3. 9

                                                                          Funnily enough

                                                                          The Libreboot project recommends avoiding all hardware sold by Purism.

                                                                          1. 5

                                                                                      Yeah, that is funny. I can’t knock them for not supporting backdoored hardware, though. Of the many principles, standing by that one makes more sense than most.

                                                                            1. 1

                                                                              Correct me if I’m wrong, but I thought purism figured out how to shut down ME with an exploit? Is that not in their production machines?

                                                                            2. 3

                                                                              I agree, which is why I bought a Purism laptop about a year ago. Unfortunately, it fell and the screen shattered about 5 months after I got it, in January of this year. Despite support (which was very friendly and responded quickly) saying they would look into it and have an answer soon several times, Purism was unable to tell me if it was possible for them to replace my laptop screen, even for a price, in 6 months. (This while all the time they were posting about progress on their phone project.) Eventually I simply gave up and bought from System76, which I’ve been very satisfied with. I know they’re not perfect, but at least I didn’t pay for a Windows license. In addition my System76 laptop just feels higher quality - my Librem 15 always felt like it wasn’t held together SUPER well, though I can’t place why, and in particular the keyboard was highly affected by how tight the bottom panel screws were (to the point where I carried screwdrivers with me so I could adjust them if need be).

                                                                              If you want to buy from Purism, I really do wish you the best. I truly hope they succeed. I’m not saying “don’t buy from Purism”; depending on your use case you may not find these issues to be a big deal. But I want to make sure you know what you’re getting into when buying from a very new company like Purism.

                                                                              1. 1

                                                                                Great points! That support sounds like it sucks to not even give you a definitive answer. Also, thanks for telling me about System76. With what Wikipedia said, that looks like another good choice for vote with your wallet.

                                                                              2. 2

                                                                                Raptor but that’s not x86

                                                                                Looks like it uses POWER, which surprised me because I thought that people generally agreed that x86 was better. (Consoles don’t use it anymore, Apple doesn’t use it, etc)

                                                                                Are the CPUs that Raptor is shipping even viable? They seem to not have any information than “2x 14nm 4 core processors” listed on their site.

                                                                                1. 4

                                                                                    The FAQ will answer your questions. The POWER9 CPUs they use are badass compared to what’s in consoles, the PPCs Apple had, and so on. They go head to head with top stuff from Intel in the enterprise market, mainly sold for outrageous prices. Raptor is the first time they’re available in $5000 or below desktops. Main goal is hardware that reduces risk of attack while still performing well.

                                                                                1. 5

                                                                                  +1. Everyone else seems to love GnuCash.

                                                                                  1. 4

                                                                                    I recently switched from GnuCash, which did work but felt unpolished and unreliable in edge cases, to Beancount, and why did I not do that sooner. I feel at home in my editor with autocomplete, regex search, bean-check for linting, bean-format for formatting and git for tracking (!!). Refactoring is as easy as with code, while it was the most manual and painful thing in GnuCash. Python plugins are a breeze and I already made a few while I never surpassed the inertia of automating GnuCash. The web UI makes reports more intuitive than anything in GnuCash, and there’s an SQL language for custom stuff. And finally, I can assert more things I care about and leave flexible more things I don’t. Try Beancount (or any other ledger port I suppose, but yay Python plugins).
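
                                                                                        For anyone who hasn’t seen it, the plain text in question is just entries like these (accounts and numbers invented):

                                                                                        2018-01-01 open Assets:Checking
                                                                                        2018-01-01 open Expenses:Food:Groceries

                                                                                        2018-11-20 * "Grocery store" "Weekly shop"
                                                                                          Expenses:Food:Groceries   42.50 USD
                                                                                          Assets:Checking

                                                                                        bean-check verifies that every transaction balances; the amount left off the last leg is inferred automatically.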

                                                                                  2. 2

                                                                                    why not use sqlite or something?

                                                                                    1. 3

                                                                                      Because plain text is much simpler than SQL. You can just open up the file in $EDITOR and start editing, instead of having to run SQL over it to modify.

                                                                                      1. 1

                                                                                        But then if you want to do even the simplest things like sum by month you need to write a text parser first. I get that awk and stuff lets you do things with “tricks” but sqlite would let you do it without worrying about whitespace.

                                                                                        I can understand text-based input, but if you’re trying to track the flow of money by sources in a “real” way it seems pretty logical to use relations

                                                                                            EDIT: this isn’t necessarily pro-SQL, but it is pro-“structured data instead of worrying about escape characters in the format you defined”

                                                                                        Plain text is nice when you have an underspecified format but if you want to actually operate on it, it’s kinda gnarly

                                                                                        1. 5

                                                                                              EDIT: this isn’t necessarily pro-SQL, but it is pro-“structured data instead of worrying about escape characters in the format you defined”

                                                                                          Actually the data is very tightly structured. Here’s what Beancount allows. Any deviations are reported as errors.

                                                                                          But then if you want to do even the simplest things like sum by month you need to write a text parser first.

                                                                                          Generally the tool you’re using takes care of it. I’m using Beancount+Fava and it shows me pretty much every single metric that’s interesting out of the box. For everything else, it allows me to query the “database” using a SQL-like interface.

                                                                                          If you’re interested, I wrote a blog post on exactly this topic last week which could be relevant.

                                                                                  1. 9

                                                                                                They make it clear; they signed on to participate in a meritocracy with reputation rewards, and they think that is being taken away from them.

                                                                                    Oh look, we’re back at that meritocracy bullshit, and asserting that people develop the kernel for some imaginary reputation rewards. I’m sure there’s some pride in working on the kernel, but that being a significant factor? C’mon.

                                                                                    This is nothing more than another “Look at me, I’m still relevant!!1!” take by ESR, dressed up in fancy words to sound legit.

                                                                                    1. 1

                                                                                      Oh look, we’re back at that meritocracy bullshit

                                                                                      What BS? As far as I was aware, the kernel was perhaps the most meritocratic institution we have today. If there’s any project which ticks those boxes, it’s linux.

                                                                                      I’m sure there’s some pride in working on the kernel, but that being a significant factor?

                                                                                      Why do you think they do it? Getting paid doesn’t count, as the kernel had much development before people started to get paid for it.

                                                                                      This is nothing more than another “Look at me, I’m still relevant!!1!” take by ESR, dressed up in fancy words to sound legit.

                                                                                      I have to admit, there does seem to be a bit of that feeling on his blog lol

                                                                                      1. 1

                                                                                        What BS? As far as I was aware, the kernel was perhaps the most meritocratic institution we have today. If there’s any project which ticks those boxes, it’s linux.

                                                                                        Oh, but it doesn’t. Unless you’re part of the small elite, it’s nowhere near “meritocratic”. There have been many, many takes on this topic, like this in 2009, or this from 2014, with plenty of other links. They may not specifically mention the kernel, but it applies there just as well.

                                                                                        Getting paid doesn’t count, as the kernel had much development before people started to get paid for it.

                                                                                        Because they had an itch to scratch, or enjoyed working in a particular area. Pride and reputation rewards (whatever those may be) are not sustainable motivators, and never were. For very few people, maybe. For ESR, most likely - but ESR has little to do with the kernel. (Perhaps there’s a correlation there, somewhere, hm… :P)

                                                                                    1. 8

                                                                                                      I was reading through an earlier thread on the Linux CoC situation, and I noticed that many lobsters were fairly dismissive of CoC detractors as being nothing more than outsiders engaging in a proxy conflict on lkml. For example, /u/gerikson writes

                                                                                      This entire issue has been hijacked by the alt-right/gamergate crowd as another front in their “culture war”.

                                                                                      While many of the commenters on the issue are not members of the community (on both pro- and anti-CoC sides), there are some legitimate criticisms of the new CoC which should not be dismissed out of hand. ESR hits the nail on the head in this post about why some people are apprehensive about all the changes, and why it is important to address criticism without lumping all opposition into one group. I think this is perhaps the most level-headed take on the situation I’ve seen this far.

                                                                                      1. 17

                                                                                        In my comment, I was referring specifically to the linked post from lulz.com. It had been literally spammed over multiple channels in what I consider a concerted attempt by its backers to create controversy. It was not argumentation in good faith, but an attempt to leverage reasonable critique and doubt of the CoC into outrage and dissent.

                                                                                        I wrote in some anger and frustration and should probably have moderated my comment to not exclude reasonable critiques of the CoC, and the specific situation and history of the LKML.

                                                                                        Edit to expand, I am not a Linux kernel developer. My interest and fascination/disgust with the debate around the CoC is to see the same motivations, prejudices and methods from “GamerGate” being used in this campaign against the CoC - and probably from the same people and “communities”. That’s what I meant by using the term “hijack”.

                                                                                        1. 2

                                                                                          In my comment, I was referring specifically to the linked post from lulz.com.

                                                                                          Yeah, I agree that that is probably the most egregious example of an outside influence sparking conflict. I mostly used your comment because it was the most straightforward instance of the mentality I found throughout the whole thread.

                                                                                      1. 19

                                                                                        Don’t mistake ‘couched in neutral language’ for ‘not trying to convince you’. ESR is trying to pretend these assumptions are true, and the only relevant assumptions:

                                                                                        • that a group’s unwritten rules (ethoses) come exclusively from a group’s shared purpose (telos)
                                                                                        • that successful groups have only the minimum of rules needed to achieve the shared purpose
                                                                                        • that you can only judge a group by its success in its shared purpose

                                                                                        Notice the implication: that group rules ostensibly do not come from shared values, or from balancing the group’s purpose with the needs of its members.

                                                                                        ESR then invites you to ask which shared rules do and don’t flow from these assumptions. He already knows the answer to that. He does not mention his desired outcome, but is pushing for it via ‘framing’: introducing assumptions that lead there, and hoping you don’t question them.

                                                                                        His argumentation entirely fails to address the actual questions at hand:

                                                                                        • what are the benefits and drawbacks, to the shared purpose and to group members, of adding the proposed code of conduct to our (un)written rules?
                                                                                        • what are the benefits and drawbacks, to the shared purpose and to group members, of the current situation?
                                                                                        1. 1

                                                                                          that group rules ostensibly do not come from shared values, or from balancing the group’s purpose with the needs of its members.

                                                                                                              Isn’t that where the telos comes from? The telos of the linux project is probably at its most basic just “create a kernel,” but further expansions include consideration of its members. For example, it could also be “create a FOSS kernel,” which carries with it considerations for the needs of members in the form of recognition and the freedoms associated with FOSS software. Additionally, shared values shape the telos. There is a good chance that Linus might never have open-sourced the linux kernel had he not had a community with a strong preference for FOSS projects. Of course, yes, I do think he is “framing the question.”

                                                                                          • what are the benefits and drawbacks, to the shared purpose and to group members, of adding the proposed code of conduct to our (un)written rules?

                                                                                          Isn’t the whole ethos/telos discussion for creating a framework to evaluate this? He explicitly directs the reader to use this framework to make a decision. Although he didn’t address this directly, I don’t think it’s fair to say he “entirely fails to address” this.

                                                                                          what are the benefits and drawbacks, to the shared purpose and to group members, of the current situation?

                                                                                          I think his general tone implies that he dislikes all the conflict, but I’d be hesitant to read further in than that.

                                                                                          1. 1

                                                                                                                There is a good chance that Linus might never have open-sourced the linux kernel had he not had a community with a strong preference for FOSS projects. Of course, yes, I do think he is “framing the question.”

                                                                                            This is in fact precisely correct: Linus did not originally distribute the kernel under an open source license but a non-commercial license. I believe he took a fair bit of convincing to switch to the GPL, and the convincing was from free software people - the ‘open source’ movement didn’t really exist at that stage from what I understand.

                                                                                            1. 1

                                                                                              Isn’t the whole ethos/telos discussion for creating a framework to evaluate this? He explicitly directs the reader to use this framework to make a decision.

                                                                                                                  Many of the calls for a code of conduct have mentioned, with reference to specific cases, how asshole behaviour has affected individuals. To present a framework that includes telos as something to consider, but really obviously omits the wellbeing of individuals (group members) – that is a framework that would make us ignore those arguments instead of addressing them. Whether he’s pushing this broken framework from incompetence or knowingly… I know he self-identifies as ‘anti social justice’, so I suspect the latter, but you can make up your own mind. His motivations aside: ESR’s framework is incomplete, and people should not use it.

                                                                                          1. 7

                                                                                            And now, having read this, please continue using package managers and pulling in dependencies from Github.

                                                                                            :^)

                                                                                            1. 2

                                                                                              You are confusing two different things. Package managers can be very good for security when feeding from distributions that implement a vetting process, security reviews, package signing and ongoing security support.

                                                                                              1. 2

                                                                                                I mean, theoretically, yes. In actuality, we have left-pad.

                                                                                                1. 1

                                                                                                  Don’t use node? My distro’s package manager is doing fine…

                                                                                            1. 9

                                                                                              I self-host. Pretty easy with sovereign. Or if you want to use NixOS: simple-nixos-mailserver

                                                                                              Definitely worth it, even just for learning how email works.

                                                                                              1. 1

                                                                                                Have you encountered any problems with sent mail being caught in spam? that’s one of the most common problems I’ve heard about with self-hosting.

                                                                                                1. 1

                                                                                                                                Yeah, but it’s not so bad after you set up the DKIM etc. records properly. The sovereign README has instructions on how to do all that. The situation improves as the age of your domain increases too, I think.
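
                                                                                                                                For reference, the records in question are plain DNS TXT entries along these lines (domain and selector are examples; the DKIM public key comes from whatever does your signing, e.g. opendkim):

                                                                                                                                example.com.                  TXT  "v=spf1 mx -all"
                                                                                                                                mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key data>"
                                                                                                                                _dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"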