1. 1

    Next level: port electron to windows 95, run Slack on this.

    1. 3

      Ironically, the maker of the win95-in-electron hack works at… slack. https://github.com/felixrieseberg

      1. 1

        That would be quite a hack. I doubt Electron could even be made to run on Windows 95. Once Windows 98 came out, Win95 was all but forgotten by 99% of the computing world in short order. I would guess that most programs of the pre-Win7 era that are still actually useful have roughly this level of support:

        • Windows XP: probably? maybe?
        • Windows 98: not likely
        • Windows 95: lololol
        1. 1

          Pre-Win7 would have been Windows Vista. Nearly all programs developed on Vista should have run on Windows XP. Typically you’re going to want to target the current release and at least the last major release. I think you’re correct about 98 and 95, though. Even today, compiling C++ with Visual Studio 2017, I can target Windows 7, although I think by default you only get to target Windows 10 and Windows 8.x
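
          For context, this kind of minimum-version targeting is partly controlled by the Windows SDK version macros, defined before any Windows header is included. A minimal sketch (0x0601 is the standard sdkddkver.h value for Windows 7; this is the general convention, not something specific to the build discussed here):

          ```c
          /* Target Windows 7 as the minimum supported version: define the
           * version macros before any Windows header is included.
           * 0x0601 == Windows 7, 0x0602 == Windows 8, 0x0A00 == Windows 10. */
          #define _WIN32_WINNT 0x0601
          #define WINVER 0x0601
          #include <windows.h>
          ```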

      1. 2

        I need to look at the open source options. Back when Schwartz was the Sun CEO and I was a Java fanboy, when I wanted to make a cross-platform, easy-to-install GUI application, my choice was obvious. Swing is still my favorite widget toolkit.

        One time I tried to make a GUI app in F#, only to discover that even Visual Studio didn’t have real integration with GUI building tools for it. I haven’t tried since then, but if I can get it to work with Mono, I guess I’ll be happy.

        1. 2

          There’s always Gtk#. It appears to be maintained. I haven’t used it so I’m just suggesting it as another option.

          1. 1

            Yes, I know of GTK#. My feeling is that it combines the disadvantages of both: if the end user on Windows has to install GTK anyway, you can just make native executables in any language of your liking.

            1. 1

              That comes down to packaging. Xamarin Studio is a complex GTK# app that is packaged such that you don’t really know what UI toolkit it uses.

              1. 1

                Yes, you can definitely package anything together so that the user doesn’t have to install any dependencies by hand, but… If you are making a small app, making a package many megabytes in size is likely to just scare Windows people off. UNIX people are unlikely to be fascinated by the prospect of installing a .Net implementation for dependencies just for one app (or even installing it at all), but at least most of them understand the why’s.

        1. 7

          A better test bed. Although my work focuses on developing programs on Linux, I will try to compile and run applications on OpenBSD if possible.

          I feel like the lack of valgrind does hurt OpenBSD as a testbed. I know there’s malloc.conf(5), but that doesn’t seem to help much in the case of, say, out of bounds access of a stack-allocated variable.

          a) Patches. Although most of them are trivial modifications, they are still my contributions.

          Don’t claim it’s just trivialities. The small things and adding polish is what really makes OpenBSD stand out (or any software project, really), and every “trivial” modification helps.

          1. 14

            OpenBSD does have Valgrind.

            1. 4

              I stand corrected. Oops. Thank you.

              1. 1

                What about ASan and the other sanitizers?

            1. 14

              Microsoft lets you download a Windows 10 ISO for free now; I downloaded one yesterday to set up a test environment for something I’m working on. With WSL and articles like this, I thought maybe I could actually consider Windows as an alternative work environment (I’ve been 100% some sort of *nix for decades).

              Nope. Dear lord, the amount of crapware and shovelware. Why the hell does a fresh install of an operating system have Skype, Candy Crush, OneDrive, ads in the launcher, and an annoying voice assistant who just starts talking out of nowhere?

              1. 5

                I’ll give you ads in the launcher – that sucks a big one – but Skype and OneDrive don’t seem like crapware. Mac OS comes with Messages, FaceTime and iCloud; it just so happens that Apple’s implementations of messaging and syncing are better than Microsoft’s. Bundling a messaging program and a file syncing program seems helpful to me, and Skype is (on paper) better than what Apple bundles because you can download it for any platform. It’s a shame that Skype in particular is such an unpleasant application to use.

                1. 3

                  It’s not even that they’re useful, it’s that they’re not optional. I’m bothered by the preinstalled stuff on Macs too, and the fact that you have to link your online accounts deeply into the OS.

                  I basically am a “window manager and something to intelligently open files by type kinda guy.” Anything more than that I’m not gonna use and thus it bothers me. I’m a minimalist.

                  1. 2

                    I am too, and I uninstall all that stuff immediately; Windows makes it very easy to remove it. “Add or Remove Programs” lets you remove Skype and OneDrive with one click each.

                2. 2

                  Free?? I guess you can download an ISO, but a license for Windows 10 Home edition is $99. The better editions are even more. WSL doesn’t work on Home either; I think you need Professional or a higher edition.

                  1. 2

                    It works on Home.

                    1. 1

                      Yup. Works great on Home according to this, except for Docker, which needs Hyper-V support.

                      https://www.reddit.com/r/bashonubuntuonwindows/comments/7ehjyj/is_wsl_supported_on_windows_10_home/

                  2. 1

                    I always forget about this until I have to rebuild Windows and then I have to go find my scripts to uncrap Windows 10. Now I don’t do anything that could break Windows because I know my scripts are out of date.

                    It’s better since I’ve removed all the garbage, but holy cats that experience is awful.

                  1. 2

                    I thought this was going to be about PCC but it appears to be something else? Actually some of this PDF seems to suffer from a very poor scan and it’s difficult to read. PCC is still under active development although it is moving very slowly.

                    1. 3

                      This compiler by Alan Snyder is a different one, yeah, which predates the PCC by a few years, but didn’t really live on. Snyder’s compiler does seem to have influenced PCC, though. The 1978 report announcing PCC says:

                      A number of earlier attempts to make portable compilers are worth noting. While on CO-OP assignment to Bell Labs in 1973, Alan Snyder wrote a portable C compiler which was the basis of his Master’s Thesis at M.I.T. This compiler was very slow and complicated, and contained a number of rather serious implementation difficulties; nevertheless, a number of Snyder’s ideas appear in this work.

                    1. 1

                      Are there any benchmarks for this?

                      1. 2

                        Only thing I see is the performance graph on the author’s page here: https://kristaps.bsd.lv/kcgi/

                        1. 1

                          I’ve long wanted to update these with some good measurements against, say, PHP. (And on OpenBSD, too.) It’s important to have a solid measure of the performance trade-off between CGI with a compiled binary and the FastCGI clones (Python’s, PHP’s, etc.) alongside the security benefits of ephemeral processes.

                          1. 1

                            Wow. Thanks for that.

                            15msec response sounds like an eternity. My server responds in micros over loopback, so what’s going on?

                            Is there an easy way to test this?
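
                            One quick way to get a first number, assuming curl is available (the URL below is a placeholder for whatever endpoint is under test):

                            ```shell
                            # Time a single request over loopback; -w prints curl's timing variables.
                            # The URL is a placeholder; point it at the server under test.
                            curl -o /dev/null -s \
                                 -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
                                 http://127.0.0.1:8080/
                            ```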

                        1. 2

                          I thought it interesting that UrWeb is in the top 5; it’s a whole web system built from scratch, more or less, with a very small community. I have no idea what this tests, though.

                          1. 2

                            It appears that the core of UrWeb uses good old blocking I/O and is heavily pthreaded.

                            https://github.com/urweb/urweb/tree/master/src/c

                            Most other servers in the benchmark use event based libraries like libuv, libevent, etc. I think it’s interesting to see how far you can get using good “old fashioned” threads and architecture that was more popular in the late 90s/early 2000s.

                            1. 1

                              I think it depends on what it’s doing. I’m not really sure what the actual requests are for this test.

                              1. 1

                                Yeah, it’s certainly odd, or at least such a different approach from the other frameworks. They are basically running the code in http.c, which has this comment:

                                qprintf("Starting the Ur/Web native HTTP server, which is intended for use\n"
                                      "ONLY DURING DEVELOPMENT.  You probably want to use one of the other backends,\n"
                                      "behind a production-quality HTTP server, for a real deployment.\n\n");
                                

                                I’d like to run the benchmark locally, but no time yet.

                          1. 5

                            As exciting as this is, I’m wary about depending on GNU tools, even though I understand that providing an OpenBSD-culture-friendly implementation would require extra work and could be a maintenance nightmare, with two different codebases for shell scripts. But perhaps gmake could be replaced with something portable.

                            1. 12

                              This version of Wireguard was written in Go, which means it can run on exactly 2 (amd64, i386) of the 13 platforms supported by OpenBSD.

                              The original Wireguard implementation written in C is a Linux kernel module.

                              A dependency on gmake is the least of all portability worries in this situation.

                              1. 18

                                While it’s unfortunate that Go on OpenBSD only supports 386 and amd64, Go does support more architectures that are also supported by OpenBSD, specifically arm64 (I wrote the port), arm, mips, and power. I have also implemented Go support for sparc64, but for various reasons this wasn’t integrated upstream.

                                Go also supports power, and it used to run on the power machines supported by OpenBSD, but sadly now it only runs on more modern power machines, which I believe are not supported by OpenBSD. However, it would be easy to revert the changes that require more modern power machines. There’s nothing fundamental about them, just that the IBM maintainer refused to support such old machines.

                                Since Go supports both OpenBSD and the architectures mentioned, adding support in Go for OpenBSD+$GOARCH is only about a few hours of work, so if there is interest there would not be any problem implementing this.

                                I can help and offer advice if anyone is willing to do the work.

                                1. 3

                                  Thanks for your response! I didn’t know that go supports so many platforms.

                                  Go support for sparc64, but for various reasons this wasn’t integrated

                                  Let me guess: Nobody wanted to pay the steep electricity bill required to keep a beefy sparc64 machine running?

                                  1. 23

                                    No, that wasn’t the problem. The problem was that my contract with Oracle (who paid me for the port) had simply run out of time before we had a chance to integrate.

                                    Development took longer than expected (because SPARC is like that). In fact it took about three times longer than developing the arm64 port. The lower-level bits of the Go implementation have been under constant churn, which prevented us from merging the port because we were never quite synced up with upstream. We were playing a whack-a-mole game with upstream: as soon as we merged the latest changes, upstream had diverged again. In the end my contract with Oracle had finished before we were able to merge.

                                    This could all have been preventable if Google had let us have a dev.sparc64 branch, but because Google is Google, only Google is allowed to have upstream branches. All other development must happen at tip (impossible for big projects like this, also disallowed by internal Go rules), or in forks that then have to keep up.

                                    The Go team uses automated refactoring tools, or sometimes even basic scripts to do large scale refactoring. As we didn’t have access to any of these tools, we had to do the equivalent changes on our side manually, which took a lot of time and effort. If we had an upstream branch, whoever did these refactorings could have simply used the same tools on our code and we would have been good.

                                    I estimate we spent more effort trying to keep up with upstream than actually developing the sparc support.

                                    As for paying for electricity, Oracle donated one of the first production SPARC S7-2 machines (serial number less than 100) to the Go project. Google refused to pay for hosting this machine (that’s why it’s still sitting next to me as I type this).

                                    In my opinion, having been involved with Go since the day of the public release, I’d say the Go team at Google is unfortunately very unsympathetic to large-scale work done by non-Google people. Not actively hostile: they thanked me for the arm64 port, and I’m sure they are happy somebody did that work. But they are indirectly hostile in the sense that the way the Go team operates is not compatible with large-scale outside contributions.

                                    1. 1

                                      Having to manually follow automated tools has to suck. I’d be overwhelmed by the tedium or get side-tracked trying to develop my own or something. Has anyone attempted a Go-to-C compiler to side-step all these problems? I originally thought something like that would be useful just to accelerate all the networking stuff being done in Go.

                                      1. 2

                                        There is gccgo, which is a frontend for gcc. Not quite a transpiler but it does support more architectures than the official compiler.

                                        1. 1

                                          Yeah, that sounds good. It might have a chance of performing better, too. The thing working against that is that the Go compiler is designed for optimizing that language, with gccgo just being co-opted. Might be interesting to see if any of the servers or whatever perform better with gccgo. I’d lean toward LLVM, though, given it seems more optimization research goes into it.

                                        2. 2

                                          The Go team wrote such a (limited) transpiler to convert the Go compiler itself from C to Go.

                                          edit: sorry, I misread your comment - you asked for Go 2 C, not the other way around.

                                          1. 1

                                            Hey, that’s really cool, too! Things like that might be a solution to security of legacy code whose language isn’t that important.

                                      2. 1

                                        But these people are probably more than comfortable with cryptocurrency mining 🙃

                                      3. 3

                                        Go also supports power, and it used to run on the power machines supported by OpenBSD, but sadly now it only runs on more modern power machines, which I believe are not supported by OpenBSD. However, it would be easy to revert the changes that require more modern power machines. There’s nothing fundamental about them, just that the IBM maintainer refused to support such old machines.

                                        The really stupid part is that Go since 1.9 requires POWER8… even on big-endian systems, which is very pointless because most people running big-endian PPC are doing it on pre-POWER8 systems (there’s still a lot!) or a big-endian-only OS (AIX and OS/400). You tell upstream, but they just shrug at you.

                                        1. 3

                                          I fought against that change, but lost.

                                        2. 2

                                          However, it would be easy to revert the changes that require more modern power machines.

                                          Do you have a link to a revision number or source tree which has the code to revert? I still use a macppc (32 bit) that I’d love to use Go on.

                                          1. 3

                                            See issue #19074. Apparently someone from Debian already maintains a POWER5 branch.

                                            Unfortunately that won’t help you though. Sorry for speaking too soon. We only ever supported 64 bit power. If macppc is a 32-bit port, this won’t work for you, sorry.

                                            1. 3

                                              OpenBSD/macppc is indeed 32-bit.

                                              I kinda wonder if, say, an OpenBSD/power port is feasible; fast-ish POWER6 hardware is getting cheap (like $200) used and not hard to find. (And again, all pre-P8 POWER hardware in 64-bit mode is big-endian only.) It all depends on developer interest…

                                              1. 3

                                                Not to mention that one Talos board was closer to two grand than eight or ten. Someone could even sponsor the OpenBSD port by buying some devs the base model.

                                                1. 3

                                                  Yeah, thankfully you can still run ppc64be stuff on >=P8 :)

                                        3. 2

                                          This version of Wireguard was written in go, which means it can run on exactly 2 (amd64, i386)

                                          That and syspatch make me regret buying an EdgeRouter Lite instead of saving up for an apu2.

                                        4. 2

                                          I’m a bit put off by the dependency on bash on all platforms. Can’t this be achieved with a more portable (POSIX sh) script instead?

                                          1. 3

                                            You don’t have to use wg-quick(8) – the thing that uses bash. You can instead set things up manually (which is really easy; wireguard is very simple after all), and just use wg(8) which only depends on libc.
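
                                            For the curious, the manual route looks roughly like the sketch below. Everything in it is an example (interface name, key placeholder, addresses, endpoint), interface creation and addressing vary per OS, and wireguard-go stands in for the userspace implementation this thread discusses:

                                            ```shell
                                            # Create the tunnel interface with the userspace Go implementation.
                                            wireguard-go wg0

                                            # Generate a key and configure the tunnel using wg(8) alone, no bash needed.
                                            wg genkey > private.key
                                            wg set wg0 private-key ./private.key \
                                                peer 'PEER_PUBLIC_KEY_BASE64' \
                                                allowed-ips 10.0.0.0/24 \
                                                endpoint vpn.example.com:51820

                                            # Assign an address and bring the interface up (syntax varies per OS).
                                            ifconfig wg0 10.0.0.2/24 up
                                            ```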

                                            1. 2

                                              I think the same as you; I’m sure it is possible to achieve the same results using portable scripts. I’m aware of the conveniences bash offers, but it is big, slow, and prone to bugs.

                                          1. 10

                                            Thankfully this page is completely readable without JavaScript. Yay!

                                            1. 10

                                              Not only that, she really puts her money where her mouth is in terms of accessibility! I’d gone halfway through the text with the default zoom, because I really enjoyed the Tufte-style layout. But my eyes don’t really like small text for long reads. When I did finally switch (reluctantly) to reader view, I was very impressed with how well everything was still represented on the page. My old eyes thanked me, too!

                                              Presentation and content both top-notch. The ‘healthy tech pyramid’ macguffin is used to great effect!

                                            1. 5

                                              So, yeah, this, as presented in the article, is a bad idea.

                                              char buf[BIG_ENOUGH_SIZE];
                                              struct something *foo;
                                              struct something_else *bar;
                                              
                                              // point foo to the beginning of buf
                                              foo = (struct something *) buf;
                                              // point bar to the location after foo inside buf
                                              bar = (struct something_else *) (buf + sizeof(struct something));
                                              

                                              You can’t be certain that everything is going to align properly here, namely on the border between the struct something and struct something_else.
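
                                              If the two objects really must share one buffer, the second offset can at least be rounded up to the alignment the second struct requires. A C11 sketch with hypothetical stand-in structs (not the article’s actual code):

                                              ```c
                                              #include <stdalign.h>
                                              #include <stddef.h>
                                              #include <stdio.h>

                                              /* Hypothetical stand-ins; real sizes and alignments are target-specific. */
                                              struct something { char tag; };
                                              struct something_else { long value; };

                                              int main(void) {
                                                  /* Make sure the buffer itself is aligned for any object type. */
                                                  static alignas(max_align_t) char buf[128];

                                                  size_t off = sizeof(struct something);
                                                  size_t a = alignof(struct something_else);
                                                  /* Round off up to the next multiple of a (alignments are powers of two). */
                                                  size_t aligned = (off + a - 1) & ~(a - 1);

                                                  struct something *foo = (struct something *)buf;
                                                  struct something_else *bar = (struct something_else *)(buf + aligned);

                                                  printf("raw offset: %zu, aligned offset: %zu\n", off, aligned);
                                                  (void)foo; (void)bar;
                                                  return 0;
                                              }
                                              ```

                                              (Putting both objects in one outer struct, as suggested next, makes the compiler do exactly this rounding for you.)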

                                              The article advocates for this because otherwise you use twice the memory. Is that extra memory on the stack? In that case, it may not matter. If you don’t have a stack (and lots of 8-bit microcontrollers don’t), then do yourself a favour and do this instead:

                                              struct concated_stuff {
                                                struct something first;
                                                struct something_else second;
                                              };
                                              

                                              And honestly, there doesn’t seem to be a good reason why they have to be adjacent in memory in the first place. [edit: the article does offhandedly mention that “it needs them concatenated into a single buffer”. Sounds dubious, but it’s not explained, so I’ll let it slide.] So just declare them as static globals and don’t worry about it.

                                              1. 1

                                                Another option I would think is to use a flexible array member inside struct something like this:

                                                struct something {
                                                     int something_field1;
                                                     char *something_field2;
                                                
                                                     char cls_extra[];
                                                };
                                                

                                                Then when allocating later in your code:

                                                struct something *sthing = malloc(sizeof(struct something) + sizeof(struct something_else));
                                                
                                                struct something_else *selse = (struct something_else *)sthing->cls_extra;
                                                

                                                Or would that also have the same alignment issues? I was under the impression that the Microsoft Windows API uses this trick (well, the older way pre-dating C99) in some APIs, as mentioned on The Old New Thing.

                                                1. 1

                                                  I think that one might be okay, since cls_extra is now aligned as a pointer, but there might still be some weird layout that would cause a problem.

                                                  Bear in mind the article is writing in the context of 8-bit microcontrollers, so there are two things to consider: the size of things tends to be not what you think (and thus alignment can be bizarre), and use of malloc is practically forbidden. (If you don’t have a stack, you probably don’t have a heap, either.)
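
                                                  The layout worry can at least be checked at compile time. A sketch with hypothetical struct definitions (mirroring the shape of the flexible-array-member example, not code from anywhere in particular):

                                                  ```c
                                                  #include <stddef.h>

                                                  /* Hypothetical structs mirroring the flexible-array-member example. */
                                                  struct something_else { long value; };
                                                  struct something {
                                                      int something_field1;
                                                      char *something_field2;
                                                      char cls_extra[];
                                                  };

                                                  /* Fails the build if cls_extra's offset is not aligned for something_else. */
                                                  _Static_assert(offsetof(struct something, cls_extra)
                                                                     % _Alignof(struct something_else) == 0,
                                                                 "cls_extra not suitably aligned for struct something_else");

                                                  int main(void) { return 0; }
                                                  ```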

                                              1. 6

                                                This video is what I use when countering the myths about how C was designed. It goes to the papers and constraints that led Richards’ team to chop CPL down into BCPL. Then, the modifications from BCPL to B to C. Understanding the context of why C was the way it was might help folks understand why we should upgrade, given we’re no longer under those constraints in either hardware or programming features.

                                                1. 10

                                                  I think there is a reasonable argument that C won on its merits. The following is a list of some languages that were available in 1975 and my opinion of why they lost out to C. C is pretty much the only language on this list with a portable implementation that ran on minicomputers.

                                                  Algol 60 - call by name is expensive, not really intended for system software

                                                  Algol 68 - complex to implement, standard uses an obscure formal semantics, requires a runtime, compilers did not emerge for years

                                                  Algol W - first implementation was for IBM mainframes in an infix assembly language, few other implementations

                                                  BCPL - untyped, inferior to C in some ways, limited support for byte addressing

                                                  BLISS - semantics for assignment are unusual, untyped, no portable compiler, only for DEC architectures

                                                  Coral66 - British military standard, may not have had recursion

                                                  Fortran 66 - not really suited to system software, although a number of vendors wrote operating systems in an extended Fortran

                                                  Forth - different programming model, mostly interpreted

                                                  IMP72 - implemented mostly on supercomputers, low level of abstraction (Fortran II), complex (extensible) grammar

                                                  Jovial73 - DoD standard, no standard IO

                                                  LRLtran - no implementations for minicomputers

                                                  MAD - low level of abstraction, implementations ran on mainframes

                                                  NELIAC - low level of abstraction

                                                  Pascal - weak standard, no separate compilation, Wirth moved on to new languages

                                                  PL.8 - internal to IBM, compiler ran on mainframes

                                                  PL/I - complicated to implement, early implementations were slow

                                                  PL/S - internal to IBM, compiler ran on mainframes

                                                  RTL/2 - British language for realtime systems, probably unknown in the US.

                                                  Simula 67 - uses garbage collection, inventors wanted license fees

                                                  1. 2

                                                    Great list. Remember that there are two parts to this: one is how they designed it; one is what happened later. Your list seems to be about what happened later, comparing C’s design to everything else unmodified. Whereas mine says they’d have cherry-picked the best of anything on that list, modifying it for their situation. In each case, they’d pick whatever was safest or cleanest by default, switching to unsafe only where necessary. As hardware improved, the safety and maintainability would improve.

                                                    That’s the approach Wirth took with Modula-2 and the rest of his languages. Most others did as well doing languages for safety or programming in the large. It’s now the standard way to do things with many citing an escape from the safety and maintainability problems of C. So, the evidence leans toward Wirth’s choice.

                                                  2. 1

                                                    If I wanted to both re-write indent(1) in not-C and continue to distribute it as a part of FreeBSD, NetBSD, OpenBSD - which programming language should I “upgrade” to? What choice do I have?

                                                    1. 2

                                                      My top choices for contenders would be rust, zig, myrddin and nim. zig being the closest to C with many fixes.

                                                      1. 4

                                                        One issue with rust currently is that building the compiler will dominate compile times until most of the distribution is ported to rust.

                                                        1. 3

                                                          What about Wirth’s new Oberon-07?

                                                          Recently it has got a new promising little compiler to C, OBNC.

                                                          1. 2

                                                            Hadn’t seen that, will check it out.

                                                            1. 1

                                                              I’d really appreciate your opinion, since you cited Myrddin, which is my favourite contender for the package system of Jehanne.

                                                              I do not really know either of the two (I used Pascal years ago… but Oberon seems better).

                                                              But of all the C alternatives I could decide to integrate in Jehanne (the way Perl was integrated in Unix) these seem the best two candidates for their balance between simplicity and practicality.

                                                              Wirth’s Oberon wins on simplicity, but Ori’s Myrddin wins on practicality (according to my shallow understanding so far… take this with a grain of salt!)

                                                              1. 2

                                                                FWIW, Myrddin is probably going to be the easiest to port to Jehanne, since it already runs on Plan 9, and has a very small system abstraction layer.

                                                                1. 1

                                                                  Hi Orib! Yes, you are right! Myrddin is the most practical choice for Jehanne.

                                                                  Also it provides language features I like, such as ADTs and pattern matching, and it already has a practical standard library.
                                                                  But honestly I haven’t had the time to try your hints: I saved them from my IRC log, but… really I didn’t have the time… :-(

                                                                  Nevertheless I’m also rather fascinated by Oberon-07: Wirth keeps cleaning it, removing redundant features. I know this adds pressure to the library and application code, but…

                                                                  I think you can see the affinity with my striving for simplicity in Jehanne.

                                                          2. 4

                                                            All of those fall over on portability. Rust is amd64 and i386 only, myrddin is amd64 only, and building the zig compiler requires llvm. nim has the best story with amd64, i386, ppc and arm, which still isn’t enough.

                                                            1. 1

                                                              I think you are wrong about rust; there have been plenty of posts about embedded ARM and other processors targeted by rust. LLVM has lots of targets and can compile itself, so it is relatively portable, though extremely complex.

                                                              1. 3

                                                                Is rust on other architectures done natively or by cross-compiling? I don’t know about the others but OpenBSD requires that the base install can build itself on every architecture.

                                                                1. 1

                                                                  https://forge.rust-lang.org/platform-support.html - it seems like rustc can run on at least 5-6 architectures. and the groundwork is there for more.

                                                                  Zig itself has two stdlibs, one is based on libc so I bet that it could run on more platforms.

                                                                  1. 6

                                                                    He is right, the only platforms at the moment able to self-build rust are amd64 & i386. OpenBSD requires much more. You participated in a previous thread so you know that rust in the base system is not likely to happen. Hence rust is not the answer to:

                                                                    If I wanted to both re-write indent(1) in not-C and continue to distribute it as a part of FreeBSD, NetBSD, OpenBSD - which programming language should I “upgrade” to? What choice do I have?

                                                                    With the current status quo, the only language fitting the above question I believe is Perl.

                                                                    1. 2

                                                                      https://github.com/Microsoft/checkedc seems to be one of the more practical approaches to upgrading C, though obviously not ready.

                                                          3. 2

                                                            Maybe Vala. It compiles to C but has dependency on GObject.

                                                            1. 2

                                                              The answer is C++. Every architecture that OpenBSD currently supports has a C++ compiler in base (well actually compXX.tgz). I’d imagine the answer is similar for FreeBSD and NetBSD. You may be able to get away with C++11 on the popular architectures but I think the less popular ones you’re stuck with C++03 or even C++98.

                                                              1. 1

                                                                The general choice is anything that compiles to C. If they’re picky about coding standards, what you would have then is a cultural argument instead of one on language capabilities. They wouldn’t allow something better. Then, you might be forced to do what folks like Per Brinch Hansen did back when hardware couldn’t run ALGOL: write in one language you can’t share, for its benefits, and maintain a second version, derived from the first, in a language you can share. To automate that, I recommended a while back that someone make a safe, clean, superset language that’s compatible with C and exports to readable code in that language. Throw in real macros and a REPL for an extra reason to use it.

                                                                Then, we don’t have a CVSup-style situation where author uses a safe, maintainable, high-level language for its benefits but people eventually rewrite that stuff in C anyway.

                                                            1. 5

                                                              Everything, well the C++ and programming videos, by Bisqwit.

                                                              1. 1

                                                                Which BSDs still support 32-bit architectures? I assume NetBSD does, but AFAICT the others are gradually dropping it from their latest releases.

                                                                1. 2

                                                                  I believe all of them do except Dragonfly, like the article said. Also, TrueOS (which wasn’t covered in the article) only supports amd64. Do you have a link or can you cite something that says the major BSD operating systems are dropping 32 bit? Certainly the majority of development is occurring on 64 bit architectures but I think 32 bit is still supported for a while.

                                                                  1. 1

                                                                    My soekris and my alix are still running.

                                                                1. 6

                                                                  It was once normal to browse the web without any form of ad blocking software. Now, unless you were to restrict yourself to a very limited set of sites, you really need ad blocking software to browse. As more sites abuse javascript to run these types of things, I see either ad blocking software getting more complex and adopting crypto mining blockers, or new browser extensions that further sandbox javascript in some way becoming more popular.

                                                                  For a few months now I’ve been using multiple browsers. I’ve configured Firefox, my daily driver, to completely disable javascript via NoScript. Most sites still run fine, although they might have some rendering issues. When a page doesn’t work or is impossible to read I switch over to Chromium (which just has ad blocking).

                                                                  1. 6

                                                                    I’ve configured Firefox, my daily driver, to completely disable javascript via NoScript. Most sites still run fine, although they might have some rendering issues.

                                                                    I’ve been using NoScript for years (maybe a decade?), and while this used to be true, more and more sites are requiring JavaScript in order to work at all. I’m really not a fan of this trend (HTML is already quite capable of displaying text and images), but I don’t know how to stop it.

                                                                    I recently switched to uMatrix; it allows first-party JS by default, and seems pretty useful overall.

                                                                    1. 1

                                                                      Yes umatrix is pretty great. Though I still keep ublock on for certain first-party ads.

                                                                      My workflow typically involves using the tor browser at safer settings to open websites and switch to firefox with umatrix whenever I have to login to something.

                                                                    2. 2

                                                                      I categorize browsers by levels of blocking. Chrome is the daily driver with moderate blocking - ublock and some extra plugins to stop autoplaying video. Firefox is for heavy blocking, with ublock as a strictly configured noscript, for anything that has too many annoying scripts and ads even in Chrome. And IE/Edge/Safari for no blocking, for anything that I can’t get to work in the other 2 and am willing to live with whatever weird stuff they do.

                                                                    1. 1

                                                                      Kudos to Facebook (and Wordpress)!

                                                                      1. 15

                                                                        I don’t understand how this is a good thing; they are moving from a license that has a bad patent protection clause to a license that has no patent protection clause.

                                                                        1. 12

                                                                          this comment on HN explains it

                                                                          Basically, if you sell or license a product that requires a patent to work, courts have generally held that you grant an implied patent license for any patents that the product might require. If you explicitly reference patents within the license, however, then whatever terms you explicitly write into the license supersede this implied patent license. BSD+patents (and Apache 2) have explicit patent language; paradoxically, this makes them more restrictive than licenses like MIT, BSD, or GPL that don’t mention patents at all.

                                                                          1. 10

                                                                            this makes them more restrictive than licenses like MIT, BSD, or GPL that don’t mention patents at all.

                                                                            This is incorrect; the GPL has an explicit patent grant.

                                                                            Lumping Facebook’s random one-off “let’s mash a weak unilateral explicit patent grant onto the BSD license” together with the carefully-designed Apache and GPL licenses is really weird. There’s a (very shaky) argument to be made that their one-off unilateral grant is worse than an implicit grant, but the idea that an explicit bilateral grant written by people who actually care about user freedom is the same thing is just … completely wrong.

                                                                            1. 10

                                                                              I suspect people are confusing GPL with GPLv2. Inexcusable, given that GPLv3 is more than 10 years old now.

                                                                              Note: GPLv2 has no explicit patent grant, but GPLv2 does mention patents. It has a clause which makes patent-encumbered GPLv2 software undistributable.

                                                                            2. 2

                                                                              yeah, I see this as a generally good thing

                                                                            3. 3

                                                                              I am not a React developer but I think it’s a good thing because it’s the same license Angular and Vue use. There is no mention of patents in the MIT license but now going forward any problems you have concerning patents are the same problems you’d have if you chose angular or vue in the first place.

                                                                              1. 3

                                                                                I agree. I expected them to move to the Apache license.

                                                                              2. 3

                                                                                Apache foundation and Baidu also dropped React for the same reason recently.

                                                                              1. 3

                                                                                Considering that this is supposed to be such a big deal where are the other browsers (Firefox, Chrome/Chromium, Edge?) on blocking 3rd party cookies?

                                                                                1. 1

                                                                                  At this point in time, all of the major browsers support blocking third-party cookies. Google is even testing an ad blocker within Chrome that could be used more broadly by next year.

                                                                                  1. 4

                                                                                    an advertising company is the last thing i trust to implement an ad blocker.

                                                                                1. 4

                                                                                  Do people not know that voter registration data is public? I also liked this bit:

                                                                                  even whether they are on the federal “Do Not Call” list

                                                                                  Yeah, being on the do not call list doesn’t help much if the people thinking of calling you don’t keep track of that fact. (Of course, there’s also some bullshit exception for political purposes.)

                                                                                  1. 3

                                                                                    I think you’re missing the concern that people have: aggregating data and cross-referencing it takes time and energy. There’s a lot of public records about me, but if someone wanted to compile a complete picture, they’d have to do a lot of work to put those pieces together. For most citizens, most of the time, it’s not worth putting that effort in.

                                                                                    So, now we have a vendor who put the effort in, working on economies of scale. That already introduces some concerns, but once again, I’m not being specifically targeted, it’s a bulk thing, I remain essentially anonymous and uncorrelated. Then the data leaks. Now, if someone wants to target me specifically, the “work” is to find a pastebin or a torrent or a download of this data, search it for me, and they can assemble a pretty complete picture.

                                                                                    1. 1

                                                                                      Whether it’s all in one spreadsheet or requires driving around to 5000 courthouses, if it’s public information I think it’s a mistake to call it “sensitive personal information”. It dilutes the term.

                                                                                      1. 1

                                                                                        I agree it’s not wholly accurate, but I’m not aware of a better term to describe the shift in economics.

                                                                                        Lots of information has always been technically public if you were willing to put in a few days’ work. Convenient aggregation means it’s now a few minutes’ work.

                                                                                        This introduces new problems, because most people can be furious about something petty for a few minutes but not a few days.

                                                                                    2. 3

                                                                                      Yeah, there are even websites to conveniently (and freely) look it up: http://www.coloradovoters.info/

                                                                                      I’m sure there are similar pages for other states.

                                                                                      Whether or not a person can vote is public information. Who a person votes for is private information.

                                                                                      1. 1

                                                                                        Most states do have similar sites but doing lookups requires that you know someone’s date of birth and their house number. The Colorado one seems much more “open” than other states.

                                                                                    1. 3

                                                                                      I like it. There are clearly performance downsides to CGI (and does httpd support concurrent requests to a single FastCGI backend?), but I could see myself using this for a lot of real-world projects where those downsides don’t matter.

                                                                                      I think the key takeaway from this project is that while C-based CGI web applications have traditionally been a security minefield, there are now fewer mines in that field thanks to usable sandboxing mechanisms like OpenBSD’s pledge(2) and wrappers like kcgi. Developers still need to worry about vulnerabilities that could leak or change internal application state, but the application itself can be fairly well contained without dealing with containers/jails.

                                                                                      1. 3

                                                                                        As far as I can tell from browsing the source code to httpd it does not support concurrent (fastcgi multiplex) requests to a single fastcgi backend. I don’t think nginx supports this either though. For higher performance you might just need to use proxy_pass in nginx which does support concurrent requests.
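For illustration, the difference on the nginx side is mostly which directive hands the request to the backend; the socket path and port below are made up:

```nginx
# FastCGI backend: one request per backend connection,
# no multiplexing of concurrent requests on a single connection.
location /fcgi/ {
    fastcgi_pass unix:/var/run/app.sock;   # hypothetical socket path
    include fastcgi_params;
}

# HTTP reverse proxy: the backend speaks plain HTTP, and nginx can
# keep many concurrent requests in flight to it.
location /app/ {
    proxy_pass http://127.0.0.1:8080;      # hypothetical backend port
}
```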

                                                                                        1. 2

                                                                                          IIUC, FastCGI has no notion of flow control when multiplexing connections, which seems a little scary if you have high load.

                                                                                      1. 2

                                                                                        For the iMeme project I see it has a Windows version and a Mac version, but when I checked GitHub I only saw the Objective-C code. Did you use Objective-C (and maybe Cocotron?) for the Windows version, or some form of compiled Python with wxWidgets?

                                                                                        1. 2

                                                                                          Yeah, sorry, that’s confusing. Totally separate codebase: https://github.com/fogleman/pyMeme

                                                                                        1. 2

                                                                                          A laptop is a dev environment that moves with you. At ~$35/mo over a 3-year laptop lifecycle, that’s $1260, which is way more than you need to spend on a laptop.

                                                                                          I got pretty bogged down in the “here’s how I navigated microsoft’s menus”, but I felt like they never covered anything that they could do with this hosted VM that they couldn’t do with a laptop.

                                                                                          1. 1

                                                                                            Small correction but the article mentioned £35 and not $35, so it’s closer to $45/month which is super expensive for a development environment.