1. 6

    BSD-friendly and cheap? Sounds just like a description of some used Thinkpad.

    You should be able to get a ThinkPad X200 at this price, or maybe even something better. Basically, look for used ThinkPads from the X, T, or W series.

    1. 1

      Could you send a PR with fnaify?

      There are also other interesting games (which I happen to maintain): eduke32 (Duke Nukem 3D), julius (Caesar 3), sdlpop (Prince of Persia).

      Other awesome games are: tome4 (IMO the best roguelike there is, FOSS game engine), ja2-stracciatella (Jagged Alliance 2), chocolate-doom (Doom 1 & 2), dhewm3 (Doom 3), iortcw (Return to Castle Wolfenstein).

      1. 1

        Mutt on computers. Dekko on phone.

        1. 2

          I use OpenBSD’s ksh on OpenBSD, HardenedBSD (shells/oksh port) and Gentoo (app-shells/loksh). I wrote my own shell completions, basically something like: `set -A complete_doas -- $(ls $(echo $PATH | tr : ' ') )

          set -A complete_git_1 -- clone pull

          set -A complete_gpg_1 -- --armor --change-passphrase --delete-keys --delete-secret-keys --edit-key --export --full-generate-key --import --list-keys --list-secret-keys --receive-keys --refresh-keys --search-keys --send-keys --sign-key --verify

          set -A complete_ifconfig_1 -- em0 lo0 wlan0

          set -A complete_mpv -- --ao --list-options --no-audio-display --no-vid --sub-file= --sub-font-size= --vo

          set -A complete_pass -- $PASS_LIST -c generate edit insert git push

          set -A complete_pkg_1 -- autoremove delete info install

          set -A complete_pkg_2 -- $PKG_LIST

          set -A complete_poudriere_1 -- bulk jail list ports status testport

          set -A complete_vm_1 -- console destroy install iso list reset start stop

          set -A complete_sysctl -- $(sysctl -aoN)

          set -A complete_zfs_1 -- create destroy get list set snapshot

          set -A complete_zfs_2 -- $PARAMS

          set -A complete_zfs_3 -- $DATASETS

          set -A complete_zpool_1 -- list status`
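
          For context, `$PKG_LIST` and `$PASS_LIST` above aren't defined in the snippet; here's a sketch of how they might be populated in ~/.kshrc (assuming FreeBSD's pkg and the default password-store path — both assumptions, adjust for your setup):

```sh
# hypothetical setup for the list variables the completions reference
PKG_LIST=$(pkg rquery -a '%n' 2>/dev/null)
PASS_LIST=$(cd ~/.password-store 2>/dev/null &&
    find . -name '*.gpg' | sed -e 's|^\./||' -e 's|\.gpg$||')
```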

          1. 4

            I self-host. I use HardenedBSD with Postfix, Dovecot, Mailscanner etc.

            1. 6

              If you want secure and rather fast x86, look at Opterons 62xx and 63xx. They are still pretty fast and not vulnerable to many CVEs. Coupled with Coreboot, they make for a nice desktop or a server.

              If you want something faster, more secure and are not limited to x86, POWER9 with Talos II motherboard is a great choice.

              1. 8

                It looks like a new single CPU Talos board is still $2500. I mean, that’s far cheaper than they were last time I looked, but still not entirely practical for many enthusiasts.

                The biggest issue with other architectures is video decoding. A lot of decoders are written in x86_64-specific assembly. Itanium never had a lot of codecs ported to EPIC, making it useless in the video editing space. There are hardware decoders on a lot of AMD/Nvidia GPUs, but then it comes down to drivers (amdgpu is open source and you have a better shot there on POWER, but it'd be interesting to see if anyone has gotten that working).

                1. 2

                  You can hardware decode but you generally don’t want to hardware encode for editing. HW encoders have worse quality at the same bitrate vs. software.

                  Mesa support for decode on AMD is good, encode is starting to work but it’s pretty bad right now (compared to windows drivers).

                  1. 2

                    Decoding isn’t the problem. All modern lossy codecs are strongly biased towards decode performance, and once you’re at reasonable data rates, CPUs handle it fine. Encoding would be misery, because all software encoders are laboriously hand-tuned for their target platform, and you really don’t want to use a hardware encoder unless you absolutely have to.

                  2. 3

                    The only reason you’d be stuck with x86 is if you’re running proprietary software and then chip backdoors are the least of your concerns.

                    1. 4

                      The only reason you’d be stuck with x86

                      When I last saw it debated, everyone agreed x86 beat all competitors on price/performance, mainly single-threaded. That's especially important if you're doing something CPU-bound that you can't just throw cores at. One of the reasons is that only companies bringing in piles of money can afford a full-custom, multi-GHz, more-work-per-cycle design like Intel, AMD, and IBM. Although Raptor is selling IBM's, Intel and AMD are still much cheaper.

                      1. 2

                        Actually, POWER9 is MUCH cheaper. You can get an 18-core CPU for a way better price, and it has 72 threads instead of 36 (like Intel).

                        1. 2

                          That sounds pretty high end. Is that true for regular desktop CPUs? E.g.: I built a friend a rig a year or so ago that could do everything up to the best games of the time. It cost around $600. Can I get a gaming or multimedia-class POWER9 box for $600 new?

                          1. 2

                            No, certainly not. But you can look at it otherwise - the PC you assemble will be enough for you for 10-15 years, if you have enough money to pay now :)

                            $600 PC will not make it for that long.

                            1. 2

                              “But you can look at it otherwise - the PC you assemble will be enough for you for 10-15 years, if you have enough money to pay now :)”

                              The local dealership called me back. They said whoever wrote the comment I showed them should put in an application to the sales department. They might have nice commissions waiting for them if they can keep up that smooth combo of truth and BS. ;)

                              “$600 PC will not make it for that long.”

                              Back to being serious, maybe and maybe not. The PCs that work for about everything now get worse every year. What they get worse at depends on the year, though. The $600-700 rig was expected to fall behind on high-end games in a few years, play lots of performance stuff acceptably for a few years more, and do basic stuff fast enough for years more than that. As an example (IIRC), both tedu and I each had a Core 2 Duo laptop for seven or more years with them performing acceptably on about everything we did. I paid $800 for that laptop barely-used on eBay. I’m using a Celeron right now since I’m doing maintenance on that one. It was a cheaper barter, it sucks in a lot of ways, and still gets by. I can’t say I’d have a steady stream of such bargains with long-term usability on POWER9. Maybe we’ll get it after a few years.

                              One other thing to note is that the Talos stuff is beta based on a review I read where they had issues with some stuff. Maybe the hardware could have similar issues that would require a replacement. That’s before considering hackers focusing on hardware now: I’m just talking vanilla problems. Until their combined HW/SW offering matures, I can’t be sure anything they sell me will last a year much less 10-15.

                      2. 2

                        Even though I’d swap my KGPE-D16 for Talos any minute, I simply can’t afford it. So I’m stuck with x86, but it’s not because of proprietary software.

                    1. 1

                      Pass is pretty awesome: https://www.passwordstore.org/

                      1. It is FOSS.
                      2. It’s available for all 3 systems.
                      3. It can sync and backup via git (so over ssh).
                      1. 2

                        the abandoned android port is depressing

                      1. 4

                        So Power is switching to little endian by default?

                        Only slightly related, but I’ve been looking for somewhere to test a piece of code on big endian, but that seems to be rather difficult as a private person. I think the only options are to find some physical hardware on the cheap?

                        I have a Pi3, and that’s supposed to be bi-endian, but I’m not sure how to go about installing a big endian Linux on it. Same goes for a Scaleway ARM virtual machine, I guess.
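
                        Before hunting for hardware, it's worth noting what actually changes on a big-endian host: serialization of native-order integers. A quick Python illustration (nothing here is from the thread, just a generic sketch):

```python
import struct
import sys

value = 0x01020304

# '=' packs in the host's native byte order -- this is the part that
# behaves differently on a big-endian test box versus your x86 machine.
native = struct.pack("=I", value)

# '<' and '>' force a byte order and behave identically everywhere;
# endian-clean code always serializes with an explicit order.
little = struct.pack("<I", value)
big = struct.pack(">I", value)

print(sys.byteorder)  # 'little' on x86, 'big' on e.g. s390x or older MIPS
assert little == bytes([0x04, 0x03, 0x02, 0x01])
assert big == bytes([0x01, 0x02, 0x03, 0x04])
assert native in (little, big)
```

Code that only ever uses the explicit `<`/`>` formats will pass on both hosts; code using `=` (or raw pointer casts in C) is what needs the big-endian box to shake out.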

                        1. 7

                          Shell accounts at Polarhome are free for developers of open source projects (and cheap otherwise). Their Debian/PPC and Solaris/SPARC are big-endian IIRC.

                          You can also run QEMU, here’s a random repo with instructions.

                          1. 3

                            You should be able to virtualize, Debian for example supports some Big Endian architectures. I don’t reckon it matters much though, Big Endian is definitely on the way out.

                            If you do want to go physical, you can get an Octeon-based system; they’re Big Endian mips64, mostly used in networking equipment. Cavium has an incomplete list of products using Octeon processors; stuff under the consumer tab is probably your best bet for something cheap.

                            I have a Ubiquiti UniFi Security Gateway running on Octeon. It’s running some kind of Debian derivative, or so I assume since dpkg and the Debian package keys are present.

                            $ lscpu
                            Architecture:          mips64
                            Byte Order:            Big Endian
                            $ uname -a
                            Linux ubnt 3.10.20-UBNT #1 SMP Fri Nov 3 15:45:37 MDT 2017 mips64 GNU/Linux

                            This seems consistent with the development kit information on the Cavium Octeon web page:

                            OS: Linux 2.6 (SDK 2.x) for OCTEON II or Linux 3.10 (SDK 3.1.x) 64-bit SMP OS for OCTEON II & III

                            My other UniFi hardware runs Little Endian ARMv7 though. Looks like processors made by either MediaTek, or Qualcomm for the wireless gizmos.

                            1. 2

                              Yeah, Ubiquiti’s Octeon stuff (specifically EdgeRouter) is quite well known, it’s supported by FreeBSD and OpenBSD for example. But consumer router grade CPUs are uhhhh rather weak :(

                            2. 3

                              Or just get actual POWER box. Talos II (mentioned in the article) is relatively cheap for the specs.

                              1. 3

                                It’s still very prohibitively expensive unless you’re very dedicated to having a POWER box. I have access to off-lease POWER6 boxes acquired for cheap on eBay, but those are large, loud, pour out heat, suck up electricity, and generally only desirable if you really want a POWER box but lack funds. (Not to mention the firmware bugs that IBM refused to patch for it, so newer distros don’t support POWER6.)

                                Really, the best way to play with PPC still is to buy an old Power Mac, which is kinda sad.

                                edit: interesting thread on this topic of high-end RISC systems being hard to acquire for devs, which reduces their viability on the market

                                1. 2

                                  I guess I am dedicated :D

                                  But I’m going to get it because it’s all FOSS, no blobs, that’s the main reason. It’s also not that expensive, considering specs. And it’s just as power hungry as similar Intel boxes. Sure, older POWER generations were much more power hungry, but things changed with POWER9.

                              2. 2

                                The IBM PDP program gives access to POWER-based systems; they’ve just added POWER9 support, but previously had POWER7- and POWER8-based systems running AIX & SUSE.

                              1. 1

                                What is the current state of Plan 9 development?

                                1. 7

                                  9front is actively developed.

                                  1. 1

                                    Is 9front usable on desktop? By usable, I mean that there’s some mail client (I don’t mind CLI, I use Mutt anyway), some audio / video player (mpv is just fine) and some browser that understands modern websites (yeah, I hate JS too, but it’s unavoidable). I guess the last part is the worst :)

                                    1. 9

                                      The last part is indeed the worst. For web browsing, there’s mothra and that’s about it. Mothra does not support JavaScript. Here is the relevant bit of the FQA.

                                      Russ Cox described his motivation for creating Plan 9 from User Space like this:

                                      I ran Plan 9 from Bell Labs as my day to day work environment until around 2002. By then two facts were painfully clear. First, the Internet was here to stay; and second, Plan 9 had no hope of keeping up with web browsers. Porting Mozilla to Plan 9 was far too much work, so instead I ported almost all the Plan 9 user level software to FreeBSD, Linux, and OS X.

                                      1. 2

                                        Yes there’s a mail client, playing videos depends on the format, modern browser…no, by design mostly.

                                        1. 3

                                          there is no support for video playback at all.

                                          1. 2

                                            What can you use 9front for? I don’t mean playing in VirtualBox or whatever VM software you use, but for serious usage. I’ve always wanted to play with it more, but playing with it just for the sake of playing isn’t interesting to me :)

                                            1. 11

                                              The system excels at manipulating text. It can play back most popular audio formats, and it can display many popular image and document formats. It does not (currently) have any support for video playback. There is no modern web browser (the native browser, mothra(1), ignores CSS, JS, and many HTML tags). The system includes a PC emulator called vmx(1) that is capable of hosting Linux or OpenBSD, but currently the guest’s framebuffer is emulated entirely in software, so performance is pretty awful, and programs like web browsers are barely usable.

                                              1. 1

                                                Now, that is something, thanks!

                                                What about use as a server? Since this is a Plan 9 derivative, I assume all the Plan 9 servers (CPU, auth, 9P, etc.) are available. I can also see the included HTTP server. Can it use TLS? What about other protocols (like XMPP, an authoritative DNS server, etc.)?

                                                I see there’s a port of OpenSSH, but it’s at version 4.7, which can’t do Ed25519 :/ Is there any other SSH client (I mean, one written for 9front)?

                                                I hope you don’t get annoyed by my questions, I just want to know what I can use 9front for. You kind of made me interested in it again, so I’ll install 9front on a spare PC.

                                                1. 6

                                                  I’m the admin for basically all of the 9front official websites, and the cat-v.org sites, all hosted on 9front for several years. TLS is supported, but there is no support for SNI, so the end result is most current mobile browsers will refuse the self-signed/wrong-domain-name certificate. I also host all my DNS on 9front, pushing updates automatically to slaves at dns.he.net.

                                                  You didn’t ask about mail, but all the 9front mailing lists are also hosted on 9front, with upas(1) and a rather primitive mailing list manager called ml(1). I also host my personal e-mail with upas(1).

                                                  The system includes a native SSH2 client called ssh(1).

                                                  http://fqa.9front.org is probably the best overall resource for information about the system. It includes links and pointers to most other relevant sources. Unfortunately it tends to lag behind the current state of the system at times, mainly because of time constraints.

                                              2. 3

                                                The Introduction To Plan 9 from the 9front FQA might interest you.

                                                1. 1

                                                  I read it, I used 9front for a few hours some time ago, so I’m not a complete newcomer.

                                                  What I miss is some overview of available software. I can see that there is https://bitbucket.org/mveety/9front-ports, but it doesn’t seem official.

                                                  EDIT: Nvm, just found https://code.9front.org/hg/ports/

                                                    1. 1

                                                      Thanks, that’s what I was asking for.

                                                2. 1

                                                  I’d really like to get around to porting emacs to Plan 9. That might be the sort of work I could actually do. I’d love to port Firefox to Plan 9, but … that just isn’t going to happen.

                                                  It’s a pity, because emacs & a web browser are the only things that Plan 9 is really missing.

                                                  1. 3

                                                    I think it really needs a hardware accelerated graphics stack. Things would improve dramatically after that.

                                                    I would love it if the plumber can talk to my phone. An Android/iOS app that reads a web link from plumb and display it on the phone would solve the browser problem.

                                                    As to the editor… just use acme.

                                                    1. 3

                                                      it’s trivial to plumb a link to a script that opens ssh to a remote host and runs a command.
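
                                                      For instance, a plumbing rule along these lines could hand links off to a phone (a sketch: the `open-url` helper on the remote side is hypothetical, and the URL pattern is deliberately simplified compared to the one in plumbing(7)):

```
type is text
data matches 'https?://[^ ]+'
plumb start ssh phone open-url $0
```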

                                                      1. 1

                                                        I would love it if the plumber can talk to my phone. An Android/iOS app that reads a web link from plumb and display it on the phone would solve the browser problem.

                                                        I’d think that could easily be doable with a small Android app to listen for GCM messages.

                                                        As to the editor… just use acme.

                                                        But that wouldn’t be emacs, and emacs is what I want to use:-)

                                                      2. 2

                                                        emacs has been ported to plan 9 more than once.

                                                        1. 1

                                                          Really? I did a quick googling, but no joy. Is it in the main emacs tree?

                                                          1. 2

                                                            looks like i’m not able to reply from mothra.

                                                            there were a couple of (old) ports on sources, which i think is permanently down. there exists a mirror at http://9p.io.

                                            1. 3

                                              I’ll just link to my Mastodon comment about LinuxBoot: https://mastodon.anongoth.pl/@pkubaj/99417594003380101

                                              1. 1

                                                 There are similar ones at the Chernobyl Exclusion Zone.

                                                1. 1

                                                   Well, Chernobyl is a kind of time capsule + radiation.

                                                1. 1

                                                   Great, but since the bug where the SIM card goes undetected is still not fixed, it heavily limits the number of potential users.

                                                  I used Replicant 4.2 on my S3 and had to switch to LineageOS because of that.

                                                  1. 1

                                                     I wonder what happened to NetBSD 8. Half a year ago it was supposed to come “soon”.

                                                    1. 1

                                                      check the 8.0_BETA builds, the branch has been cut.

                                                      1. 1

                                                        I know, I’m just waiting for RELEASE :)

                                                        1. 1

                                                          ah :)

                                                    1. 4

                                                      Why would you even run Libreboot? That’s a serious question. I’m for hardware freedom (I run coreboot on all my boards and I’m buying Talos II). I just don’t get why one would want to run a derivative of coreboot that brings nothing to the table, when you can just use upstream. Oh, and you can actually run ucode updates with coreboot (you run ucode anyway, so there’s no harm in updating it).

                                                      Another advantage is that you can easily use SeaBIOS with coreboot, making *BSD systems actually usable with it.
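
                                                       For illustration, selecting SeaBIOS as the payload is only a couple of Kconfig options in coreboot's `.config`; a minimal sketch for an X200 (option names as I recall them from coreboot's Kconfig, so treat them as assumptions):

```
CONFIG_VENDOR_LENOVO=y
CONFIG_BOARD_LENOVO_X200=y
CONFIG_PAYLOAD_SEABIOS=y
```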

                                                      1. 3

                                                         Libreboot is much easier to flash, and its documentation is friendlier to non-tech-savvy folk.

                                                        1. 2

                                                           Yeah, and I guess that’s the only advantage over coreboot. Libreboot was my first step toward playing with coreboot, so I guess I’m kind of grateful to the Libreboot devs for making the entry easier for people.

                                                          Still, once you get the hang of it, it’s better to just switch to coreboot.

                                                        2. 4

                                                          Coreboot is where most of the development happens, it’s true. But Coreboot uses a rolling release model and has a lot more knobs to adjust. I’m not really a BIOS hacker; I just want to run free firmware.

                                                          Libreboot periodically takes snapshots of the Coreboot tree and stabilizes around it. Their changes mostly involve streamlining the build process and ensuring there are no binary blobs. Personally I found Libreboot much easier to configure and compile on my machine. The ideological guarantee is a nice bonus.

                                                          1. 2
                                                            1. coreboot also has releases, so it’s not rolling release anymore (though it used to be).
                                                            2. You don’t need to be a BIOS hacker (I’m not). You usually need to adjust only two knobs (vendor and model of your board).
                                                          2. 2

                                                            Same reason people run Trisquel GNU/linux-libre, no blobs.

                                                            Why do people use SeaBIOS for *BSD? I guess the TianoCore payload isn’t ready, but you can use the GRUB2 payload?

                                                            1. 1

                                                              Same as the guys before said, coreboot can also run with no blobs.

                                                              You can use GRUB2, but you can’t use full disk encryption with it on *BSD systems.

                                                            2. 1

                                                              I don’t use either system, but my understanding is that Libreboot removes the binary blob components that are included with coreboot, and that’s important to some people.

                                                              1. 1

                                                                Libreboot doesn’t remove anything, because coreboot doesn’t unconditionally load those blobs. You can choose not to run any. That way I can run blobless coreboot on my X200 or KGPE-D16 (both also supported by Libreboot).

                                                                1. 1

                                                                  coreboot doesn’t unconditionally load those blobs.

                                                                  so it does load them conditionally? it sounds like maybe coreboot is not deblobbed by default, while libreboot does not require any configuration to have it be deblobbed.

                                                                  1. 1

                                                                    It loads blobs when you enable them in your config, if there are any to enable.

                                                            1. 3

                                                              OpenBSD a few days ago had to make a similar change for their Chromium port: https://marc.info/?l=openbsd-ports-cvs&m=151264513213832&w=2

                                                              1. 10

                                                                Couple bullet points:

                                                                They found a stack overflow in ME, specifically the BUP module, and a way to bypass stack protector mitigations.

                                                                It’s “remotely exploitable” in the sense that if you have AMT turned on and your adversary knows your AMT password, they might trigger this bug. But pretty much every local bug is remotely exploitable under those circumstances. The whole point of AMT is to give someone remote access. So…? (Orig title included “how to hack a turned off computer” but the answer seems to be “turn it on” which is a bit less exciting.)

                                                                It’s in the BUP module, which is the one thing ME cleaner doesn’t remove, so everybody who ran out to run the cleaner after reading about the coming vulns didn’t accomplish much. Not with regards to this world ending vuln, anyway.

                                                                The secret NSA HAP flag doesn’t disable BUP either.

                                                                They’re not entirely clear on attack vectors, but a lot of them seem to involve physical access to SPI header, etc.

                                                                1. 1

                                                                  At work, we use Dell PowerEdge servers. They apparently give the OS RW access to SPI. So you don’t need physical access, just root on the server.

                                                                  1. 1

                                                                    I wish there was more info available about this. My understanding is that’s generally a misconfiguration and the fault of the bios/system vendor? But I don’t know of a big list of vulnerable systems.

                                                                    ME has to be software writeable in order to patch it, but I think that’s gated and verified in some way?

                                                                  2. 1

                                                                    Could you at least get an extra core of compute out of this? Might be worth it for some workloads. Main system continues running smoothly like it’s doing nothing. :)

                                                                    1. 3

                                                                      The ME core, IIRC, is basically a 486 on modern processes with a lot of go-faster stripes. (Before that it was SPARC, and before that, ARC; the two ISAs are not actually related.) Probably not a lot of performance potential; but now some boards have an “Innovation Engine” so that custom code can run on a second ME-like core.

                                                                      1. 1

                                                                        So, if it’s useful, it’s useful for what a slow coprocessor might do. We had those on some older machines and maybe gaming machines. I’m thinking use in verification or something where it runs through test cases or whatever on an algorithm for long periods of time. Nobody wants to waste CPU doing that during the day. The ME is already a waste, though. ;)

                                                                  1. 19

                                                                    That’s because the web standards are insanely complicated.

                                                                    Way way way more complexity than is actually needed, and the results are, ahh, underwhelming.

                                                                    I would jump ship to a competing standard that reduced complexity by a factor 100x in an instant.

                                                                    1. 15

                                                                      I empathize with your frustration, but you’re overlooking a whole lot of value in what you claim are “underwhelming” results.

                                                                      Any standard with 100x less complexity would have probably 1e6x less utility. Like, it’s “easy” (hah!) to write a bespoke browser that connects to a system on a nonstandard port, requests (without headers!) a markdown document, renders it to the viewport, and allows scrolling and clicking on hyperlinks.

                                                                      Heck, we’ll call it the Lobsters Transfer Protocol.

                                                                      The client should just provide a viewport onto the rendered elements of the markdown document, with interactivity only insofar as clicking on links, scrolling the page, and zooming in. The client takes a starting address as command-line arguments: `airlobster -h <host> -p <port> -d <document path> --rev <optional server-recognizable digest for a particular historical revision of document> --depth <optional depth to resolve dependent documents, defaults to 1>`. Client program flow looks roughly like:

                                                                      • Client program starts
                                                                      • Client parses arguments from command line
                                                                      • Client looks up IP for provided host
                                                                      • Client opens TCP connection to host on provided port
                                                                      • Client errors out if connection is rejected.
                                                                      • Specified document is pushed onto fetching queue
                                                                      • Client sends request in plain ASCII GET <document path> <optional digest>\n
                                                                      • Client receives length of document (8-byte unsigned integer, little-endian)
                                                                      • Client receives type of document (8-byte unsigned integer, little-endian)
                                                                      • Client receives byte array of document
                                                                      • Client errors out if byte array is incomplete or times out (say, 30 seconds from last byte if we’re still expecting the array?)
                                                                      • Client closes connection
                                                                      • Client decodes document based on type and puts the result into a {path, digest, content in renderable intermediate form} tuple on a stack (why a stack? to make sure leaf dependencies are loaded first); --depth is decremented for use in dependent document resolution
                                                                      • Client renders the contents of the stack
                                                                      • Clicking on a hyperlink clears out the stack and queue, and starts the process again.
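                                                                      The request line and response header sketched in the flow above are simple enough to pin down concretely. Here is a minimal sketch of that wire format in Python; the function names and exact framing are my own assumptions for illustration, not a real spec:

```python
# Hypothetical sketch of the LTP wire format described above:
# request  = plain-ASCII line "GET <path> [<digest>]\n"
# response = 8-byte LE unsigned length, 8-byte LE unsigned type, then body
import struct

def encode_request(path, digest=None):
    """Build the plain-ASCII request line."""
    line = "GET " + path + ((" " + digest) if digest else "")
    return (line + "\n").encode("ascii")

def encode_header(length, doc_type):
    """Pack length and type as two 8-byte little-endian unsigned ints."""
    return struct.pack("<QQ", length, doc_type)

def decode_header(data):
    """Unpack the 16-byte response header into (length, type)."""
    return struct.unpack("<QQ", data)
```

                                                                      A client would send encode_request(...), read exactly 16 header bytes, decode them, then read length more bytes (erroring out on a timeout, as above).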

                                                                      The server’s job is equally straightforward:

                                                                      • Accept connections on a port
                                                                      • Drop client unless given valid request string
                                                                      • Drop client if given valid request string but invalid document
                                                                      • Drop client if given valid request string and valid document but invalid digest
                                                                      • Write back length and type of document
                                                                      • Write document to connection
                                                                      • Close connection
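                                                                      Under those assumptions, the whole server fits on a page. A hypothetical Python sketch — the document store, type codes, and names are all invented for illustration:

```python
# Toy LTP server: one request line per connection, then header + body,
# then close. Invalid requests are simply dropped, per the list above.
import socketserver
import struct

DOCS = {"/hello.md": (1, b"# Hello\n")}  # path -> (type code, body bytes)

class LTPHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one plain-ASCII request line: GET <path> [<digest>]
        line = self.rfile.readline(4096).decode("ascii", "replace").strip()
        parts = line.split()
        # Drop the client on anything that isn't a valid request for a
        # known document (no error responses in this toy protocol).
        if len(parts) < 2 or parts[0] != "GET" or parts[1] not in DOCS:
            return
        doc_type, body = DOCS[parts[1]]
        # Write back length and type (8-byte little-endian each), then body.
        self.wfile.write(struct.pack("<QQ", len(body), doc_type))
        self.wfile.write(body)

def make_server(host="127.0.0.1", port=0):
    """Bind (an ephemeral port by default); caller runs serve_forever()."""
    return socketserver.TCPServer((host, port), LTPHandler)
```

                                                                      Dropping the client is just returning from handle(), which closes the connection — the hardest parts (digests, revisions, rendering) are conspicuously absent.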

                                                                      Writing that should take about a month, let’s call it, if you are doing it in C (that’s fast enough and painful enough to discourage large overly-complex projects, no?).

                                                                      And even with this stripped-down networked PDF reader, there are still problems in the spec. What prevents a server from taking clients for a ride or telling them to allocate too much? How much is too much? What about alternative forms of the same document? Should a document also have a type in addition to its digest for identification? What if a type doesn’t exist for a specified revision?

                                                                      And we haven’t even touched on the genuinely annoying parts, like rendering the bloody thing at all.

                                                                      And all of this complexity is a toy compared to the usability most people expect from a browser.



                                                                      Here are some tools that you might find helpful in such a journey:

                                                                      1. 9

                                                                        http is probably the least complex and frustrating part of the whole stack, and the part I’d attack last.

                                                                        If I did, it would be to create a git-like content-addressable network.
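                                                                        As a sketch of what “git-like” could mean here: documents are addressed by the hash of their bytes rather than by location, so any node holding the content can serve it and the client can verify what it received. All names below are invented for illustration:

```python
# Toy content-addressable store: the address of a document is just the
# SHA-256 of its bytes, so retrieval is self-verifying.
import hashlib

def address_of(content):
    """A document's address is the hex SHA-256 of its bytes."""
    return hashlib.sha256(content).hexdigest()

class Store:
    def __init__(self):
        self._blobs = {}

    def put(self, content):
        addr = address_of(content)
        self._blobs[addr] = content
        return addr

    def get(self, addr):
        content = self._blobs[addr]
        # Verification is built in: recompute the hash before trusting it.
        if address_of(content) != addr:
            raise ValueError("corrupt blob")
        return content
```

                                                                        The nice property is that identity and integrity come for free; the hard, unsolved part is discovery — finding which node has the bytes for a given address.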

                                                                        In terms of complexity on the browser end, http is pretty trivial. The bundle of complexity on that level is actually https…. and I’m not convinced of its utility. (I’m convinced that encryption and identification are needed; I’m not convinced https is the way to do it.)

                                                                        It’s the html / css / dom / javascript stack on top of http that is the nightmare.

                                                                        And most certainly I would not use C, but something like D or (maybe) Rust.

                                                                        Certainly human language text rendering is insanely complex… but I’d argue it’s not worth the complexity cost it imposes to do it to perfection. You get 90% of the value with ye olde terminal fixed width fonts and 99% of the value with a relatively simplistic library.

                                                                        The last 1% just isn’t worth the complexity, no matter how beautiful it looks. If you want typographic perfection, use a png.

                                                                        1. 6

                                                                          90% of the value can mean that 10% get no value.

                                                                          And that’s with a fairly simple rendering of the language. Consider that the folks who actually use Urdu (around 159 million, if Wikipedia is to be believed) see something much richer.

                                                                          I again don’t see the problem with HTML/CSS/DOM/JS. Like, what parts of the problem domain do you think they’re overkill for?

                                                                          1. 6

                                                                            I again don’t see the problem with HTML/CSS/DOM/JS. Like, what parts of the problem domain do you think they’re overkill for?

                                                                            I loved their original conceptual orthogonality of structure, content and presentation…

                                                                            The pixel pushers hated it because they couldn’t push the pixels in the corner cases where they wanted to, so they set about removing that orthogonality.

                                                                            I’m sure the teachers of English (or Urdu) typography would hate that orthogonality… but I bet all languages would be pretty much equally readable if it were rigidly enforced.

                                                                            In fact I’d argue your example .jpg has more to do with the traditionally high cost of paper news print than with conveying meaning. You could lay that page out much much more simply and still be readable in Urdu.

                                                                            I loved the original immutability of the DOM… the need to make it mutable arose from the failure of html to address the needs of composing the page from multiple sources.

                                                                            Facebook’s React.js is the abomination that arose from vile abuse of a mutable DOM.

                                                                            D3.js is beautiful and a treasure… until you peer under the hood at its hideous entrails…. and long for the wonderful clean faroff days of libplot.

                                                                            If only the thought and effort that was dogpiled into the stack that D3.js abuses had been used to create a clean and powerful plotting API…

                                                                            1. 6

                                                                              I again don’t see the problem with HTML/CSS/DOM/JS. Like, what parts of the problem domain do you think they’re overkill for?

                                                                              CSS’s original intended usecases no longer exist. Hardly anyone reused the same original HTML with different stylesheets at the best of times; now that HTML is a presentation format rather than a semantic format there is no value in the HTML/CSS split. Likewise the DOM box model is overly complex because it was intended to support manual layout of a kind that people don’t do any more; every element can have margin/padding/border/… when really these should just be separate blocks. And really there are too many elements because there was this intention to have a common language for semantic markup that never really happened.

                                                                              If I was writing this stuff from scratch, I would say:

                                                                              • Make the parsing more consistent. E.g. styles could be expressed with JSON or even full javascript.
                                                                              • Don’t parse invalid markup - perhaps even require markup to be sent in a protobuf-esque format so that only well-formed markup can be sent at all.
                                                                              • No CSS. All styling is inline. Offer selector-like functionality in the DOM API but only as a code API, don’t attempt to define a language for selector expressions.
                                                                              • Greatly reduced number of built-in elements. Maybe just <div> and <span> i.e. block and non-block elements.
                                                                              • Remove border/padding/… as first-class concepts. A box with a border is five boxes.
                                                                              • To make all this practical, standardise something like web components. Offer a standard library of components that replicate the functionality of existing tags and attributes.
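                                                                              As a toy illustration of the last few bullets — only block/non-block elements, inline styles, and no first-class borders — here is a hypothetical sketch in Python (every name here is invented, not a proposal for a real API):

```python
# Toy document model with exactly two node kinds and inline styles only.
# "A box with a border is five boxes": bordering is desugared into plain
# blocks rather than being a first-class style property.

class Block:
    def __init__(self, style=None, children=()):
        self.style = dict(style or {})   # inline styles only; no CSS
        self.children = list(children)

class Inline(Block):
    pass  # non-block (span-like) element

def bordered(content, color="black", thickness=1):
    """Desugar 'box with border' into five boxes: four edge strips
    plus the content block, instead of a border property."""
    edge = {"background": color, "size": thickness}
    return Block(children=[
        Block(style=edge),   # top strip
        Block(style=edge),   # left strip
        content,             # the actual content block
        Block(style=edge),   # right strip
        Block(style=edge),   # bottom strip
    ])
```

                                                                              A standard component library would then rebuild things like tables or buttons out of these two primitives, the way the last bullet suggests.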
                                                                              1. 2

                                                                                Make the parsing more consistent. E.g. styles could be expressed with JSON or even full javascript.

                                                                                Why not JavaScript?

                                                                                1. 1

                                                                                  Aw man. So often it feels like the wrong technology won. About 5 years ago I was working on a project that needed to have a transforming and zooming UI; VRML was an absolutely perfect fit, and the webapp in question was already IE-only. And then IE8 came out and removed the functionality entirely. Now I look at web VR and wish they’d kept the solution that existed and worked.

                                                                                2. 2

                                                                                  Hardly anyone reused the same original HTML with different stylesheets at the best of times

                                                                                  Years ago, when I was a young and poor web developer, I did this very often, for fun and profit.

                                                                                  I’m not young anymore, but not even in my worst nightmares did I see the world you are describing here!

                                                                                3. 2

                                                                                  You are assuming the current complexity is needed for the current functionality, and that it isn’t caused by a winding path of legacy that you could avoid with current knowledge.

                                                                                  1. 1

                                                                                    what part of the problem domain are they not overkill for?

                                                                                4. 8

                                                                                  I think you are missing the point: the browser has become a whole operating system by accident.

                                                                                  If you split out the awesome parts, we have ended up with:

                                                                                  • Standard way to display information across operating systems.
                                                                                  • Sandboxing and convenient distribution of cross platform applications.
                                                                                  • Secure communication with remote servers.
                                                                                  • A universal linking/url system.

                                                                                  and remove lots of the legacy we used to get to this point:

                                                                                  • complex JS engines instead of just making a wasm-like system from the start.
                                                                                  • insecure defaults requiring things like OWASP guidelines to begin with.
                                                                                  • many legacy and deprecated APIs.
                                                                                  • … add your favourite problems …

                                                                                  If we designed things with all of the knowledge we didn’t have previously, then we might have an equally capable, more secure and cheaper-to-maintain system.

                                                                                  The false assumption you have is that the complexity is necessary for the current functionality, and that is just incorrect.

                                                                                5. 2

                                                                                  Well, there is, or rather was - gopher. There’s still a Firefox add-on that adds gopher support (I’m not sure whether it works in 57+).

                                                                                  1. 6

                                                                                    Gopher has some nice features, but it doesn’t solve the problem. HTTP is not the hardest part by a long shot, and HTML+CSS+JS monsters can be delivered over gopher just fine :)

                                                                                    1. 4

                                                                                      Don’t get me wrong…

                                                                                      There are excellent things about the web…..

                                                                                      …but the pixel pushers have ruined it by layering on piles of crap.

                                                                                      The entire web stack is in dire need of refactoring.

                                                                                      Note: A “Pixel Pusher” is a person, typically managerial, who insists, at the expense of meaningful content, that the layout / font / alignment / color / … be correct (by their personal, weird definition of correct) “to the pixel”.

                                                                                      The web should be removed from their hands and they should be shut in a dark room with only mspaint to play with.

                                                                                      But let’s not throw out the baby with the bathwater… there are some truly excellent things about the web.

                                                                                      • Uniform Resource Locators. Excellent idea, and in many senses URNs are a hint at an even better idea… (Somewhere between URNs, Distributed Hash Tables, git and Tor is the concept of a content-addressable web.)
                                                                                      • Sadly, Fielding is credited with the definition of REST…. and then everything he says is watered down. His far greater contribution is the requirement that Anarchic Scalability must drive your design choices. Sadly this is treated with as much trepidation as a communist at a Republican Party convention.
                                                                                      • The notion that structure, content and presentation are orthogonal concerns and hence require three different languages is truly great and empowering. And is probably the thing about the web that is most hated by the pixel pushers.
                                                                                      • The importance of standards and interoperability. … however a standard that is not readily implementable is worse than no standard.

                                                                                      If I have a complaint about REST, it’s the “Code on Demand” part. It has been abused without limit to destroy everything else that is good about the web.

                                                                                      I suspect a massively simplified “BPF like” deliberately Turing Incomplete CoD would be a better design.
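                                                                                      To make that concrete, here is a hypothetical sketch of a deliberately Turing-incomplete CoD in the BPF spirit: straight-line bytecode with no jumps at all, so every program provably halts after at most one step per instruction. The opcode set and names are invented for illustration:

```python
# Toy Turing-incomplete interpreter: forward-only execution over a small
# stack. No branches, no loops, bounded memory -> guaranteed termination.

def run(program, stack_limit=64):
    """Interpret a list of (opcode, arg) pairs; always halts after
    len(program) steps because there are no backward jumps."""
    stack = []
    for op, arg in program:          # single pass: termination by design
        if op == "push":
            if len(stack) >= stack_limit:
                raise ValueError("stack limit exceeded")
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown opcode: %r" % op)
    return stack
```

                                                                                      A server can safely execute untrusted programs like this because the worst case is bounded up front — exactly the property BPF verifiers enforce.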

                                                                                      What would I throw out?

                                                                                      Straight out immediately would go the SGML / XML heritage. What a mistake that was.

                                                                                      There is very very little about JavaScript I like…. so out goes that as well.

                                                                                      Even if one removes the idiocy of wrangling incompatible browsers and the dire hideousness of monstrosities like bootstrap…. you can’t sanely write CSS without a CSS macro preprocessor facility.

                                                                                      1. 2

                                                                                        HTTP isn’t necessarily the issue here - curl can handle it flawlessly, for example, and every major language has some kind of HTTP library. Web servers also exist in abundance.

                                                                                        But when it comes to HTML/CSS/JS - that’s where the mess begins. Partially revised standards, torn between backwards compatibility and incompatibility, between being so simple it’s lacking and so complicated it’s overwhelming. A CSS layout engine doesn’t only have to satisfy a standard and a reference implementation, but also all the quirks and ad hoc fixes that major, old, and long-gone browsers came up with to tackle the practical insufficiency of what some guys came up with.

                                                                                        HTML, a markup language (implying it’s intended to stylize text, words, paragraphs, etc. - but not necessarily typeset it in every detail), is both cause and effect of this same problem. Due to its popularity, people want to use it everywhere, and due to its potential, it was hacked into actually fulfilling most of these wishes. And of course, if something is hacked, it probably wasn’t quite well designed. So it follows that the HTML that was once written about in some long-forgotten RFC and the one a browser engine developer has to suffer with are quite different. The former only has to fulfill an ideal and an intention; the latter has to satisfy a history of compromises, hacks and other tricks on the border of keeping up with, breaking and inadvertently creating new standards.

                                                                                        I’ve been suggesting and thinking about this for a while now: to fix this we can only use hindsight and humility to create a simpler, more coherent, but still modern subset of HTML+CSS - one that stays true to the intended use of the format, but recognizes the needs of its users and the limits of its standards. Kind of like XHTML, but less pedantic and based on convention rather than fatal incompatibility. Ideally it would arise organically, but if necessary, groups and individuals could offer suggestions and debate the differences. Eventually, so I naively hope, browser developers would commit the radical act of adopting one of these subsets, thereby creating greater incentive and legitimacy to use them. But that’s only to be hoped for…

                                                                                        1. 1


                                                                                          1. 1

                                                                                            gopher doesn’t address application sandboxing and cross-platform distribution of applications. There is no evidence to me that gopher wouldn’t have feature-crept in the same ways.

                                                                                          2. 1

                                                                                            Of all the things he mentioned, implementing web standards is certainly only a fraction of the cost - regardless of their complexity.

                                                                                            1. 1

                                                                                              No, Mozilla needs all that overhead because of the complexity.

                                                                                              If you sliced away two full orders of magnitude of complexity, you wouldn’t need that cruft.

                                                                                              What would a PR person have to do apart from saying, “It’s plain, simple, reliable, fast and it works”?

                                                                                              Any questions about “But does it have feature X?” can simply be answered with a “No, and it shouldn’t”

                                                                                              Where’s the PR department for curl?

                                                                                              1. 1

                                                                                                tbf curl is rather complex for what most people use it for, though that’s probably due to pressure to accommodate the complexities of the web.

                                                                                            1. 1

                                                                                              Hmm, I couldn’t remote-follow you for some reason. I get this error:

                                                                                              “Security verification failed. Are you blocking cookies?”

                                                                                              Other instances do seem to handle my cookie policy just fine (whatever it is, who knows anymore).

                                                                                            1. 5

                                                                                              Meanwhile, I’m still waiting on ESR for Vimperator or Vimium to catch up :)

                                                                                              1. 5

                                                                                                Vimperator is EOL sadly.

                                                                                                cmcaine has gotten an extended keyboard API for WebEx approved, but it’s not slated to be implemented until the next release. They are also working on a replacement for Vimperator called Tridactyl: https://github.com/cmcaine/tridactyl

                                                                                                1. 3

                                                                                                  Yeah, I know.

                                                                                                  That said, Vimium is said to be the best among WebEx-compatible, Vimperator-like extensions. I think I’ll probably wait until 52 is EOL and decide what to do then.

                                                                                                  1. 4

                                                                                                    Someone on the orange website mentioned this one: https://github.com/ueokande/vim-vixen - apparently the only one supporting ex commands.

                                                                                                2. 1

                                                                                                  I’ve been using Vimium with Quantum (Firefox Developer Edition) for a few days and haven’t noticed any problems. (I am a long-time user of Vimium in Chrome.) I don’t know if it’s at 100% feature parity, but all of the features I use work.