1. 5

    I just installed the latest versions of spotifyd and spotify-tui using cargo (compiling the Rust code on my machine) and it works! Some tweaks were required for my System76 / Pop OS install:

    • When configuring spotifyd, I ended up putting my username and password in quotes. That was incorrect, and it wouldn’t log in! When I removed the quotes for all of the values in spotifyd.conf, I was able to log in as expected.
    • When attempting to compile spotifyd, I got an error complaining “Package alsa was not found in the pkg-config search path”. I had pkg-config installed both through apt and Homebrew for Linux. Uninstalling the Homebrew for Linux copy of pkg-config let it compile.
    • I want Rust packages to update regularly as if they were in a package manager, so I use https://github.com/nabijaczleweli/cargo-update to check for updates. In this case, I wanted to compile spotifyd with the dbus_mpris feature flag to get multimedia key support on my Gnome desktop, and I wanted future updates to remember that. Cargo-update supports this, so I did cargo install-update-config spotifyd -f dbus_mpris to store that preference.
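
    For concreteness, here's the shape of a working config, written to a temp file just to illustrate. The [global] section and key names follow common spotifyd setups of that era (check your version's docs), the credentials are obviously placeholders, and the fix was leaving every value unquoted:

    ```shell
    # Hypothetical spotifyd.conf; note: no quotes around any value.
    conf=$(mktemp)   # stands in for ~/.config/spotifyd/spotifyd.conf
    printf '%s\n' \
      '[global]' \
      'username = alice@example.com' \
      'password = correct-horse-battery-staple' \
      'backend = alsa' > "$conf"
    cat "$conf"
    ```

    (The dbus_mpris preference from the last bullet is stored separately by cargo-update, not in this file.)
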
    1. 4

      What browser engine does the web browser use? Did they write their own?

      1. 16

        They’re writing their own. On the YouTube channel you can watch the vlogs of the implementation; it’s very inspiring and fun.

        1. 6

          This seems worthy of comment, then, in a world where we are worried about the number of independent browser engines. Could LibWeb or whatever the browser engine is called be extracted for use elsewhere, or is it tied to SerenityOS in some fashion?

          1. 7

            Maybe, but it’s no more complete or compliant than many other small browser projects.

            1. 8

              A big difference between this and other small browser projects is that Andreas worked on WebKit and Safari for many years, so he has a more complete knowledge of browsers than most other small projects’ authors.

            2. 3

              It could certainly be extracted with some effort. It’s implemented on top of various SerenityOS libraries (LibCore, LibGfx, LibGUI, LibIPC, LibProtocol) so most of the work would probably be building a platform abstraction layer where these things would be pluggable at build-time.

              1. 1

                a platform abstraction layer where these things would be pluggable at build-time

                That could feasibly be (subsets of) all those Lib* bundles reimplemented on top of POSIX/Win32/what-have-you.

        1. 1

          I see this PR was merged but don’t see the setting exposed anywhere.

          I am willing to implement the dark mode myself, together with automatic switching based on the OS setting, if the Lobsters developers are okay with accepting it.

          1. 4

            If you look at the bottom of that PR you can see it was reverted later that day:

            Firefox support is half-baked. There’s no user or devtool UI to toggle between states, inspecting an element always shows the style for ::selection instead of the element, and it lists the name of a color variable with no way to see the value of the variable or where it is set. Punting until it’s debuggable.

            1. 1

              There is a devtools toggle that is not enabled by default. Relevant bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1550804

              When I asked about it in the DevTools Matrix channel recently, they said:

              The plan is to replace the multi-state button with a drop-down menu with three explicit options, like in Chrome: 1. no preference (uses current system), 2. prefers dark, 3. prefers light. We’ll work on this soon. Hoping to enable it on the next Firefox Beta.

              1. 1

                Thanks for the link, I’m following what seems to be the relevant issue. In the meantime, there are personal approaches linked from the about page.

            2. 1

              You need to activate dark mode in your browser to get that PR turned on :)

              1. 1

                Can you link to documentation how to do that for Safari please?

                I thought browsers themselves don’t have a concept of dark mode or light mode, but the OS does.

                1. 1

                  Hmm. The documentation I can find seems to indicate Safari should follow the OS dark mode setting, yeah. But on my work laptop Safari loads lobste.rs without dark mode, so I must be missing something…

            1. 18

              SCALE YOUR THING. I WANT TO READ YOUR THING ON MY PHONE. SCALE IT.

              I don’t know, I feel like it’s not my fault that the iPhone came along and rendered pages unreadable. Why is it not just a button in Safari – like reader mode, but honestly better – that toggles that meta tag on and off? Instead it’s up to literally every web page that existed and worked for fifteen years before the iPhone to add Apple’s special meta tag…
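
              For anyone following along, the tag under discussion is the viewport meta tag. A sketch of its conventional form (the exact content values are common convention, not mandated by any one spec):

              ```html
              <!-- Without this, mobile browsers assume a desktop-width viewport
                   (~980px in mobile Safari) and scale the whole page down. -->
              <meta name="viewport" content="width=device-width, initial-scale=1">
              ```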

              1. 10

                Nor is it your readers’ fault. The buck could have stopped with browsers, and I think it probably should have, but it didn’t. Why pass the buck to users?

                I think it’d be useful in this conversation to remember what the web looked like in a 320px-wide desktop browser in 2007. If I recall, every site had a sidebar or two, and Apple used the multi-column New York Times as an example case. Responsive layout was harder then, unless you had a one-column page, and lots of sites had a desktop site and a mobile site rather than one responsive site. Desktop sites were often designed toward an exact width for display and just centered that block in the browser window. I think a very slim desktop browser would have had a hard time making then-current pages look decent. You’d just have to scroll sideways. And if that’s true, mobile Safari would have the same problem if they hadn’t given it a desktop-size viewport by default, and asked pages to identify when they are meant for mobile devices. I don’t know if this was a great long term solution because now more traffic is mobile than desktop, but it did make the existing desktop web nice to use on a phone 13 years ago.

                1. 3

                  Yes! This infuriates me. We’ve cut down on so much crufty boilerplate in HTML5, so why do we have to write this annoying piece of boilerplate on every web page? Every browser that needs this meta tag to be readable should assume it is the default; that should be part of the HTML spec.

                  This tag is a waste of precious characters, and in all seriousness a waste of energy / resources that we cannot afford in the climate crisis. How much energy would be saved if every web page deleted that tag tomorrow? I bet a non-trivial amount.

                  1. 6

                    It’d be great if browsers made that tag unnecessary. But until then, there’s a worse energy cost of loading pages that lack the tag on phones, seeing that the text is too small to read, and then loading those entire pages again on another device with empty caches. Especially in this era of page bloat, I think it’s responsible to put effort into making the page body usable on any requesting device.

                    1. 3

                      You’re absolutely correct; I’m not saying that web devs should avoid this tag today. What I’m saying is that this tag should be made unnecessary ASAP: the browser makers should assume it is the default rather than requiring every web page to include it.

                    2. 5

                      This tag is a waste of precious characters, and in all seriousness a waste of energy / resources that we cannot afford in the climate crisis.

                      I do not see how this follows at all from the OP’s suggestion.

                      1. 1

                        Yeah I don’t think 16 bytes or so is going to be a significant problem with web usage versus something like, oh I dunno, all the shitty javascript adding 16k bytes or sometimes upwards of a megabyte in data to be transferred.

                        Premature optimization starts here, apparently folks.

                  1. 1

                    Now I really want a font that supports the “Per” symbol. Anyone know of one, or how to search for such a font?

                    I can’t even find another source for info about the “per” symbol, aside from this entry in the Unicode table, let alone a modern font that supports it. The Wikipedia article for that Unicode character redirects to “Ratio”.
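
                    One way to chase this down, sketched here: get the character’s Unicode name from the codepoint (it’s U+214C), which gives you a search term, then ask fontconfig which installed fonts cover that codepoint. The fc-list charset query is standard fontconfig syntax, though the command’s availability varies by system:

                    ```shell
                    # U+214C is formally named PER SIGN; that name is the searchable handle.
                    python3 -c 'import unicodedata; print(unicodedata.name("\u214c"))'
                    # PER SIGN

                    # On fontconfig systems (most Linux/BSD desktops), list covering fonts:
                    # fc-list ':charset=214c' family
                    ```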

                    1. 3

                      Firefox inspector reveals the font rendering as “Unviersalia”. At least on my Arch Linux it renders correctly everywhere, including VTE-based terminals, but I can’t verify which font file it comes from.

                      EDIT: Of course Noto includes this. Also, /usr/share/fonts/misc/10x20.pcf.gz, 6x12.pcf.gz, Code2000.ttf, and Symbola.otf

                      1. 2

                        fileformat.info has a handy “fonts that support this” link - https://www.fileformat.info/info/unicode/char/214c/fontsupport.htm

                      1. 3

                        The part I’m missing is what is the definition of “post-container era”? I would assume it is something that comes after containers, yet the project talks about “everything builds in a container”.

                        1. 2

                          I’m assuming it means the “container era”: the era after (“post”) the introduction of containers.

                          1. 2

                            It would save precious characters to just call it “the container era”, and be less confusing. Post-punk means what came after punk, it is not the same thing as punk.

                            1. 3

                              Yeah, I agree it’s incorrect if it’s actually meant to be the “container era”. As it stands right now, it’s legitimate to ask, “when did the container era stop and what replaced containers?”. I was just offering my interpretation.

                              1. 1

                                Agreed. It’s a bit confusing, is it not? I’ll see if I can send them a message about it.

                            2. 1

                              I thought it meant that containers are already passé…

                            1. 28

                              Unix was never as simple as we’d like to remember – or pretend – that it was. Plenty of gotchas have always been lurking around the corners.

                              For instance, newlines are totally legit in filenames. So in the absence of insane names, ls | foo will write filenames with one name per line to foo’s stdin. Usually it’s fine to treat ls output as a series of newline-separated filenames, because by convention nobody creates filenames with newlines in them. But for robustness and security we have things like the -0 argument to xargs and cpio, and the -print0 argument to find.
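
                              A minimal sketch of both the hazard and the NUL-terminator fix, using a throwaway directory:

                              ```shell
                              # Newlines are legal in filenames, so line-based lists of names can lie.
                              dir=$(mktemp -d)
                              touch "$dir/plain.txt" "$(printf '%s/evil\nname.txt' "$dir")"

                              # Two files, but counting lines of ls output sees three "names":
                              naive=$(ls "$dir" | wc -l)
                              echo "$naive"    # 3

                              # NUL-terminated names survive intact; count the arguments xargs delivers:
                              robust=$(find "$dir" -mindepth 1 -print0 | xargs -0 sh -c 'echo $#' sh)
                              echo "$robust"   # 2

                              rm -rf "$dir"
                              ```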

                              For a system that is based on passing around data as text, the textual input and output formats of programs are often ill-suited to machine parsing. Examples of unspecified or underspecified text formats are not difficult to find. I’m really glad to see some venerable tools sprouting --json flags in recent years, and I hope the trend continues.

                              1. 5

                                Anything but JSON. If plain text is being used because of its readability, JSON is largely antithetical to that purpose.

                                1. 14

                                  JSON fits a nice sweet spot where humans and machines can both read and edit it with only moderate amounts of anguish. As far as I can tell there is not a good general-purpose replacement for JSON.

                                  1. 8

                                    a long article promoting JSON, with less than a full sentence for S-expressions

                                    1. 4

                                      What? It’s marked to-do. Here, I’ll just do it. Check the page again.

                                    2. 3

                                      What about Dhall?

                                      1. 2

                                        You might consider including EDN; I think it makes some interesting choices.

                                        Another point: the statement that JSON doesn’t support integers falls into a weird gray area. Technically it’s not specified what it supports (https://tools.ietf.org/html/rfc8259#section-6). If you’re assuming the data gets mangled by a JS system, you’re limited to integers representable by doubles, but that’s a danger point for any data format.
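
                                        A concrete illustration of that gray area: the format happily carries 2^53 + 1, and whether it survives depends entirely on the consumer’s number type. A quick check using Python, whose json module maps numbers to arbitrary-precision ints:

                                        ```shell
                                        # Round-trip of a big integer is exact when the consumer keeps real ints:
                                        python3 -c 'import json; print(json.loads(json.dumps(2**53 + 1)))'
                                        # 9007199254740993

                                        # But an IEEE-754 double (what JS engines back JSON numbers with) cannot
                                        # tell 2**53 + 1 apart from 2**53:
                                        python3 -c 'print(float(2**53 + 1) == float(2**53))'
                                        # True
                                        ```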

                                        1. 1

                                          I actually like this quite a bit, thanks!

                                        2. 1
                                          1. 7

                                            Looks fine, but it’s binary and schema-defined, so it makes very different design tradeoffs than JSON does. It’s not an alternative to JSON; it’s an alternative to protobuf, Cap’n Proto, or FlatBuffers, or maybe CBOR or MessagePack. There’s a plethora of basically-okay binary transfer formats these days, probably because they prevent people from arguing as much about syntax.

                                            1. 4

                                              I won’t go into details about where, but at work we have used stateless tokens for the longest time. For us, it’s been a terrible design decision and we’re finally moving off it. Why? Decryption is CPU bound, so it doesn’t scale nearly as well as memory lookups, which is what stateful tokens represent. Moreover a lot of our decryption libraries do not seem to be particularly consistent (high variance if we assume that the distribution is somewhat normal) in their timing. This poses a problem for optimizing the tail end of our latency. At small to medium scales stateless tokens are fine, but as we took on higher scale it just didn’t work. Memory lookups are fast, consistent, and scale well.

                                            2. 1

                                              You should post this as an article! A few comments:

                                            3. 3

                                              Anything but JSON.

                                              Careful what you wish for…

                                              1. 1

                                                FreeBSD has had libXo for a while: https://wiki.freebsd.org/LibXo

                                              2. 4

                                                You can also legitimately give a file a name that starts with a dash, making it challenging to access or delete unless you know the trick.

                                                1. 3

                                                  I remember reading a book on UNIX back in the day (1994? around then) which talked about this issue. The given solution in this professional tome was to cd up and then delete the whole directory.

                                                  (Asking how to handle this problem was also a common question in interviews back in the day, maybe still today I don’t know.)

                                                  1. 4

                                                    That’s… wrong, at best. rm ./-rf always worked, even when a tool is buggy and doesn’t support -- to terminate argument parsing.
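
                                                    Both escapes, sketched in a throwaway directory:

                                                    ```shell
                                                    dir=$(mktemp -d) && cd "$dir"

                                                    touch ./-rf      # the ./ prefix keeps '-rf' from parsing as options
                                                    rm ./-rf         # works with any rm, even one lacking '--' support

                                                    touch -- -rf     # '--' marks the end of options (POSIX syntax guideline 10)
                                                    rm -- -rf        # works wherever the tool honors '--'
                                                    ```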

                                                    1. 3

                                                      The man page for (GNU coreutils) rm now mentions both methods prominently. I believe you’ll get a prompt if you try it interactively in bash too.

                                                      1. 6

                                                        Yeah but kids these days don’t read man, they google, or at best, serverfault.

                                                        </oldmanyellsatcloud>

                                                        1. 14

                                                          No wonder they google. Have you tried reading a man page without knowing Linux inside and out? They all pretty much suck. Take the tar man-page for example. It says it’s a “short description” of tar, while being over 1000 lines long, but it fails to include ANY examples of how to actually use the tool. There are examples of how to use different option styles (traditional and short options), a loooong list of flags and what they do in excruciating detail, a list of “usages” that don’t explain what they do, and the return values tar can give.

                                                          I mean, imagine you need to unpack a tar.gz file, but you have never used tar before and you are somewhat new to Linux in general, but you have learned about the man command and heard you need to use tar to unzip a file (not a given really) so you dutifully write man tar in your terminal and start reading. The first line you are met with looks like this:

                                                          tar {A|c|d|r|t|u|x}[GnSkUWOmpsMBiajJzZhPlRvwo] [ARG…]

                                                          Great. This command has more flags than the UN headquarters. You look at it for a couple seconds and realise you have no idea what any of the switches mean, so you scroll a bit down:

                                                          tar -c [-f ARCHIVE] [OPTIONS] [FILE…]

                                                          Cool. This does something with an archive and a file (Wouldn’t it be helpful if it had a short description of what it does right there?). What it does is a mystery as it doesn’t say. You still have to scroll down to figure out what -c means. After scrolling for 100 lines you get to the part that lists out all the options and find -c. It means that it creates an archive. Cool. Not what we want, but now that we are here maybe we can find an option that tells us how to unpack an archive?

                                                          -x, --extract, --get

                                                          Sweet! We just found the most common usage at line 171! Now we scroll up to the top and find this usage example:

                                                          tar -x [-f ARCHIVE] [OPTIONS] [MEMBER…]

                                                          The fuck is a MEMBER? It’s in brackets, so maybe that means it’s optional? Let’s try it and see what happens. You write tar -x -f sample.tar.gz in your terminal, and hey presto! It works! Didn’t take us more than 10 minutes reading the man page and trying to understand what it means.

                                                          Or, if you understand how to use modern tools like Google to figure out how to do things, you write the query “unzip tar.gz file linux” into Google and the information box at the top says this:

                                                          For tar.gz. To unpack a tar.gz file, you can use the tar command from the shell. Here’s an example: tar -xzf rebol.tar.gz.

                                                          You try it out, and what do you know? It works! Took us about 10 seconds.

                                                          It’s no wonder that people search for solutions instead. The man files were obviously not written for user consumption (maybe for experienced sysadmins or Linux developers). In addition, this entire example assumes you know that tar can be used to extract files to begin with. If you don’t know that, then you are shit out of luck even before you open the man file. Google is your only option, and considering the experience of reading man files, no surprise people keep using Google instead of trying to read the “short description” that is the size of the fucking Silmarillion!

                                                          /rant
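
                                                          For what it’s worth, the whole round trip the rant is chasing fits in a few lines (scratch directory; file names are arbitrary):

                                                          ```shell
                                                          dir=$(mktemp -d) && cd "$dir"
                                                          echo hello > rebol.txt
                                                          tar -czf rebol.tar.gz rebol.txt   # c = create, z = gzip, f = archive file
                                                          rm rebol.txt
                                                          tar -xzf rebol.tar.gz             # x = extract
                                                          cat rebol.txt                     # hello
                                                          ```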

                                                          1. 4

                                                            I don’t disagree with the general sentiment here, but I think you’ve found a man page that is unusually bad. Here’s some excerpts from some random ubuntu box.

                                                             

                                                            it fails to include ANY examples of how to actually use the tool.

                                                            EXAMPLES
                                                                 Create archive.tar from files foo and bar.
                                                                       tar -cf archive.tar foo bar
                                                                 List all files in archive.tar verbosely.
                                                                       tar -tvf archive.tar
                                                                 Extract all files from archive.tar.
                                                                       tar -xf archive.tar
                                                            

                                                             

                                                            Cool. This does something with an archive and a file (Wouldn’t it be helpful if it had a short description of what it does right there?).

                                                            Mine has, comfortably within the first screenful:

                                                              -c, --create
                                                                    create a new archive
                                                            

                                                             

                                                            Not what we want, but now that we are here maybe we can find an option that tells us how to unpack an archive?

                                                            Something like 20 lines below that:

                                                              -x, --extract, --get
                                                                    extract files from an archive
                                                            

                                                             

                                                            Anyway, I don’t think man pages are intended to be good tutorials in the general case; they’re reference materials for people who already have an idea of what they’re doing. Presumably beginners were expected to learn the broad strokes through tutorials, lectures, introductory texts etc.

                                                            I think that split is about right for people who are or aspire to be professional sysadmins, and likely anyone else who types shell commands on a daily basis—learning one’s tools in depth pays dividends, in my experience—but if it’s the wrong approach for other groups of people, well, different learning resources can coexist. There’s no need to bash one for not being the other.

                                                            1. 2

                                                              This is a GNU-ism, you’re supposed to read the Info book: https://www.gnu.org/software/tar/manual/tar.html

                                                              But that also lacks a section detailing the most common invocations.

                                                              OpenBSD does it better: https://man.openbsd.org/tar

                                                              Of course, on the 2 Debian-based systems I have access to, info pages aren’t even installed… you just get the man page when you invoke info tar.

                                                              1. 1

                                                                I was just going to bring up info. I believe in many cases manpages for GNU tools are actually written by downstream distributors. For example Debian Policy says every binary should have a manpage, so packagers have to write them to comply with policy. Still more GNU manpages have notes somewhere in them that say “this manpage might be out of date cause we barely maintain it; check the info documentation.” Really irritating. Honestly I never learned how to use info because man is Good Enough™. I mean, come on. Why must GNU reinvent everything?

                                                  2. 1

                                                    I don’t think the author has to deny this; the difficulty of teaching doesn’t have to be the same as the difficulty of using. Difficulty in use complicates the system, which then makes it harder to teach – for example, because of --json flags.

                                                    1. 9

                                                      Zen’s claims seem sketchy.

                                                      1. “We cannot see a future for Zig where the founder does not allow corporate entities to use and support Zig” — unless I really missed something, there’s nothing in Zig’s licensing or community against that.
                                                      2. OK, they have a point here; I also feel that polymorphism is important. (That was one of several missing things that led me to stop exploring Zig. The other one being lack of a global memory allocator.) But that of course is only a rationale for forking, not for going closed-source.
                                                      3. IANAL, but I know that trademarks are very domain-specific. Would the existing “Zig™” trademarks cover a programming language/compiler? Is “Zen™” any more available?
                                                      4. “we want to prioritize embedded development” — other languages, notably Rust, have been able to accommodate embedded systems without a hard fork. (Not sure if MicroPython / CircuitPython count as forks or not.)
                                                      1. 11

                                                        That entire list just seems like a post-hoc rationalisation of “I don’t like to work with Andrew”. That’s actually fair enough; sometimes people don’t work well together based on different interests, personalities, differences of opinion, etc. but just be honest about it instead of all this FUD.

                                                        1. 4

                                                          Zig is already ideal for embedded development. Zen has no value-add there other than translating the docs into Japanese.

                                                          1. 3

                                                            Maybe they are offering support contracts that Zig does not?

                                                          2. 4

                                                            “We cannot see a future for Zig where the founder does not allow corporate entities to use and support Zig” — unless I really missed something, there’s nothing in Zig’s licensing or community against that.

                                                            I’m reminded of the old saying, “A lack of imagination on your part does not constitute impossibility on our part.” Taking the premise that Zig doesn’t ‘support corporate entities’ as true (which I do not think is the case) – Just because a company can’t imagine a future for a language that doesn’t support companies doesn’t mean that there is no future.

                                                          3. 5

                                                            Some interesting info over at the orange site too. Apparently Zen/connectFree is trying to register Zig as a trademark?!
                                                            That’s really shady.

                                                            1. 3

                                                              It reminds me of something similar which happened with Linux a long time ago:

                                                              The Linux trademark is owned by Linus Torvalds in the U.S.,[2] Germany, the E.U., and Japan for “Computer operating system software to facilitate computer use and operation”. The assignment of the trademark to Torvalds occurred after a lawsuit against attorney William R. Della Croce, Jr., of Boston, who had registered the trademark in the US in September 1995[3] and began in 1996 to send letters to various Linux distributors, demanding ten percent of royalties from sales of Linux products.[4] A petition against Della Croce’s practices was started,[5] and in early 1997, WorkGroup Solutions, Yggdrasil, Linux Journal, Linux International, and Torvalds appealed the original trademark assignment as “fraudulent and obtained under false pretenses”.[5] By November, the case was settled and Torvalds owned the trademark.[3]

                                                              The lesson is to register the trademark early, before someone else does and begins to use it against you.

                                                              1. 4

                                                                The lesson is to register the trademark early, before someone else does and begins to use it against you.

                                                                It’s not really as simple as that.

                                                                1. There are a lot of different jurisdictions around the world.
                                                                 2. Registering a trademark is not free: there are generally fees, in addition to the time and knowledge required to do it. For foreign jurisdictions, you may need the services of somebody who speaks the language and understands the legal process to acquire a trademark.

                                                                So yes, if you have a moderately successful project, it may make sense to trademark the name in your own jurisdiction, and possibly a few other key areas, but it’s by no means something that everyone should do early on, especially for projects which have no idea that they will grow into something big.

                                                                An alternative lesson from that story would be that registering your trademark early is not that important, because using a name outweighs registering it anyway (though doing both will probably save you a legal battle).

                                                              2. 2

                                                                There was some Japanese company (a sort of programming class) that was able to get a trademark on Python and would claim that they were the only ones allowed to do “Python training certification” with it… Really shady stuff when you see this sort of thing going on.

                                                              3. 4

                                                                Just to be clear, is Zen violating the MIT license here by replacing it with their own license?

                                                                If so, then at the moment it wouldn’t make a difference if Zig were licensed under the GPL, Zen is currently violating copyright law and Zig could sue. Presumably Zig is not doing so because they don’t think it’s a good use of their time and money, and these public posts warning about Zen’s bad faith actions should achieve most of what they want, i.e. preventing people from getting scammed.

                                                                Personally I think that A/GPL licenses specifically and copyleft in general deserve more popularity with new projects than they currently enjoy, and I avoid permissive licenses in my own work, but this may not be the time to bring it up.

                                                                EDIT: According to a random comment on HN, “They are complying, the original Zig license is at the bottom of the file lib/zen/std/LICENSE (complete with “Copyright (c) 2019 Andrew Kelley”). I just downloaded it from the Zen website, and the tarball is dated 2020-09-04.”

                                                                1. 5

                                                                  The BSD license (some of them anyway, there are a gazillion variants) is actually a bit clearer on this:

                                                                  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

                                                                  With “retain” it seems to mean more “keep this notice here”, rather than “stuff it somewhere in the project”.

                                                              1. 19

                                                                Mastodon used about ~2.5 GB out of the 4 I have on my Pi. With Pleroma, the total used RAM is only about ~700 MB. That’s crazy!

                                                                I agree it’s crazy. Crazy less bloated, and crazy still bloated.

                                                                700MB. Christ.

                                                                1. 27

                                                                  To be clear, the 700 MB is the total RAM usage, i.e. by all programs and not Pleroma alone.

                                                                  1. 21

                                                                    That 700MB includes a Postgres database and a webserver.

                                                                    1. 9

                                                                      I wonder if we can still run Pleroma on a 256 MB RAM system. Most of the RAM is used by Postgres, and that can be configured to use a lot less.

                                                                      1. 11

                                                                        I bet you can, but it's also very tricky to cap PostgreSQL's RAM usage. First off, the defaults are very conservative; in most cases you would be cranking all the values up, not down. But you already know that: if I recall correctly, I saw some great articles on PostgreSQL inner workings in your blog posts on Pleroma development.

                                                                        That said, there are several settings that directly or indirectly influence how much memory PostgreSQL will use. shared_buffers covers the actual working set of data the DB hacks on, and will be the largest immediate RAM allocation. Then we have the tricky parts, like work_mem, which is a per-connection allocation but not a per-connection limit: if your work_mem is 8 MB and you execute a query whose resulting plan has 4 nodes, that one connection can allocate up to 4 × 8 MB. If you add parallel query execution, multiply that by the number of concurrently running workers. I assume Pleroma uses a connection pool, so that alone can bump RAM usage a lot. Add things like maintenance_work_mem for tasks such as vacuums and index rebuilds, and you can quickly see how the actual memory usage can fluctuate on a whim.
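Purely as a hedged illustration (these values are made-up starting points for a small, memory-constrained box, not recommendations), capping those knobs in postgresql.conf might look something like this:

```ini
# postgresql.conf — illustrative low-memory starting points, not tuned advice
shared_buffers = 64MB                 # working-set cache; the largest fixed allocation
work_mem = 2MB                        # per plan node, per connection — not a hard cap
maintenance_work_mem = 16MB           # vacuums and index rebuilds
max_connections = 20                  # fewer connections bounds worst-case work_mem use
max_parallel_workers_per_gather = 0   # disable parallel query to avoid multipliers
```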

                                                                        To the point.

                                                                        I agree it’s crazy. Crazy less bloated, and crazy still bloated.

                                                                        700MB. Christ.

                                                                        I simply think @ethoh is wrong. 700 MB usage is crazy low for an RDBMS, and we are talking about an RDBMS plus a whole app using it. Databases are designed to utilize memory and avoid hitting the disk when not necessary. Unused memory is wasted memory.

                                                                        1. 3

                                                                          700 MB usage is crazy low for a RDBMS

                                                                          I don’t really get how you can make this claim with no reference at all to the data storage needs of the application. A fair metric would be the overhead of the DB relative to the application data. In this case we’d need to know some things about how Mastodon and Pleroma work, and how OP managed his instances of them.

                                                                          1. 4

                                                                            I don’t really get how you can make this claim with no reference at all to the data storage needs of the application.

                                                                            In similar fashion, the OP claimed that 700 MB is crazy bloated; I was making a reference to that. However, to back up my claims with some quick napkin calculations:

                                                                            Default shared_buffers for PostgreSQL 12 is 128 MB. Per PostgreSQL documentation the recommended setting is roughly 25% of available system RAM then measure.

                                                                            If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for shared_buffers is 25% of the memory in your system.

                                                                            source: https://www.postgresql.org/docs/12/runtime-config-resource.html

                                                                            The system in question has 4 GB of RAM so by that logic 1 GB for shared_buffers would be a reasonable setting - hence 700 MB at that point could be considered crazy low.

                                                                            Default work_mem is 4 MB, max_worker_processes is set to 8 and max_connections by default is 100 (https://www.postgresql.org/docs/12/runtime-config-connection.html#GUC-MAX-CONNECTIONS). This means that query execution can easily eat up to 3.2 GB by default in the absolutely unlikely worst case scenario.

                                                                            maintenance_work_mem is by default an additional 64 MB.

                                                                            So we are looking at PostgreSQL itself using anywhere between 128 MB and 3 GB of RAM with its default settings, which are ultra conservative and usually the first thing everyone increases. This is before considering the actual data and application workload.

                                                                            By this logic, 700 MB for PostgreSQL on a running Pleroma instance, including the memory used by Pleroma itself, personally strikes me as crazy low.
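The napkin math above can be checked mechanically; a quick sketch plugging in the quoted PostgreSQL 12 defaults:

```python
# Worst-case query memory estimate from the PostgreSQL 12 defaults quoted above.
shared_buffers_mb = 128        # default shared_buffers
work_mem_mb = 4                # default work_mem (per plan node, per connection)
max_connections = 100          # default max_connections
max_worker_processes = 8       # default max_worker_processes

# Absolutely-unlikely worst case: every connection runs a query whose plan
# allocates work_mem in max_worker_processes nodes/workers simultaneously.
worst_case_mb = work_mem_mb * max_connections * max_worker_processes
print(worst_case_mb)  # 3200 — the ~3.2 GB figure cited above
```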

                                                                            1. 5

                                                                              But, this little Pi is not a dedicated database server, it at least hosts the app too? And defaults are just defaults. Maybe indicative of PG usage in general, across every application that uses it, but that’s a really broad brush to be painting such a tiny picture with! I still think there are a few different species of fruit being compared here. But I do appreciate your explanation, and I think I understand your reasoning now.

                                                                            2. 1

                                                                              Fwiw, my Pleroma database is approaching 60GB in size.

                                                                              1. 1

                                                                                Due to shit posting or bot? You can clean it up a little bit by expiring remote messages older than 3months

                                                                                1. 2

                                                                                  I have a dedicated 500GB NVMe for the database. Storage isn’t a problem and it’s nice for search purposes.

                                                                          2. 2

                                                                            I’m still not convinced that PostgreSQL is the best storage for ActivityPub objects. I remember seeing in Pleroma that most of the data is stored in a jsonb field, and that makes me think that a key-value store based on object IDs would be simpler and maybe (?) faster.

                                                                            I’m currently implementing a storage “engine” based on this idea, saving the plain JSON as plain files in a directory structure. It is, of course, missing ACID[1] and other niceties, but I feel the simplicity is worth it for an application that just wants to serve content for a small ActivityPub service without any overhead.

                                                                            [1] IMHO ACID is not a mandatory requirement for storing ActivityPub objects, as the large part of them (activities) are immutable by design.

                                                                            1. 5

                                                                              Misskey used to use a NoSQL document store. They switched to PostgreSQL because of performance issues. I’m sure you could build an AP server with a simpler store, but we do make heavy use of relational features as well, so the relatively ‘heavy’ database part is worth it for us.

                                                                              1. 2

                                                                                Yes. One problem with an off-the-shelf key-value store in this setup is that scanning over the whole keyspace to filter objects is way less efficient than a well-indexed DB. Even though I’m not there yet, I’m thinking of adding some rudimentary indexes based on bloom filters for properties that might require filtering.
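A rough sketch of that idea (the sizing and hash choices are made up for illustration; a real bloom filter would size its bit array from the expected element count and target false-positive rate):

```python
import hashlib

class BloomFilter:
    """Tiny illustrative bloom filter over string property values."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # an int doubles as a bit array

    def _positions(self, value):
        # Derive num_hashes bit positions by salting a SHA-256 digest.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        # False means "definitely absent"; True means "possibly present",
        # so a keyspace scan can skip objects whose filter says False.
        return all(self.bits >> pos & 1 for pos in self._positions(value))

# Index objects by a property (here a hypothetical "attributedTo" value)
# to avoid scanning the whole keyspace when filtering.
index = BloomFilter()
index.add("https://example.social/users/alice")
print(index.might_contain("https://example.social/users/alice"))  # True
print(index.might_contain("https://example.social/users/bob"))    # almost certainly False
```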

                                                                                1. 4

                                                                                  postgresql provides indexing for json objects, so it makes a lot of sense to use it even for this kind of use case. Even sqlite has some json support these days.
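For instance, with SQLite's JSON1 functions you can put an expression index on a JSON property; a minimal sketch (the table and fields here are hypothetical, not from any real AP server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (id TEXT PRIMARY KEY, data TEXT)")
conn.execute(
    "INSERT INTO objects VALUES (?, ?)",
    ("https://example.social/notes/1",
     '{"type": "Note", "attributedTo": "https://example.social/users/alice"}'),
)
# An expression index on a JSON property avoids scanning every stored object.
conn.execute(
    "CREATE INDEX idx_attributed "
    "ON objects (json_extract(data, '$.attributedTo'))"
)
rows = conn.execute(
    "SELECT id FROM objects "
    "WHERE json_extract(data, '$.attributedTo') = ?",
    ("https://example.social/users/alice",),
).fetchall()
print(rows)  # [('https://example.social/notes/1',)]
```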

                                                                              2. 2

                                                                                I am not convinced about storing tons of small files individually; they are usually less than 1 KB, so each one will waste 75% of a 4 KB block, and you will also run out of inodes pretty quickly if your fs is not tuned for tons of small files.
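The waste figure checks out with simple arithmetic (assuming a 4 KiB block size and ~1 KiB objects):

```python
# Space wasted when each small JSON object occupies a whole filesystem block.
block_size = 4096   # typical filesystem block size, bytes
object_size = 1024  # typical small ActivityPub object, ~1 KiB

wasted = block_size - object_size
print(wasted / block_size)  # 0.75 — 75% of each block is slack
```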

                                                                                1. 3

                                                                                  inodes are a legacy filesystem problem. Use ZFS :)

                                                                                  1. 1

                                                                                    The idea behind federation would be that most instances would have a small number of users with small local storage needs.

                                                                                2. 1

                                                                                  Not really for recent releases; you need at least 512 MB for a stable instance. Pleroma itself uses <200 MB of RAM, and PostgreSQL can use another 200 MB, depending on your configuration.

                                                                              3. 10

                                                                                Total RSS for my Pleroma instance on Arch x86_64 (which is extremely lightly used) is ~115MB. There’s a bunch of other RSS being used by the Postgres connections but that’ll depend on your precise configuration.

                                                                                For comparison, my honk instance on Arch armv7l is using 17 MB (but it is admittedly bare-bones compared to Pleroma).

                                                                                1. 2

                                                                                  How is honk working for you? Do you want to share a link to your instance? I’ve been considering installing it myself. It seems cool, but the only honk deployment I’ve seen in the wild is the developer’s. If we’re talking about saving resources, honk seems to be better for that than Pleroma :)

                                                                                  1. 3

                                                                                    I run it for my single user instance. Haven’t tried upgrading since I installed it.

                                                                                    It generally works as expected, if a little rough - I edited a bunch of the default templates and found the terminology a little obtuse, and threads where some replies are private don’t show any indication which can be a bit confusing.

                                                                                    I may set up Pleroma at some point, as I would like the extra features, but I might never get around to it because honk is so trouble-free and works alright.

                                                                                    1. 2

                                                                                      Pretty well - I just run the binary in a screen session on one of my servers for now.

                                                                                      https://honk.rjp.is/ - mainly using it as a publish-on-your-own-space feeder for tweets via IFTTT.

                                                                                      1. 3

                                                                                        Have you looked into crossposting using one of the open source crossposters?

                                                                                        I’m assuming that they won’t work because honk has fewer features than Mastodon, but I don’t actually know.

                                                                                        1. 2

                                                                                          I did try moa for a while but the link [from moa to twitter] kept disappearing for some reason - I did intend to self-host it but never got around to it. IFTTT is easier for now and if I want to shift off IFTTT, I’ve already got “RSS to Twitter” code for other things I can easily repurpose.

                                                                                          [edited to clarify “link”]

                                                                                  2. 4

                                                                                    Fwiw, it’s a bit over 300 MB on my (single user) instance.

                                                                                    1. 3

                                                                                      I still think that 300 MB is a lot, especially when cheaper VPSes can have only 500 MB of RAM.

                                                                                      1. 3

                                                                                        In fairness, 512 mb is a ridiculously low amount of memory.

                                                                                        Nowadays it’s possible to configure a physical system with 128c/256t and literally terabytes of ram and we’re still paying 2013 prices for 1c/512mb VPS instances.

                                                                                        Think about this.

                                                                                        1. 1

                                                                                          I’ve been mostly using (and recommending) the smallest Hetzner VPS instances, which have 2 GB of RAM and cost just under 3 euros per month. Although, looking at LowEndBox, I see that you can get a 1 GB VPS for $1 per month.

                                                                                  1. 6

                                                                                    So I am growing increasingly dissatisfied with even the minimal utility I get out of Facebook (sharing pictures of my kids with family members) and have thought about standing up a fediverse instance and trying to use that, instead. The problem for me is with the clients – how are they, for elderly parents with a strong interest in granddaughters and none at all with fiddling with technology? How is the fediverse for non-nerds?

                                                                                    1. 10

                                                                                      You should absolutely not share pictures of your kids on ActivityPub. Depending on the software you are using, it’s either more like Twitter or more like blogging software such as WordPress (WordPress actually has an AP plugin, and dedicated blogging instance software like WriteFreely and Plume exists and rocks).

                                                                                      Can’t speak for the current state of Diaspora, but THAT is what you are looking for.

                                                                                      1. 5

                                                                                        The Fediverse, in general, is closer to Twitter than it is to Facebook. That being said, Tusky for Android and Mast for iOS are both (IMO) better than the Twitter client on their respective platforms: incredibly polished and intuitive. There are a number of high-quality clients, but these are the two that I’ve personally settled on.

                                                                                        1. 3

                                                                                          Interesting. I wonder if there is anything that is more of a Facebook replacement.

                                                                                          1. 4

                                                                                            Diaspora perhaps?

                                                                                            1. 3

                                                                                              Yeah, Diaspora looks more Facebook-like. I don’t think it’s quite as popular as the Fediverse, and it’s not built on ActivityPub to my knowledge.

                                                                                              If you’re into something more Instagram-like (i.e. photo sharing), there’s always Pixelfed as well, which is part of the Fediverse.

                                                                                              1. 3

                                                                                                ActivityPub. And no, it’s not a great protocol for a Facebook-like; the existing projects are all fairly nascent and have been struggling with follower/friend mechanics.

                                                                                            2. 3

                                                                                              Friendica could be interesting for you.

                                                                                              1. 3

                                                                                                Thanks, I’ll check it out. I’ve always wondered about building something like Facebook but focussed on the needs of families, particularly families with small kids. Yeah, it would never be a billion dollar thing, but nowadays it seems like there might be an appetite for something less crap than Facebook.

                                                                                                1. 2

                                                                                                  For the social network, I always thought that the Google circles concept was way better. I wonder if there are any successors to that concept.

                                                                                            3. 2

                                                                                              I don’t use iOS so I haven’t been able to try it myself, but I just want to mention that Mast does still seem to be open source; the repo is here: https://github.com/ShihabMe/Mast2. It’s annoyingly difficult to find that repo; I found the link on the developer’s Mastodon account. Many open source projects don’t adequately advertise their open source nature, which is a source of significant frustration for me.

                                                                                              1. 1

                                                                                                And the guy is doing a lot of support directly on mastodon.

                                                                                          1. 23

                                                                                            I just want to say that I am excited about Rust, it’s gotten me interested in programming again, and the tools on this list have gotten me excited about the command line for the first time in years. I recently switched back to Linux from macOS after more than a decade of loyalty to Apple, in part because I’m currently more excited about projects like these than I am about Apple’s GUIs.

                                                                                            @sharkdp, @burntsushi, and everyone else working in this space, thank you for your efforts, and don’t let the haters get you down.

                                                                                            1. 18

                                                                                              Much appreciated. :-)

                                                                                              1. 14

                                                                                                Thank you for the feedback!

                                                                                              1. 1

                                                                                                The mrustc project is an experimental Rust compiler that emits C code.

                                                                                                Pretty cool stuff. Doesn’t Zig also compile to C?

                                                                                                1. 5

                                                                                                  It’s the opposite, Zig is a C compiler. https://ziglang.org/#Zig-is-also-a-C-compiler

                                                                                                  1. 1

                                                                                                    In addition, a C backend is in progress for the self-hosted compiler.

                                                                                                  2. 2

                                                                                                    No, Zig does not compile to C.

                                                                                                    1. 1

                                                                                                      OK, ok, it is nim =)

                                                                                                    1. 25

                                                                                                      Annoyingly, it’s not clear from the URL that this is a Medium link. Medium links should be banned. Not only is Medium’s entire site a bloated frustrating design mistake, their paywall is impassible. (The “cached” link doesn’t have anything saved.) I’m not going to pay for a subscription to something just because someone posted a link to it on Lobste.rs.

                                                                                                      In fact, one of my criteria for paying for journalism is that it not have a paywall! I bought Low←Tech Magazine’s print on demand books and put in a donation to their LiberaPay, for instance.

                                                                                                      1. 9

                                                                                                        https://outline.com/wgByTm

                                                                                                        Outline link for medium article. This seems to be free of bloat.

                                                                                                        1. 5

                                                                                                          mhmm, the submitted link is a 302 redirect to the article, which is at https://onezero.medium.com/todays-webcams-are-boring-so-i-brought-back-a-classic-291cc7c94c76 . dunno if it’s possible to change the submission to the actual article link, but it would be nice.

                                                                                                          1. 5

                                                                                                            This is off-topic, but here you go: There was a thread on https://lobste.rs/s/bykzkm/hide_medium_com_as_personal_filter_on and apparently, you’re not the only one with strong feelings. But it resolved differently back then. You may use the linked userscript or try a writing a patch for lobste.rs.

                                                                                                            1. 0

                                                                                                              their paywall is impassible

                                                                                                              laughs in uMatrix

                                                                                                              Low-Tech Magazine is cool though, didn’t know they had a LiberaPay.

                                                                                                              1. 2

                                                                                                                I’ve never had an issue with Ublock Origin set to block javascript, either. Credit at least to Medium for sending the article within the HTML.

                                                                                                            1. 4

                                                                                              By using a static site generator, you sacrifice a certain amount of control, for example over link properties.

                                                                                                              Shameless plug: and that is why I’m making an SSG that gives you full control over the element tree without JS. ;) https://soupault.neocities.org/plugins/#safe-links

                                                                                                              Nothing against Zola though, it’s a good project.

                                                                                                              1. 2

                                                                                                                Sorry if this is a stupid question, but why would you want safe links on a static site generator? I thought things like nofollow were for comment sections, wikis and other places where you let random members of the public post stuff. If only approved people are posting, why would you need these? Just to make you less of an attractive target for hackers?

                                                                                                                1. 1

                                                                                                                  Well, there’s a few reasons that I can think of:

                                                                                                                  • Good link etiquette
                                                                                                                  • If I add forms for comments or email subscriptions in the future (which is definitely planned), I don’t want people hijacking my window when somebody opens a tab
                                                                                                                  • I want external links to open in new tabs

                                                                                                                  I’m not sure if it’s really that important, but it makes me feel good and it’s what I’ve always done.

                                                                                                                  1. 2

                                                                                                    noopener on target="_blank" links I understand, because it’s a security measure, but nofollow just tells search engines not to give weight to your links, doesn’t it? Wouldn’t that be bad link etiquette, since you would be hurting the sites you link to?

                                                                                                                    1. 1

                                                                                                      I generally don’t use nofollow (I actually think I’ve never used it); I guess I misunderstood the question. noopener and noreferrer are the most common ones I use, and I tend to throw them on every external link when possible.

                                                                                                                2. 1

                                                                                                                  That’s awesome! Currently taking a break from rewriting my site (gotta let it sit for like… a week? Is that healthy?), but I’ll keep that in mind. Looking at some Github issues for Zola, it seems like they’re trying to work this out, but for the time being I’m going to have to deal with raw <a href="">s. :(

                                                                                                                  1. 1

                                                                                                                    Also a shameless plug, but http://mkws.sh/, not sure if there is any sacrifice in there.

                                                                                                                  1. 2

                                                                                                                    What open source alternatives are there to LiveShare? I’ve become rather reliant on LiveShare for pair programming during the pandemic, I had never checked whether each extension I was using was open source.

                                                                                                                    Teletype for Atom appears to be under an MIT license, and it seemed to work fine for me before I switched to VS Code (after M$ acquired Github and I figured Atom was doomed). https://github.com/atom/teletype

                                                                                                                    1. 2

                                                                                                                      I’ve heard lots of good things about tmate but it’s terminal only which may or may not be what you want.

                                                                                                                      At the risk of incurring the wrath of the purists, a question you may wish to ask yourself is - “Why does it matter?”

                                                                                                                      1. 2

                                                                                                                        I’ve heard good things about floobits.com for editor-agnostic shared coding. The plugins are open source, but it’s a company that wants you to pay monthly if you’re a large company.

                                                                                                                      1. -5
                                                                                                                        1. 10

                                                                                                                          This kind of comment tends to lead to flamewars. If you can’t comment on the technical merit of a posting, at least try and refrain from leaving out the internet equivalent of oily rags.

                                                                                                                          1. 10

                                                                                                                            Yes, their black lives matter, pro-lgbt, and anti-fascism messaging on the home page seem like very toxic traits to me /s

                                                                                                                            1. 2

                                                                                                              I don’t see any pro lgbt messaging, but come on, surely you have to admit that this large image of a noose would rightly make some people feel uncomfortable: http://9front.org/img/9noose01.png

                                                                                                                              The black lives matter imagery is recent, and I applaud it but if they want people to not think they’re toxic, they have to do a lot more than that.

                                                                                                                              1. 4

                                                                                                                I don’t see any pro lgbt messaging

                                                                                                                                Well the page icon is a rainbow.

                                                                                                                                surely you have to admit that this large image of a noose would rightly make some people feel uncomfortable.

                                                                                                                To me it most likely seems to be a self-deprecating joke though. More like use 9front at your own peril. I think that joke would probably be better served by some sort of foot-gun apparatus.

                                                                                                                                they have to do a lot more than that.

                                                                                                                                Like what exactly?

                                                                                                                                1. 4

                                                                                                                                  Oh nice, didn’t notice the favicon as I’m on mobile.

                                                                                                                                  There are many interpretations of this image that are more charitable. But the imagery is poignant and I can understand why people consider it toxic.

                                                                                                                                  For a start, they could remove the imagery that makes people feel uncomfortable.

                                                                                                                                  I bring this up every time someone brings up 9front because there’s always two camps of people: one group that says “wow, 9front has a lot of really weird imagery that makes me uncomfortable,” and another group that says “I don’t understand, these are just jokes, what’s the problem?”

                                                                                                                                  I don’t want to have to comment and explain these things, I’d much rather talk about the technical merits of 9front, which is indeed a very interesting project, and the people behind it are clearly very technically competent and have ideas worth sharing. But the fact of the matter is that I can agree with both groups of people. These are clearly jokes, but they can make people uncomfortable. I personally like 9front, it’s a very interesting project, and I like it because I like plan9.

                                                                                                                  But it’s also very strange to me that people are fundamentally unable to understand why this makes others uncomfortable. From my perspective, that is a form of privilege. It indicates, for example, that you are okay with noose imagery in a project. That’s totally fine, but understand that a lot of people are going to see that image and be extremely uncomfortable. And there’s not a whole lot of context on that page to explain what exactly the joke is.

                                                                                                                  An image on this page uses a Fraktur font, which was used pretty heavily in Nazi Germany. Again, I may be reading too much into it, but for some people it’s going to be evocative, and if you go on the Wayback Machine this stuff has been there far longer than the anti-fascist banners. And hey, the image I’m describing even says that 9front is in the “trolling business.” Surely it’s understandable that a non-zero percentage of people look at this image and feel that maybe 9front could be problematic. It’s not each person’s responsibility to do enough research into in-jokes to be able to understand them, and they might not want such complex jokes being so close to technical discussions.

                                                                                                                                  Edit: there is also this photo of the entrance to Auschwitz. Yeah it’s a joke but maybe some people don’t want to see this in a faq? http://fqa.9front.org/rails.jpg

                                                                                                                                  Edit 2: when I say “you” above, I’m not referring to the parent comment directly. I don’t want to come off as accusatory as that is not my intention

                                                                                                                                  1. 1

                                                                                                                                    Thanks for taking the time to comment, you have articulated my feelings about this well.

                                                                                                                    9Front may be a very good project technically, but it’s not worth my time to try to look past the (in my opinion) juvenile and puerile in-jokes to ascertain that. I’m sure the project is fine with this, as it’s not for everyone. But I do honestly believe they are losing possible contributors by having some of these images on their site.

                                                                                                                            2. 6

                                                                                                                              Some people have a weird sense of humor, but I look past that and see the merit in the code.

                                                                                                                              1. 2

                                                                                                                It looks like this is what happens when a bunch of people make too many serious jokes: the jokes are taken too seriously.

                                                                                                                                1. 2

                                                                                                                                  Can you please be explicit about what you find toxic about this “propaganda” page? As someone dedicated to nonviolence I don’t love some of the militaristic imagery, but mostly I’m puzzled/baffled by this memery rather than offended.

                                                                                                                                  1. 1

                                                                                                                                    How about this image of a noose surrounding a nine? http://9front.org/img/9noose01.png

                                                                                                                                    1. 4

                                                                                                                                      I interpret that as “enough rope to hang yourself,” which is entirely in keeping with the 9front community perspective.

                                                                                                                                      1. 2

                                                                                                                        If the plan 9 original authors “left the flying plane”, keeping the plane flying without its pilot could be expressed this way, but that would be too far-fetched to be a realistic interpretation; absurdity rather than logic emanates from this picture for me too. Was that the point? I’ll probably never know.

                                                                                                                                        1. 0

                                                                                                                                          Yeah OK somehow I missed that one when scrolling through, that doesn’t seem great.

                                                                                                                                        2. 0
                                                                                                                                          1. 11

                                                                                                                                            Isn’t the entire point of the last image that the one guy who isn’t saluting is circled, implying that 9front is against Nazis if anything?

                                                                                                                                            1. 3

                                                                                                                                              Indeed, it seems more like a protest. Wasn’t the Reichstag fire a protest too?

                                                                                                                                              1. 4

                                                                                                                it was used to strike at the communists, who were blamed for the fire, and it may have been staged by the Nazis for that purpose; afaik the sources aren’t conclusive.

                                                                                                                                        3. 2

                                                                                                                                          i clicked on the comments expecting outrage, i wasn’t disappointed. 10/10 would click again.

                                                                                                                                          once the rockets are up, who cares where they come down.

                                                                                                                                          1. 1

                                                                                                                                            I don’t get it, can you explain your comment?

                                                                                                                                            1. 2

                                                                                                                                              http://9front.org/img/9germanengineering01.png

                                                                                                                                              once the rockets are up, who cares where they come down.

                                                                                                                                              https://www.youtube.com/watch?v=TjDEsGZLbio

                                                                                                                                              1. 1

                                                                                                                                                Thanks for the link, I’m a fan of Lehrer (who I learned today is still alive, according to Wikipedia).

                                                                                                                                        1. 33

                                                                                                                                          A long time ago I was a huge fan of a certain programmer¹ who had a lot of technical prowess and put TONS of angry rants that I thought were funny in their code. Bashing other tech companies’ engineers in a funny way while writing awesome code seemed to be a great thing to aspire to. One day I decided to contribute a one-line patch to this programmer’s side project which was on their self-hosted git instance. Not wanting to go through the annoying process of setting up git to work with gmail for a one-line patch, I just typed the diff into what I thought was a decent and friendly email. I received no response and forgot about it.

                                                                                                                                          A few months later the programmer, not realizing (or caring) that I followed them on twitter wrote a pointed angry rant about idiots who could only use github and not send in a properly formatted git patch. This was followed by a tweet asking a question about the thing my patch aimed to fix, meaning that the previous rant was almost certainly about me. Suddenly, all the programmer’s funny angry rants weren’t so funny anymore, and just made this programmer seem like… a jerk. They are now the CEO of a successful startup. Am I going to apply to work for them? Probably not. Am I going to use their product were I to need its features? Maybe, but with hesitation.

                                                                                                                                          The whole reason I’m telling this story is to remind people that it’s possible to be opinionated without being a jerk, and that it’s bad to celebrate when people voice good opinions in a jerk-like way². The programmer in question could have emailed me back asking for a properly formatted git patch, privately complained to their friends, or expressed their frustration on twitter in a way that wasn’t mean. I think the last part is the most important – rants have their place to vent frustration (and clearly Drew is very frustrated) or even change people’s minds, but have a different tone entirely when one of the core aspects of a rant is “I am smarter and more ethical than these other people.”

                                                                                                                                          I hope Drew’s concerns get addressed, but I also hope he is able to voice them more constructively in the future.

                                                                                                                                          ¹ Left unnamed, but not @ddevault

                                                                                                                          ² See Linus Torvalds, for example

                                                                                                                                          1. 16

                                                                                                                            Not to pile on, but this is the reason I unfollowed Drew on fosstodon. His angry rants become very pointed and are sometimes on the brink of harassment.

                                                                                                                                            1. 5

                                                                                                                              I didn’t even have to do that; he blocked me after insulting me. I still like to read his posts, but more for the spectacle than for objective opinions, really.

                                                                                                                                            2. 5

                                                                                                                                              rants have their place to vent frustration (and clearly Drew is very frustrated)

                                                                                                                                              I think this is key. Drew seems to be generally angry and frustrated with the state of the world, technology, ethics of technologists, etc. And he’s right to be angry, a lot of stuff sucks! Anger can be a good/useful emotion, if it drives you to change shitty situations instead of accepting them, and that’s something I admire about Drew, his refusal to take the status quo for granted and his ability to follow through on groundbreaking projects. (I’m rooting for Sourcehut because it isn’t a clone of Github, it has a different philosophy and design goals, whereas many open source products look like inferior knockoffs of some proprietary progenitor.) Maybe he wouldn’t be making the kind of software+projects he does if he weren’t so angry. But, like most things, anger can be toxic when taken to extremes or misdirected, and it can drive people away.

                                                                                                                                              Whenever I introduce one of Drew’s projects like Sourcehut to my friends/colleagues, or forward a post of his, I have to add the disclaimer that Drew is kind of a dick, even though I tend to agree with him on many things. At some point his anger may do more damage to his projects by preventing people from using or contributing to them than whatever motivational benefits his anger provides to his personal productivity.

                                                                                                                                              1. 4

                                                                                                                                                One of the effects of having some of my own posts being posted on sites like Reddit and Hacker News a few times is that I’ve become a lot more careful in how I comment on other people’s posts. In general, I strive to write anything public as if the author of the post would actually read it. In quite a few cases, they probably do.

                                                                                                                                                It’s easy to think you’re oh-so-witty-and-funny with your snarky sarcastic takes or whatever; but in reality it’s usually not, especially not over the internet.

                                                                                                                                                I’ll readily admit I don’t always succeed at this though; but I try. I sometimes rant a bit to my friends or girlfriend in private, which has the same frustration-venting effect and doesn’t upset anyone. Just because someone is wrong, or because you strongly disagree with them, isn’t a reason to be a jerk to them.

                                                                                                                                              1. 7

                                                                                                                                                https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines but seriously, I don’t see this taking off. Open source OSs can take on Microsoft with enough coders because it’s just software - hardware is a very different business. I wish it could happen, but it’s very doubtful IMHO.

                                                                                                                                                1. 31

                                                                                                                                                  Depends on what you mean by ‘taking off’. RISC-V has successfully killed a load of in-house ISAs (and good riddance!). For small control-plane processors, you don’t care about performance or anything else much, you just want a cheap Turing-complete processor with a reasonably competent C compiler. If you don’t have to implement the C compiler, that’s a big cost saving. RISC-V makes a lot of sense for things like the nVidia control cores (which exist to set up the GPU cores and do management things that aren’t on the critical path for performance). It makes a lot of sense for WD to use instead of ARM for the controllers on their SSDs: the ARM license costs matter in a market with razor-thin margins, power and performance are dominated by the flash chips, and they don’t need any ecosystem support beyond a bare-metal C toolchain.

                                                                                                                                                  The important lesson for RISC-V is why MIPS died. MIPS was not intended as an open ISA, but it was a de-facto one. Aside from LWL / LWR, everything in the ISA was out of patent. Anyone could implement an almost-MIPS core (and GCC could target MIPS-without-those-two-instructions) and many people did. Three things killed it in the market:

                                                                                                                                                  First, fragmentation. This also terrifies ARM. Back in the PDA days, ARM handed out licenses that allowed people to extend the ISA. Intel’s XScale series added a floating-point extension called Wireless MMX that was incompatible with the ARM floating point extension. This cost a huge amount for software maintenance. Linux, GCC, and so on had to have different code paths for Intel vs non-Intel ARM cores. It doesn’t actually matter which one was better, the fact both existed prevented Linux from moving to a hard-float ABI for userland for a long time: the calling convention passed floating-point values in integer registers, so code could either call a soft-float library or be compiled for one or the other floating-point extensions and still interop with other libraries that were portable across both. There are a few other examples, but that’s the most painful one for ARM. In contrast, every MIPS vendor extended the ISA in incompatible ways. The baseline for 64-bit MIPS is still often MIPS III (circa 1991) because it’s the only ISA that all modern 64-bit MIPS processors can be expected to handle. Vendor extensions only get used in embedded products. RISC-V has some very exciting fragmentation already, with both a weak memory model and TSO: the theory is that TSO will be used for systems that want x86 compatibility, the weak model for things that don’t, but code compiled for the TSO cores is not correct on weak cores. There are ELF header flags reserved to indicate which is which, but it’s easy to compile code for the weak model, test it on a TSO core, see it work, and have it fail in subtle ways on a weak core. That’s going to cause massive headaches in the future, unless all vendors shipping cores that run a general-purpose OS go with TSO.

                                                                                                                                                  Second, a modern ISA is big. Vector instructions, bit-manipulation instructions, virtualisation extensions, two-pointer atomic operations (needed for efficient RCU and a few other lockless data structures) and so on. Dense encoding is really important for performance (i-cache usage). RISC-V burned almost all of their 32-bit instruction space in the core ISA. It’s quite astonishing how much encoding space they’ve managed to consume with so few instructions. The C extension consumes all of the 16-bit encoding space and is severely over-fitted to the output of an unoptimised GCC on a small corpus of C code. At the moment, every vendor is trampling over all of the other vendors in the last remaining bits of the 32-bit encoding space. RISC-V really should have had a 48-bit load-64-bit-immediate instruction in the core spec to force everyone to implement support for 48-bit instructions, but at the moment no one uses the 48-bit space and infrequently used instructions are still consuming expensive 32-bit real-estate.
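                                                                                                                                  The missing wide-immediate format is easy to illustrate (a hand-written sketch, not assembler output):

                                                                                                                                  ```
                                                                                                                                  # RV64 today: even a 32-bit constant takes two 32-bit instructions:
                                                                                                                                  lui   a0, 0x12345        # a0 = 0x12345000 (upper 20 bits)
                                                                                                                                  addi  a0, a0, 0x678      # a0 = 0x12345678 (low 12 bits)
                                                                                                                                  # An arbitrary 64-bit constant takes roughly six instructions
                                                                                                                                  # (lui/addi/slli chains) or a PC-relative load from a constant
                                                                                                                                  # pool, whereas x86-64 does it in one: movabs rax, imm64.
                                                                                                                                  # A 48-bit load-immediate in the core spec would cover the common
                                                                                                                                  # cases in a single instruction and force 48-bit decode support.
                                                                                                                                  ```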

                                                                                                                                  Third, the ISA is not the end of the story. There’s a load of other stuff (interrupt controllers, DMA engines, management interfaces, and so on) that need to be standardised before you can have a general-purpose compute platform. Porting an OS to a new ARM SoC used to be a huge amount of effort because of this. It’s now a lot easier because ARM has standardised a lot of this. x86 had some major benefits from Compaq copying IBM: every PC had a compatible bootloader that provided device enumeration and some basic device interfaces. You could write an OS that would access a disk, read from a keyboard, and write text to a display for a PC that would run on any PC (except the weird PC98 machines from Japan). After early boot, you’d typically stop doing BIOS thunks and do proper PCI device enumeration and load real drivers, but that baseline made it easy to produce boot images that ran on all hardware. The RISC-V project is starting to standardise this stuff but it hasn’t been a priority. MIPS never standardised any of it.

                                                                                                                                  The RISC-V project has had a weird mix from the start of explicitly saying that it’s not a research project and wants to be simple and also depending on research ideas. The core ISA is a fairly mediocre mid-90s ISA. It’s fine, but turning it into something that’s competitive with modern x86 or AArch64 is a huge amount of work. Some of those early design decisions are going to need to either be revisited (breaking compatibility) or are going to incur technical debt. The first RISC-V spec was frozen far too early, with timelines largely driven by PhD students needing to graduate rather than the specs actually being in a good state. Krste is a very strong believer in micro-op fusion as a solution to a great many problems, but if every RISC-V core needs to be able to identify 2-3 instruction patterns and fuse them into a single micro-op to do operations that are a single instruction on other ISAs, that’s a lot of power and i-cache being consumed just to reach parity. There’s a lot of premature optimisation (e.g. instruction layouts that simplify decoding on an in-order core) that hurts other things (e.g. uses more encoding space than necessary), where the saving is small and the cost will become increasingly large as the ISA matures.

                                                                                                                                                  AArch64 is a pretty well-designed instruction set that learns a lot of lessons from AArch32 and other competing ISAs. RISC-V is very close to MIPS III at the core. The extensions are somewhat better, but they’re squeezed into the tiny amount of left-over encoding space. The value of an ecosystem with no fragmentation is huge. For RISC-V to succeed, it needs to get a load of the important extensions standardised quickly, define and standardise the platform specs (underway, but slow, and without enough of the people who actually understand the problem space contributing, not helped by the fact that the RISC-V Foundation is set up to discourage contributions), and get software vendors to agree on those baselines. The problem is that, for a silicon vendor, one big reason to pick RISC-V over ARM is the ability to differentiate your cores by adding custom instructions. Every RISC-V vendor’s incentives are therefore diametrically opposed to the goals of the ecosystem as a whole.

                                                                                                                                                  1. 3

                                                                                                                                                    Thanks for this well laid out response.

                                                                                                                                                    The problem is that, for a silicon vendor, one big reason to pick RISC-V over ARM is the ability to differentiate your cores by adding custom instructions. Every RISC-V vendor’s incentives are therefore diametrically opposed to the goals of the ecosystem as a whole.

This is part of what makes me skittish, as well. I almost prefer the ARM model, which keeps a lid on fragmentation, to RISC-V's "linux distro" model. But also, deep down, if we manage to create the tooling for binaries to adapt to something like this and have a form of Universal Binary that progressively enhances with present CPUIDs, that would make for an exciting space.

                                                                                                                                                    1. 6

                                                                                                                                                      But also, deep down, if we manage to create the tooling for binaries to adapt to something like this and have a form of Universal Binary that progressively enhances with present CPUIDs, that would make for an exciting space.

Apple has been pretty successful at this, encouraging developers to distribute LLVM IR so that they can do whatever microarchitectural tweaks they want for any given device. Linux distros could do something similar if they weren't so wedded to GCC, and FreeBSD could if it had more contributors.

                                                                                                                                                      You can’t do it with one-time compilation very efficiently because each vendor has a different set of extensions, so it’s a combinatorial problem. The x86 world is simpler because Intel and AMD almost monotonically add features. Generation N+1 of Intel CPUs typically supports a superset of generation N’s features (unless they completely drop something and are never bringing it back, such as MPX) and AMD is the same. Both also tend to adopt popular features from the other, so you have a baseline that moves forwards. That may eventually happen with RISC-V but the scarcity of efficient encoding space makes it difficult.
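A rough sketch of what that moving baseline enables on x86: a single binary can probe CPU features once and dispatch accordingly. `__builtin_cpu_supports` is the GCC/Clang feature probe; the function names here are hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

static uint64_t sum_scalar(const uint32_t *v, size_t n) {
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++) s += v[i];
    return s;
}

/* One binary, progressive enhancement: because the x86 feature set
 * grows almost monotonically, a couple of run-time probes cover
 * every generation. */
uint64_t sum(const uint32_t *v, size_t n) {
#if defined(__x86_64__)
    if (__builtin_cpu_supports("avx2")) {
        /* an AVX2 kernel would go here; the scalar one stands in
         * for this sketch */
        return sum_scalar(v, n);
    }
#endif
    return sum_scalar(v, n);
}
```

On RISC-V, with each vendor shipping a different extension set, the number of variants to compile and test grows combinatorially instead of linearly.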

                                                                                                                                                      On the other hand, if we enter Google’s dystopia, the only AoT-compiled code will be Chrome and everything else will be JavaScript and WebAssembly, so your JIT can tailor execution for whatever combination of features your CPU happens to have.

                                                                                                                                                      1. 1

Ultimately, vendor extensions are just extensions. Suppose a CPU is RV64GC plus proprietary extensions: RV64GC code would still work on it.

                                                                                                                                                        This is much, much better than the alternative (vendor-specific instructions implemented without extensions).

                                                                                                                                                      2. 2

                                                                                                                                                        Vendor extensions only get used in embedded products. RISC-V has some very exciting fragmentation already, with both a weak memory model and TSO: the theory is that TSO will be used for systems that want x86 compatibility, the weak model for things that don’t, but code compiled for the TSO cores is not correct on weak cores. There are ELF header flags reserved to indicate which is which, but it’s easy to compile code for the weak model, test it on a TSO core, see it work, and have it fail in subtle ways on a weak core. That’s going to cause massive headaches in the future, unless all vendors shipping cores that run a general-purpose OS go with TSO.

                                                                                                                                                        I don’t understand why they added TSO in the first place.

Third, the ISA is not the end of the story. There's a load of other stuff (interrupt controllers, DMA engines, management interfaces, and so on) that needs to be standardised before you can have a general-purpose compute platform. Porting an OS to a new ARM SoC used to be a huge amount of effort because of this. It's now a lot easier because ARM has standardised a lot of this. x86 had some major benefits from Compaq copying IBM: every PC had a compatible bootloader that provided device enumeration and some basic device interfaces. You could write an OS that would access a disk, read from a keyboard, and write text to a display for a PC that would run on any PC (except the weird PC98 machines from Japan). After early boot, you'd typically stop doing BIOS thunks and do proper PCI device enumeration and load real drivers, but that baseline made it easy to produce boot images that ran on all hardware. The RISC-V project is starting to standardise this stuff but it hasn't been a priority. MIPS never standardised any of it.

                                                                                                                                                        Yeah this part bothers me a lot. It looks like a lot of the standardization effort is just whatever OpenRocket does, but almost every RISC-V cpu on the market right now has completely different peripherals outside of interrupt controllers. Further, there’s no standard way to query the hardware, so creating generic kernels like what is done for x86 is effectively impossible. I hear there’s some work on ACPI which could help.

                                                                                                                                                        1. 7

                                                                                                                                                          I don’t understand why they added TSO in the first place.

Emulating x86 on weakly ordered hardware is really hard. Several companies have x86-on-ARM emulators. They either only work with a single core, insert far more fences than are actually required, or fail subtly on concurrent data structures. It turns out that after 20+ years of people trying to implement TSO efficiently, there are some pretty good techniques. They don't sacrifice much performance relative to software that correctly inserts the fences, and they perform a lot better on the software a lot of people actually write, where fences are inserted defensively because that's easier than understanding the C++11 memory model.
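The hazard can be sketched with the classic message-passing idiom: with no explicit ordering this "happens to work" under TSO, which never reorders stores with stores, but it is a real bug under a weak model like RVWMO unless the release/acquire pair is present, and an emulator can't easily tell which loads and stores need it:

```c
#include <stdatomic.h>

int payload;
atomic_int ready;

void producer(void) {
    payload = 42;
    /* A relaxed store here passes every test on a TSO core, which
     * won't reorder it ahead of the payload store, but is a bug on
     * a weak core: release ordering is what makes it portable. */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void) {
    /* Acquire pairs with the release above; with relaxed ordering a
     * weak core could observe ready == 1 but a stale payload. */
    if (atomic_load_explicit(&ready, memory_order_acquire))
        return payload;
    return -1;   /* message not published yet */
}
```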

                                                                                                                                                          Yeah this part bothers me a lot. It looks like a lot of the standardization effort is just whatever OpenRocket does, but almost every RISC-V cpu on the market right now has completely different peripherals outside of interrupt controllers. Further, there’s no standard way to query the hardware, so creating generic kernels like what is done for x86 is effectively impossible. I hear there’s some work on ACPI which could help.

                                                                                                                                                          Initially they proposed their own thing that was kind-of like FDT but different, because Berkeley. Eventually they were persuaded to use FDT for embedded things and something else (probably ACPI) for more general-purpose systems.

                                                                                                                                                          The weird thing is that Krste really understands the value of an interoperable ecosystem. He estimates the cost of building it at around $1bn (ARM thinks he’s off by a factor of two, but either way it’s an amount that the big four tech companies could easily spend if it were worthwhile). Unfortunately, the people involved with the project early were far more interested in getting VC money than in trying to build an open ecosystem (and none of them really had any experience with building open source communities and refused help from people who did).

                                                                                                                                                          1. 2

                                                                                                                                                            Are the Apple and Microsoft emulators on the “far more fences than are actually required” side? They don’t seem to have many failures..

                                                                                                                                                            1. 2

                                                                                                                                                              I don’t know anything about the Apple emulator and since it runs only on Apple hardware, it’s entirely possible that either Apple’s ARM cores are TSO or have a TSO mode (TSO is strictly more strongly ordered than the ARM memory model, so it’s entirely conformant to be TSO). I can’t share details of the Microsoft one but you can probably dump its output and look.

                                                                                                                                                          2. 2

                                                                                                                                                            there’s no standard way to query the hardware, so creating generic kernels like what is done for x86 is effectively impossible

                                                                                                                                                            Well, device trees (FDT) solve the “generic kernel” problem specifically, but it all still sucks. Everything is so much better when everyone has standardized most peripherals.

                                                                                                                                                            1. 1

That's the best solution, but you still have to have the bootloader pass in a device tree, and that device tree won't get updated at the same cadence as the kernel (so if someone finds a bug in a device tree, the fix may take a while to reach users).

                                                                                                                                                              1. 2

                                                                                                                                                                For most devices it’s the kernel that maintains the device tree. FDT is not really designed for a stable description, it changes with the kernel’s interface.

                                                                                                                                                                1. 2

FDT is not specific to a kernel. The same FDT blobs work with FreeBSD and Linux, typically. It's just a description of the devices and their locations in memory. It doesn't need to change unless the hardware changes, and if you're on anything that's not deeply embedded it's often shipped with U-Boot or similar and provided to the kernel. The kernel then uses it to find any devices it needs in early boot or which are attached to the core via interfaces that don't support dynamic enumeration (e.g. you would put the PCIe root complex in FDT but everything on the bus is enumerated via the bus).

                                                                                                                                                                  The reason for a lot of churn recently has been the addition of overlays to the FDT spec. These allow things that are equivalent to option roms to patch the root platform’s FDT so you can use FDT for expansions connected via ad-hoc non-enumerable interfaces.

                                                                                                                                                                  1. 2

                                                                                                                                                                    It doesn’t need to change.. but Linux developers sometimes like to find “better” ways of describing everything, renaming stuff, etc. To be fair in 5.x this didn’t really happen all that much.

                                                                                                                                                                    And of course it’s much worse if non-mainline kernels are introduced. If there’s been an FDT for a vendor kernel that shipped with the device, and later drivers got mainlined, the mainlined drivers often expect different properties completely because Linux reviewers don’t like vendor ways of doing things, and now you need very different FDT..

                                                                                                                                                                    The reason for a lot of churn recently has been the addition of overlays to the FDT spec

                                                                                                                                                                    That’s not that recent?? Overlays are from like 2017..

                                                                                                                                                            2. 1

                                                                                                                                                              Further, there’s no standard way to query the hardware, so creating generic kernels like what is done for x86 is effectively impossible. I hear there’s some work on ACPI which could help.

                                                                                                                                                              There’s apparently serious effort put into UEFI.

                                                                                                                                                              With rpi4 uefi boot, FDT isn’t used. I suppose UEFI itself has facilities to make FDT redundant.

                                                                                                                                                              1. 2

                                                                                                                                                                With RPi4-UEFI, you have a choice between ACPI and FDT in the setup menu.

                                                                                                                                                                It’s pretty clever what they did with ACPI: the firmware fully configures the PCIe controller by itself and presents a generic XHCI device in the DSDT as if it was just a directly embedded non-PCIe memory-mapped XHCI.

                                                                                                                                                                1. 1

                                                                                                                                                                  I have to ask, what is the benefit of special casing the usb3 controller?

                                                                                                                                                                  1. 2

                                                                                                                                                                    The OS does not need to have a driver for the special Broadcom PCIe host controller.

                                                                                                                                                                    1. 1

                                                                                                                                                                      How is the Ethernet handled?

                                                                                                                                                                      1. 2

                                                                                                                                                                        Just as a custom device, how else? :)

                                                                                                                                                                        Actually it’s kinda sad that there’s no standardized Ethernet “host controller interface” still… (other than some USB things)

                                                                                                                                                                        1. 1

Oh. So Ethernet is not on PCIe to begin with, then. Only XHCI. I see.

                                                                                                                                                            3. 1

                                                                                                                                                              This doesn’t paint a very good picture of RISC-V, IMHO. It’s like some parody of worse-is-better design philosophy, combined with basically ignoring all research in CPU design since 1991 for a core that’s easy to make an educational implementation for that makes the job of compiler authors and implementers harder. Of course, it’s being peddled by GNU zealots and RISC revanchists, but it won’t benefit the things they want; instead, it’ll benefit vanity nationalist CPU designs (that no one will use except the GNU zealots; see Loongson) and deeply fragmented deep embedded (where software freedom and ISA doesn’t matter other than shaving licensing fees off).

                                                                                                                                                              1. 3

                                                                                                                                                                Ignoring the parent and focusing on hard data instead, RV64GC has higher code density than ARM, x86 and even MIPS16, so the encoding they chose isn’t exactly bad, objectively speaking.

                                                                                                                                                                1. 8

                                                                                                                                                                  Note that Andrew’s dissertation is using integer-heavy, single-threaded, C code as the evaluation and even then, RISC-V does worse than Thumb-2 (see Figure 8 of the linked dissertation). Once you add atomics, higher-level languages, or vector instructions, you see a different story. For example, RISC-V made an early decision to make the offset of loads and stores scaled with the size of the memory value. Unfortunately, a lot of dynamic languages set one of the low bits to differentiate between a pointer and a boxed value. They then use a complex addressing mode to combine the subtraction of one with the addition of the field offset for field addressing. With RISC-V, this requires two instructions. You won’t see that pattern in pure C code anywhere but you’ll see it all over the place in dynamic language interpreters and JITs.
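A sketch of the pattern being described, with a hypothetical tagging scheme in which heap pointers carry tag bit 0 = 1. Field access subtracts the tag and adds the field offset; where the ISA has a plain displacement addressing mode, the compiler folds both constants into a single load:

```c
#include <stdint.h>

/* Hypothetical boxed-value layout: bit 0 = 1 marks a heap pointer,
 * so the real address is (tagged - 1). */
struct pair { intptr_t head, tail; };

intptr_t load_tail(uintptr_t tagged) {
    struct pair *p = (struct pair *)(tagged - 1);   /* strip the tag */
    /* The -1 and the field offset fold into one odd displacement,
     * e.g. "ldr x0, [x0, #7]" on AArch64; with the scaled load
     * offsets described above, that displacement can't be encoded
     * and costs an extra address-computation instruction. */
    return p->tail;
}
```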

                                                                                                                                                                  1. 1

                                                                                                                                                                    I think there was another example of something far more basic that takes two instructions on RISC-V for no good reason, just because of their obsession with minimal instructions. Something return related?? Of course I lost the link to that post >_<

                                                                                                                                                                    1. 1

Interesting. There's work on an extension to help interpreters and JITs, which might or might not mitigate this.

                                                                                                                                                                      In any event, it is far from ready.

                                                                                                                                                                      1. 6

                                                                                                                                                                        I was the chair of that working group but I stepped down because I was unhappy with the way the Foundation was being run.

The others involved are producing some interesting proposals, though a depressing amount of it is trying to fix fundamentally bad design decisions in the core spec. For example, the i-cache is not coherent with respect to the d-cache on RISC-V. That means you need explicit sync instructions after every modification to a code page. The hardware cost of making them coherent is small (i-cache lines need to participate in cache coherency, but they can only ever be in shared state, so the cache doesn't have to do much; if you have an inclusive L2, then the logic can all live in L2) but the overheads from not doing it are surprisingly high. SPARC changed this choice because the overhead on process creation, from the run-time linker having to do i-cache invalidates on every mapped page, was huge. Worse, RISC-V's i-cache invalidate instruction is local to the current core. That means that you actually need to do a syscall, which does an IPI to all cores, which then invalidate the i-cache. That's insanely expensive, but the initial measurements were from C code on a port of Linux that didn't do the invalidates (and didn't break because the i-cache was so small you were never seeing the stale entries).
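The cost shows up in any JIT or run-time linker: after writing instructions to memory, the i-cache must be synchronised before they execute. `__builtin___clear_cache` is the GCC/Clang portability shim for this; on RISC-V Linux it is where the syscall-and-IPI dance described above happens:

```c
#include <string.h>

/* Copy freshly generated machine code into an executable buffer.
 * On a design with a coherent i-cache the flush is (nearly) free;
 * on RISC-V it currently means a syscall that IPIs every core so
 * each can run its local i-cache invalidate. */
void install_code(void *dst, const void *src, size_t len) {
    memcpy(dst, src, len);                                    /* write instructions */
    __builtin___clear_cache((char *)dst, (char *)dst + len);  /* sync i-cache */
}
```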

                                                                                                                                                                        1. 1

                                                                                                                                                                          L1$ not coherent

                                                                                                                                                                          Christ. How did that go anywhere?

                                                                                                                                                                          1. 5

No one who had worked on a non-toy OS or compiler was involved in any of the design work until all of the big announcements had been made and the spec was close to final. The Foundation was set up so that it was difficult for any individuals to contribute (that's slowly changing) - you had to pay $99 or ask for the fee to be waived to give feedback on the specs as an individual. You had to pay more to provide feedback as a corporation, and no corporation was going to pay thousands of dollars in membership plus the salaries of their contributors to provide feedback unless they were pretty confident that they were going to use RISC-V.

                                                                                                                                                                            It probably shouldn’t come as a surprise that saying to people ‘we need your expertise, please pay us money so that you can provide it’ didn’t lead to a huge influx of expert contributors. There were a few, but not enough.

                                                                                                                                                              2. 7

                                                                                                                                                                Keep in mind an ISA isn’t hardware, it’s just a specification.

                                                                                                                                                                1. 6

                                                                                                                                                                  That ties into my point - RISC-V is kinda useless without fabbing potential. And that’s insanely expensive, which means the risk involved is too high to take on established players.

                                                                                                                                                                  1. 9

                                                                                                                                                                    According to the article, it seems that Samsung, Western Digital, NVIDIA, and Qualcomm don’t think the risk is too high, since they plan to use RISC-V. They have plenty of money to throw at any problems, such as inadequate fabbing potential. Hobbyists may benefit from RISC-V, but (like Linux) it’s not just for hobbyists.

                                                                                                                                                                    1. 8

                                                                                                                                                                      According to the article, it seems that Samsung, Western Digital, NVIDIA, and Qualcomm don’t think the risk is too high, since they plan to use RISC-V.

I think it is more accurate to say they plan to use the threat of RISC-V to improve their negotiating position, use it in some corner cases, and keep it as a last-ditch hedge. Tizen is a prime example of such a product.

                                                                                                                                                                      1. 2

                                                                                                                                                                        I think it is more accurate they plan to use the threat of RISC-V to improve negotiating position, use it in some corner cases and as a last ditch hedge.

                                                                                                                                                                        Yet WD and NVIDIA designed their own RISC-V cores. Isn’t it a bit too much for “insurance”?

                                                                                                                                                                        The fact here is that they do custom silicon and need CPUs in them for a variety of purposes. Until now, they paid the ARM tax. From now on, they don’t have to, because they can and do just use RISC-V.

                                                                                                                                                                        I’m appalled at how grossly the impact of RISC-V is being underestimated.

                                                                                                                                                                        1. 4

                                                                                                                                                                          Yet WD and NVIDIA designed their own RISC-V cores. Isn’t it a bit too much for “insurance”?

I don't think so – it isn't purely insurance, it is negotiating power. That power can be worth tens (even hundreds) of millions for companies at the scale of WD and NVIDIA. Furthermore, they didn't have to develop fabs for the first time; both have existing manufacturing prowess and locations. I think it is a rather straightforward ROI-based business decision.

                                                                                                                                                                          The fact here is that they do custom silicon and need CPUs in them for a variety of purposes. Until now, they paid the ARM tax. From now on, they don’t have to, because they can and do just use RISC-V.

                                                                                                                                                                          They will use this to lower the ARM tax without actually pulling the trigger on moving to something as different as RISC-V (except on a few low-yield products to prove they can do it; see Tizen and Samsung’s strategy).

                                                                                                                                                                          I’m appalled at how grossly the impact of RISC-V is being underestimated.

                                                                                                                                                                          Time will tell, but I think RISC-V only becomes viable if Apple buys ARM and snuffs out new customers, maintaining only existing contracts.

                                                                                                                                                                          1. 1

                                                                                                                                                                            I don’t think so – it isn’t purely insurance, it is negotiating power.

                                                                                                                                                                            Do you think they have any reason left to license ARM, when they clearly can do without?

                                                                                                                                                                            Time will tell, but I think RISC-V only becomes viable if Apple buys ARM and snuffs out new customers, maintaining only existing contracts.

                                                                                                                                                                            I see too much industry support behind RISC-V at this point. V extension will be quite the spark, so we’ll see how it plays out after that. All it’ll take is one successful high performance commercial implementation.

                                                                                                                                                                            1. 2

                                                                                                                                                                              Do you think they have any reason left to license ARM, when they clearly can do without?

                                                                                                                                                                              I think you are underestimating the cost of rebuilding an entire ecosystem. I have run ThunderX arm64 servers in production – ARM has massive support behind it, and we still fell into weird issues, niches, and problems. Our task (large-scale OCR) was a fantastic fit, yet setup was still tough, and in the end, due to poor optimizations and other support issues, it probably wasn’t worth it.

                                                                                                                                                                              I see too much industry support behind RISC-V at this point. V extension will be quite the spark, so we’ll see how it plays out after that. All it’ll take is one successful high performance commercial implementation.

                                                                                                                                                                              Well – I think it actually takes a marketplace of commercial implementations, so that selecting RISC-V isn’t single-vendor lock-in forever, but I take your meaning.

                                                                                                                                                                      2. 3

                                                                                                                                                                        As I said up top, I hope this really happens, but I’m not super confident it’ll ever be something we can use to replace our AMD/Intel CPUs. If it just wipes out the current microcontroller and small-CPU space, that’s good too, since those companies don’t usually have good tooling anyway.

                                                                                                                                                                        I just think features-wise it’ll be hard to beat the current players.

                                                                                                                                                                        1. 1

                                                                                                                                                                          I just think features-wise it’ll be hard to beat the current players.

                                                                                                                                                                          Can you elaborate on this point?

                                                                                                                                                                          What are the features? Who are the current players?

                                                                                                                                                                          1. 4

                                                                                                                                                                            Current players are AMD64 and ARM64. Features lacking in RV64 include vector extension.

                                                                                                                                                                            1. 4

                                                                                                                                                                              I notice you’re not the author of the parent post. Still,

                                                                                                                                                                              Features lacking in RV64 include vector extension.

                                                                                                                                                                              The V extension is due to become an active standard by September, if all goes well. That is practically like saying “tomorrow” from an ISA-timeline perspective; to put it in context, RISC-V was introduced in 2010.

                                                                                                                                                                              Bit manipulation (B) is also close to becoming an active standard, and is also pretty important.

                                                                                                                                                                              With these extensions out of the way, and software support where it is today, I see no features stopping low power, high performance implementations appearing and getting into smartphones and such.

                                                                                                                                                                              AMD64 and ARM64.

                                                                                                                                                                              The amd64 ISA is CISC legacy. Popular or not, it’s long overdue for replacement.

                                                                                                                                                                              ARM64 isn’t a thing. You might have meant aarch64 or armv8.

                                                                                                                                                                              I’m particularly interested in whether the parent meant ISAs or company names when referring to current players.

                                                                                                                                                                              1. 4

                                                                                                                                                                                ARM64 isn’t a thing. You might have meant aarch64 or armv8.

                                                                                                                                                                                The naming is a disaster :/ armv8 doesn’t specifically mean 64-bit because there’s technically an armv8 aarch32, and aarch64/32 is just an awful name that most people don’t want to say out loud. So even ARM employees are okay with the unofficial “arm64” name.


                                                                                                                                                                                Another player is IBM with OpenPOWER. Relatively fringe compared to ARM64 (which the Bezos “Cloud” Empire is all-in on, yay), but hey, there is a supercomputer, some very expensive workstations for open-source and privacy enthusiasts :) and all the businesses buying IBM’s machines that we don’t know much about. That’s much more than desktop/server-class RISC-V… and I think they have now made the POWER ISA royalty-free as well.

                                                                                                                                                                                1. 4

                                                                                                                                                                                  SPARC is also completely open. Yes, POWER is open now, but I don’t see why it would fare better than SPARC.

                                                                                                                                                                                  1. 1

                                                                                                                                                                                    In terms of diversity of core designers and chip makers, maybe not. But POWER as an ISA is generally doing much better. IBM clearly cares about making new powerful chips and is cultivating a community around open firmware.

                                                                                                                                                                                    Who cares about SPARC anymore? Seems like for Oracle it’s kind of a liability. And Fujitsu, probably the most serious SPARC company as of late, is on ARM now.

                                                                                                                                                                                  2. 3

                                                                                                                                                                                    The naming is a disaster :/ armv8 doesn’t specifically mean 64-bit because there’s technically an armv8 aarch32

                                                                                                                                                                                    Amusingly, the first version of the ARMv8 spec made both AArch32 and AArch64 optional. I implemented a complete 100% standards-compliant soft core based on that version of the spec. They later clarified it so that you had to implement at least one out of AArch32 and AArch64.

                                                                                                                                                                                2. 1

                                                                                                                                                                                  Current players are AMD64 and ARM64

                                                                                                                                                                                  And ARM32/MIPS/AVR/SuperH/pick your favorite embedded ISA. The first real disruption brought by RISC-V will be in microcontrollers and in ASICs. With RISC-V, your board/chip isn’t tied to a single company (like ARM Holdings, MIPS Technologies, Renesas, etc.). If they go under, exit the uC market, slash their engineering budgets, or start charging double, you can always license from another vendor (or roll your own core). In addition, the tooling for RISC-V is getting good fairly fast and is mostly open source. You don’t have to use the vendor’s closed-source C compiler or be locked into their RTOS.

                                                                                                                                                                                  1. 1

                                                                                                                                                                                    The first real disruption

                                                                                                                                                                                    Indeed. The second wave is going to start soon, triggered by the stable status of the V and B extensions.

                                                                                                                                                                                    This will enable Qualcomm and friends to release smartphone-tier SoCs with RISC-V CPUs in them.

                                                                                                                                                                          2. 6

                                                                                                                                                                            Yes, fabbing is expensive, but SiFive is a startup, and it still managed to fab RISC-V chips that can run a Linux desktop. I don’t think there is a need to be too pessimistic.

                                                                                                                                                                            1. 4

                                                                                                                                                                              The economics are quite interesting here. Fabs are quite cheap if you are a brand-new startup or a large established player. They give big discounts to small companies that have the potential to grow into large customers (because if you get them early then they end up at least weakly tied into a particular cell library and you have a big long-term revenue stream). They give good prices to big customers, because they amortise the setup costs over a large number of wafers. For companies in the middle, the prices are pretty bad. SiFive is currently getting the steeply discounted rate. It will be interesting to see what happens as they grow.
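                                                                                                                                                                              The amortization point can be made concrete with a toy calculation. A minimal sketch in Python; all numbers here are hypothetical, purely for illustration:

```python
# Toy model: a customer pays a fixed setup cost (masks, cell-library
# bring-up, etc.) plus a per-wafer price. All numbers are hypothetical.
def effective_wafer_cost(setup_cost, wafer_price, wafers):
    """Per-wafer cost once the fixed setup cost is spread over the run."""
    return wafer_price + setup_cost / wafers

# A small mid-tier run vs. a high-volume customer, same setup cost:
mid_tier = effective_wafer_cost(2_000_000, 5_000, 100)        # 25000.0
high_volume = effective_wafer_cost(2_000_000, 5_000, 100_000)  # 5020.0
print(mid_tier, high_volume)  # the setup cost dominates the small run
```

                                                                                                                                                                              Spreading the same fixed cost over 1,000x the wafers is why a fab can quote big customers a much better effective price, and why a steeply discounted startup rate is a bet that the customer’s volume will grow.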

                                                                                                                                                                            2. 5

                                                                                                                                                                              RISC-V is kinda useless without fabbing potential.

                                                                                                                                                                              The RISC-V Foundation has no interest in fabbing themselves.

                                                                                                                                                                              And that’s insanely expensive, which means the risk involved is too high to take on established players.

                                                                                                                                                                              Several chips with RISC-V-based CPUs in them have been fabricated. Some already ship as components in other products, and some are available for sale.

                                                                                                                                                                              RISC-V’s got significant industry backing.

                                                                                                                                                                              Refer to: https://riscv.org/membership/

                                                                                                                                                                              1. 3

                                                                                                                                                                                There are a number of companies that provide design and fabbing services, or at least help you get a design realized.

                                                                                                                                                                                The model is similar to e.g. Solr, where the core is an open-source implementation, but enterprise services are provided by a number of companies.

                                                                                                                                                                                1. 2

                                                                                                                                                                                  With ARM on the market, RISC-V has to be on a lot of people’s minds; specifically, those folks that are already licensing ARM’s ISA, and producing chips…

                                                                                                                                                                              2. 3

                                                                                                                                                                                Open source OSs can take on Microsoft with enough coders because it’s just software

                                                                                                                                                                                Yet we haven’t seen that happening either. In general, creating a product that people love requires a bit more than open-source software. It requires vision, a deep understanding of humans, and a rock-solid implementation. This usually means the cathedral approach, which is exactly the opposite of the FOSS approach.

                                                                                                                                                                                1. 5

                                                                                                                                                                                  Maybe not for everyone on the market, but I’ve been using Linux exclusively for over 10 years now, and I’m not the only one. Also, for some purposes (smartphones, servers, SBCs, a few others) Linux is almost the only choice.

                                                                                                                                                                                  1. 3

                                                                                                                                                                                    You are absolutely in the minority though in terms of desktop computing. The vast majority of people can barely get their hand held through Mac OS, much less figuring out wtf is wrong with their graphics drivers or figuring out why XOrg has shit out on them for the 3rd time that week, or any number of problems that can (and do) crop up when using Linux on the desktop. Android, while technically Linux, doesn’t really count IMO because it’s almost entirely driven by the vision, money, and engineers of a single company that uses it as an ancillary to their products.

                                                                                                                                                                                    1. 6

                                                                                                                                                                                      That’s a bit of a stereotype - I haven’t edited an Xorg conf file in a very long time. It’s my daily driver so stability is a precondition. My grandma runs Ubuntu and it’s fine for what she needs.

                                                                                                                                                                                      1. 3

                                                                                                                                                                                        Not XOrg files anymore, maybe monitors.xml, maybe it’s xrandr, whatever. I personally just spent 4+ hours trying to get my monitor + graphics setup to behave properly with my laptop just last week. Once it works, it tends to keep working (though not always, it’s already broken on me once this week for seemingly no reason) unless you change monitor configuration or it forgets your config for some reason, but getting it to work in the first place is a massive headache depending on the hardware. Per-monitor DPI scaling is virtually impossible on XOrg, and Wayland is still a buggy mess with limited support. Things get considerably more complex with a HiDPI, multi-head setup, which are things that just work on Windows or Mac OS.

                                                                                                                                                                                        The graphics ecosystem for Linux is an absolute joke too. That being said, my own mother runs Ubuntu on a computer that I built and set up, it’s been relatively stable since I put in the time to get it working in the first place.

                                                                                                                                                                                  2. 2

                                                                                                                                                                                    Not on the desktop, for sure. Server-side, however, GNU/Linux is a no-brainer, the default choice.

                                                                                                                                                                                1. 1

                                                                                                                                                                                  People may also want to check out rtw, a “time tracker CLI tool” written in Rust. All I can say is that it compiled fine and its basic functions seem to work; I only just installed it, but that seems like a good start :)

                                                                                                                                                                                  1. 2

                                                                                                                                                                                    It looks very nice indeed! I like the timeline feature :)

                                                                                                                                                                                  1. 5

                                                                                                                                                                                    Have you experienced any issues specific to the ARM architecture? I occasionally hear that Linux still has issues on ARM workstations/laptops, but it’s hard to evaluate from the outside.

                                                                                                                                                                                    1. 8

                                                                                                                                                                                      I don’t (yet) own a PineBook Pro, but do own a PinePhone and a brace of Raspberry Pis (3 x Pi3s, 1 x Pi).

                                                                                                                                                                                      The only issues I’ve experienced are with proprietary software like games and media players that don’t offer ARM binaries. Otherwise it’s been plain sailing for me, and that includes laying the groundwork for a Common Lisp based dev stack for the PinePhone.

                                                                                                                                                                                      1. 1

                                                                                                                                                                                        zge - I actually haven’t. I wanted a daily driver that consumes very little power and is cheap, and I have no real issues. I only use it as an “ssh + browser” system, since the stuff I do is almost 100% in the terminal anyway :)

                                                                                                                                                                                      2. 6

                                                                                                                                                                                        I have a Pinebook Pro, and most things work great. The main problem is that some GUI apps don’t have official binaries, and casually compiling them yourself is often too difficult. As a result, you have to go hunt down unofficial binaries, which may or may not displease you. Some unofficial binaries I use:

                                                                                                                                                                                        I’m using the current default Manjaro KDE, so I’m compiling a lot of stuff using AUR, and rarely run into problems. Sometimes it will warn you that “The following packages are not compatible with your architecture” but so long as it doesn’t include binaries, most terminal-based software will compile just fine despite the warning, and some GUI apps compile as well (takes some trial and error to figure out which).
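                                                                                                                                                                                        For context on where that warning comes from: makepkg checks the arch=() array in each package’s PKGBUILD against your machine. A minimal sketch of the idea in Python, with a hypothetical PKGBUILD fragment (the real check lives inside makepkg, which also has an --ignorearch/-A flag to skip it):

```python
# Sketch of makepkg's architecture check. The PKGBUILD content below is
# a hypothetical example, not a real package.
pkgbuild = "arch=('x86_64')"   # many AUR packages only list x86_64
machine = "aarch64"            # e.g. a Pinebook Pro

# A package is considered compatible if it names the machine arch or 'any'.
compatible = machine in pkgbuild or "'any'" in pkgbuild
print("compatible" if compatible else
      "warning: package is not compatible with your architecture")
```

                                                                                                                                                                                        As noted above, for source-only terminal software the warning is often harmless, since the package just hasn’t been tested on (or had aarch64 added to) its arch array.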

                                                                                                                                                                                        1. 2

                                                                                                                                                                                          What issues did you experience when compiling things yourself? The “usual”, as in not being sure which build libraries are required, or something else?

                                                                                                                                                                                          1. 4

                                                                                                                                                                                            I just mean that if I try to compile something, and the build fails, I generally don’t know enough to even start trying to fix it. I’m a noob. If a build fails I usually just give up and try something else.

                                                                                                                                                                                            On a few occasions I’ve tracked down the source of the problem and “fixed” it. For instance, Alacritty won’t compile on the PBP because the PBP doesn’t support gles3 by default yet, but I found a branch that compiles for me with gles2: https://github.com/nuumio/alacritty/tree/cyclopsian-gles2-nuumio

                                                                                                                                                                                          2. 1

                                                                                                                                                                                            Yes, I totally understand! That is “the price” people pay for being on a pretty new architecture. I assume every single person who has bought one is aware of it. I am thinking about having a dedicated “build machine”, like a RockPro64 from Pine, that takes care of the compiling.