1. 11

    Note that the internet uplink remains untouched at 1 Gbit/s

    cries in suburban Texan

    1. 4

      telecom germany wants 55€ for 250 Mbit/s (if it’s available at all), in another post they said they pay ~50€ for 1Gbit/s T_T

      1. 4

        I have gigabit cable from vodafone in Germany. I def. get the downstream when I am using an ethernet cable. 500Mbit via Wifi is def. also the norm. Upstream is unfortunately only 50Mbit. All in all it costs me 50€/month. I think that is an okay price.

        1. 1

          Friend of mine was happily switching to 300Mbit unitymedia and then got total outages over days when corona hit..

          1. 1

            It works well for us because we live on the ground floor and there is some sort of problem with the cable in the apartments above us. But due to certain people not being on speaking terms in our house for some reason, it is not going to be fixed. Therefore nobody else has cable internet and we don’t share the line. Sometimes human drama is to one’s advantage.

            That being said, there is the occasional 6 hour outage at night with vodafone cable too, but I guess that is unavoidable with residential internet.

            1. 1

              I’ve got the 100 Mbit from t-com but without any outage. I really dislike any kind of glitches to my connection.

        2. 1

          Here they offer 10 Gbit/s for ~40€ but I can’t really justify it. I’m happy with my cheap 200 Mbit.

        3. 3

          Cheer up; in my part of London, UK, I can’t get more than 8Mb. “First world”, “Global capital”.

        1. 2

          What happens if you are at the receiving end of this Cellebrite software, and your phone is one of the phones that has these completely unrelated files on it?

          Maybe your “interviewer” won’t find it as funny. Maybe it’s still better than the alternative? If they’re actually doing it, that is.

          1. 5

            If you are being “interviewed” by a government that does not have guarantees around civil rights and rule of law, a cheeky exploit likely won’t matter: the presence of Signal may be enough to convict you in the eyes of the state, let alone anything else found on your phone.

            If the state does respect rights and rules of law, I think the presence of an exploit targeting forensic gathering tools that you didn’t install yourself could arguably introduce enough doubt to the process to exclude anything identified in the search.

            1. 4

              If the state does respect rights and rules of law

              Show me a state whose spooks and counterterrorism apparatus respect rights and rules of law and I’ll show you a bridge you can buy for 5 bucks.

              1. 3

                Some governments have been known to torture and imprison people on the basis of owning a Casio watch: https://www.theguardian.com/world/2011/apr/25/guantanamo-files-casio-wristwatch-alqaida

                Having Signal or WhatsApp on your phone is sure to be excuse enough if the government you’re dealing with doesn’t guarantee civil rights or the rule of law

              2. 3

                Your interviewer won’t find it, that’s the point. Your interviewer’s software will parse the file, and the file triggers a remote code execution. But why would that remote code execution be displayed to the interviewer as a user?

                But as it happens, I sort of know someone who works with this. His employer will be angry at Cellebrite, and the contract will soon say “all CVEs must be applied, or else they pay damages”. Moxie will not have overlooked this aspect.

                1. 3

                  Not knowing more than what’s presented in the article, I imagined for example that the interviewer could have old reports open, or visible in a file browser, and then the exploit would modify them all. That could be one way to notice that something just happened.

                  Like when I change some file from underneath my text editor, and it goes “the file changed on disk, what do you want to do?”

                  1. 2

                    A common fallacy of general computing.

                    When you can program a computer to do anything, that includes any imaginable bad things. Just choose a meaning of “bad”, and general computing includes an instance of that. The fallacy is to transfer the badness to general computing, or the developer who can choose freely what code to write and run.

                    A program such as Signal can run any code on a Cellebrite computer, including code which is bad for the Signal user. That doesn’t make Signal bad, or imply that the Signal developers would act against their user’s interest, or even that they might. Just that they could.

                    1. 1

                      Modifying past and future reports is not something I came up with. It’s right there in the article.

                      1. 1

                        Yes, because it’s a good threat.

                        Cellebrite has two main customer groups, one of them involves prosecution. That statement tells prosecutors that the reports they use aren’t reliable. Those prosecutors need reliability and will tell their Cellebrite salescritters that, so it’s an excellent way to threaten Cellebrite’s income.

              1. 7
                fn main() {
                    println!("Hello, world!");
                }
                

                It is pure, and innocent, and devoid of things that can fail, which is great.

                Nitpick, but println! can panic.
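
                For what it’s worth, a hedged sketch of the non-panicking alternative: route the output through a fallible writer with writeln!, so a write failure (e.g. a closed stdout pipe) surfaces as an Err instead of a panic. The greet helper is hypothetical, just for illustration.

                ```rust
                use std::io::{self, Write};

                // Write through any fallible writer; an I/O error becomes an Err
                // for the caller to handle, rather than a panic inside println!.
                fn greet<W: Write>(out: &mut W) -> io::Result<()> {
                    writeln!(out, "Hello, world!")
                }

                fn main() -> io::Result<()> {
                    let stdout = io::stdout();
                    let mut lock = stdout.lock();
                    greet(&mut lock)
                }
                ```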

                1. 5

                  Productive Compile Times

                  With Bevy you can expect 0.8-3.0 seconds with the “fast compiles” configuration

                  That seems much more reasonable than any Rust project I’ve touched. I was curious how that works, and found some info.

                  1. 12

                    Can we please stop using tabs altogether (the last vestigial remnant of MDI) and move towards BeOS’s Stack paradigm, in which each window title is “a tab” and you can stack together different windows?

                    The stack paradigm is easier for the users: one concept (windows) instead of two similar-but-different concepts (windows and tab-inside-windows).

                    A graphical example: https://www.haiku-os.org/docs/userguide/en/images/gui-images/gui-s+t.gif

                    1. 7

                      Every time I see tabs mentioned, I’m thinking about window management. The window manager is too weak, and therefore applications themselves had to step in and invent their own.

                      So basically agreeing :)

                      1. 5

                        Counterpoint: there are two major kinds of tabs, which the article seems to think about as dynamic and static. I would call them task-windows and multi-rooted trees.

                        A task-window is the kind of tab that MDI and browsers use: effectively a complete copy of the application main window, but space-compressed. A perfect window manager might be the best way to handle these, but it would have to be good enough for every application’s needs. I haven’t met a perfect window manager yet, but I haven’t seen them all.

                        A multi-rooted tree is most often found in giant config/preference systems, e.g. VLC and LibreOffice. It could be represented as a single-rooted tree, but the first-level branches are trunk-sized and so independent that a user will often only want to tweak things in one branch. Separating them out into tabs is a pretty reasonable abstraction. It’s not the only way of breaking the tree up, but it maps nicely from category to tab, subcategory to delineated section.

                        1. 3

                          Another counterpoint is Firefox with a lot of tabs. There are some optimizations that Firefox can do because it has access to domain knowledge. Like not loading all of the tabs on start. It could probably unload tabs as well. In order to do that the window manager needs to expose a richer interface.

                      2. 4

                        Former BeOS fan here.

                        Please no. :-(

                        This is all IMHO, but…

                        There are at least 2 different & separate usage cases here.

                        № 1, I am in some kind of editor or creation app & I want multiple documents open so I can, say, cut-and-paste between them. If so, I probably mainly want source & destination, or main workpiece & overflow. Here, title-bar tabs work.

                        № 2, I’m in a web browser, where my normal usage pattern goes: home page → lots of tabs (50+) → back down to a few → back to lots of tabs (and repeat)

                        In this instance, tabs at the top no longer work. They shrink to invisibility or unreadability. In this use case, I want them on the side, where I can read lots of lines of text in a column of a fixed width. Hierarchical (as in Tree-Style Tabs) is clever, yes, but I don’t need it; I already have a hierarchy: a browser window, and inside that, tabs. Those 2 levels are enough; I rarely need more than 2 or 3 browser windows, so I don’t need lots of levels of hierarchy in the tabs, and TST is unnecessary overload and conceptual bloat.

                        The fact that Chrome can’t do this is why I still use Firefox. Or on machines where I don’t have to fight with Github, Waterfox, which is a fork in which XUL addons still work & I don’t need to lose another line of vertical space to the tab bar. In Waterfox as in pre-Quantum Firefox, I can combine my tabs sidebar with my bookmarks sidebar, and preserve most of those precious vertical pixels.

                        We have widescreens now. We have nothing but widescreens now. Vertical space is expensive, horizontal space is cheap. Let’s have window title bars wherever we want. How about on the side, like wm2? https://upload.wikimedia.org/wikipedia/commons/3/3b/Wm2.png

                        That worked well. That can mix-and-match with BeOS-style tabs very neatly.

                        1. 4

                          Microsoft was working on a feature called Sets for Windows 10 that would basically do this. I was very sad to learn that the project was axed, even after making it into an Insiders build :(

                          1. 2

                            Compare with tabbed application windows in current macOS. In almost any Cocoa app, tabs can be torn off to make their own window, or merged with another window. I’m not as familiar with Be, but the main differences seem to be that tabs still go in a bar under the title and can only be joined when they’re the same kind. I’m curious how stacking windows of different kinds would feel. Maybe a window would become more like a task workspace.

                            1. 2

                              I like the idea, and have tried it in Haiku, but in practice it was harder to use for me.

                              Maybe I missed some shortcuts? I was dragging windows to each other manually.

                              Maybe it’s just a matter of getting used to it? I don’t know.

                              1. 3

                                Applications, like web browsers, could create shortcuts for creating a new window as a tab of the current window. I think that would make it near identical in terms of behavior.

                            1. 2

                              Looking forward to this and many other things in Gnome 40. :)

                              1. 3

                                I’ve been using Recursive from the article as my UI font for a while.

                                The uniwidth feature makes some things look less janky, for example typing in a URL/search bar, where the matched text is bold. As additional letters match, the other letters all stay in place, and it’s nice.

                                Other things I like about it are that it can be monospaced, which I use for code/as my default monospace font, as well as “casual,” which I use as my serif font on the web and elsewhere. So it covers all of my UI needs.

                                1. 7

                                  this is remarkable!

                                  for the sake of my understanding, what are the other popular options for installing a drop-in c/c++ cross compiler? A long time ago, I used Sourcery Codebench, but I think that was a paid product

                                  1. 7

                                    Clang is a cross-compiler out of the box, you just need headers and libraries for the target. Assembling a sysroot for a Linux or BSD system is pretty trivial, just copy /usr/{local}/include and /usr/{local}/lib and point clang at it. Just pass a --sysroot={path-to-the-sysroot} and -target {target triple of the target} and you’ve got cross compilation. Of course, if you want any other libraries then you’ll also need to install them. Fortunately, most *NIX packaging systems are just tar or cpio archives, so you can just extract the ones you want in your sysroot.

                                    It’s much harder for the Mac. The license for the Apple headers, linker files, and everything else that you need explicitly prohibits this kind of use. I couldn’t see anything in the Zig documentation that explains how they get around this. Hopefully they’re not just violating Apple’s license agreement…

                                    1. 3

                                      Zig bundles Darwin’s libc, which is licensed under APSL 2.0 (see: https://opensource.apple.com/source/Libc/Libc-1044.1.2/APPLE_LICENSE.auto.html, for example).

                                      APSL 2.0 is both FSF and OSI approved (see https://en.wikipedia.org/wiki/Apple_Public_Source_License), which makes me doubt that this statement is correct:

                                      The license for the Apple headers, linker files, and everything else that you need explicitly prohibits this kind of use.

                                      That said, if you have more insight, I’m definitely interested in learning more.

                                      1. 1

                                        I remember some discussion about these topics on Guix mailing lists, arguing convincingly why Guix/Darwin isn’t feasible for licensing issues. Might have been this: https://lists.nongnu.org/archive/html/guix-devel/2017-10/msg00216.html

                                      2. 1

                                        The license for the Apple headers, linker files, and everything else that you need explicitly prohibits this kind of use.

                                        Can’t we doubt the legal validity of such prohibition? Copyright often doesn’t apply where it would otherwise prevent interoperability. That’s why we have third party printer cartridges, for instance.

                                        1. 2

                                          No, interoperability is an affirmative defence against copyright infringement but it’s up to a court to decide whether it applies.

                                      3. 4

                                        When writing the blog post I googled a bit about cgo specifically and the only seemingly general solution for Go I found was xgo (https://github.com/karalabe/xgo).

                                        1. 2

                                          This version of xgo does not seem to be maintained anymore; I think most xgo users now use https://github.com/techknowlogick/xgo

                                          I use it myself, and albeit the tool is very heavy, it works pretty reliably and does what is advertised.

                                          1. 2

                                            Thanks for mentioning this @m90. I’ve been maintaining my fork for a while, and just last night automated creating PRs for new versions of golang when detected to reduce time to creation even more.

                                        2. 3

                                          https://github.com/pololu/nixcrpkgs will let you write nix expressions that will be reproducibly cross-compiled, but you also need to learn nix to use it. The initial setup and the learning curve are a lot more demanding than zig cc and zig c++.

                                          1. 3

                                            Clang IIRC comes with all triplets (that specify the target, like powerpc-gnu-linux or whatever) enabled OOTB. You can then just specify what triplet you want to build for.

                                            1. 2

                                              But it does not include the typical build environment of the target platform. You still need to provide that. Zig seems to bundle a libc for each target.

                                              1. 2

                                                I have to wonder how viable this will be when your targets become more broad than Windows/Linux/Mac…

                                                1. 6

                                                  I think the tier system provides some answers.

                                                  1. 3

                                                    One of the points there is that libc is available when cross-compiling.

                                                    On *NIX platforms, there are a bunch of things that are statically linked into every executable and provide what you need for things like getting to main. These used to be problematic for anything other than GCC to use because the GCC exemption to GPLv2 only allowed you to ignore the GPL if the thing that inserted them into your program was GCC. In GCC 4.3 and later, the GPLv3 exemption extended this to any ‘eligible compilation process’, which allows them to be used by other compilers / linkers. I believe most *BSD systems now use code from NetBSD (which rewrote a lot of the CSU stuff) and LLVM’s compiler-rt. All of these are permissively licensed.

                                                    If you’re dynamically linking, you don’t actually need the libc binary, you just need something that has the same symbols. Apple’s ld64 supports a text file format here so that Apple doesn’t have to ship all of the .dylib files for every version of macOS and iOS in their SDKs. On ELF platforms, you can do a trick where you strip everything except the dynamic symbol tables from the .so files: the linker will still consume them and produce a binary that works if you put it on a filesystem with the original .so.

                                                    As far as I am aware, macOS does not support static linking for libc. They don’t ship a libc.a and their libc.dylib links against libSystem.dylib, which is the public system call interface (and does change between minor revisions, which broke every single Go program, because Go ignored the rules). If I understand correctly, a bunch of the files that you need to link a macOS or iOS program have a license that says that you may only use them on a Mac. This is why the Visual Studio Mac target needs a Mac connected on the network to remotely access and compile on, rather than cross-compiling on a Windows host.

                                                    I understand technically how to build a cross-compile C/C++ toolchain: I’ve done it many times before. The thing I struggle with on Zig is how they do so without violating a particularly litigious company’s license terms.

                                                    1. 2

                                                      This elucidates a lot of my concerns better than I could have. I have a lot of reservations about the static linking mindset people get themselves into with newer languages.

                                                      To be specific on the issue you bring up: Most systems that aren’t Linux either heavily discourage static libc or ban it - and their libcs are consistent unlike Linux’s, so there’s not much point in static libc. libc as an import library that links to the real one makes a lot of sense there.

                                          1. 1

                                            Doing some cleanup and improvements on Rocket, my Gemini browser.

                                            Hopefully I can tag a v0.1 soon.

                                            1. 6

                                              When I was playing around with Rocket, I felt that Rust was an easy way to make a web API/website.

                                              Compared to Django or Rails or similar frameworks, it seemed to me like there was significantly less to worry about.

                                              But I haven’t built anything serious with it, so maybe I would change my mind if I did.

                                              Rust is not my favourite language (that would be Zig) but I would use it if I had to build a web app.

                                              1. 2

                                                I think the article makes some good points on why writing APIs might be easy, but proper websites / web applications not so much if you factor in things like form validation or CSRF tokens. And where there might be middlewares for that they are not standardized across web frameworks or toolkits.

                                                1. 1

                                                  I’ve used Rocket in some internal tooling in my company. I’ve liked the API; but the requirement of using a nightly Rust compiler build is a real dealbreaker: I regularly had to spend time fighting with the compiler in order to compile the library. Even when it finally compiled, a few months and a few rustup runs later, it broke again.

                                                  I’ve switched to Actix, which requires a stable compiler version.

                                                  1. 1

                                                    I had an argument with a person who said: the Rust compiler changes too much, they need to update their code constantly.

                                                    I was really surprised because my experience was the opposite, with very rare exceptions.

                                                    They failed to mention that they used the nightly compiler. When we discovered that this was the cause for our different experiences, they basically said that in their opinion, you had to use the nightly compiler to use anything in their niche (ML).

                                                    I wonder for how many people this seems to be the case, and if popular libraries like Rocket, which depend on nightly, reinforce this.

                                                    1. 1

                                                      Yeah.

                                                      That’s one reason why I haven’t done anything serious with it.

                                                      It should work on stable in the next release it seems.

                                                      1. 1

                                                        It should work on stable in the next release it seems.

                                                        If I’d known that, I’d have left those tools running on Rocket, dammit! ;)

                                                  1. 7

                                                    This seems to assume that all of cURL can be reimplemented without the use of unsafe.

                                                    Maybe it can, I don’t know, but I know other web-related Rust projects have had memory bugs/CVEs.

                                                    1. 4

                                                      Since cURL doesn’t have much need to touch raw memory or external APIs beyond what the standard library provides, I think it’s safe to assume that it shouldn’t need to use unsafe. Any time that it does would be for a performance tradeoff.

                                                      1. 1

                                                        I would imagine that it could. Rust without unsafe is pretty limited, in that the only data structure that it supports is a tree, but that is pretty much all that you need for most parsers and most of the dangerous code (i.e. the code that interacts with untrusted data) in cURL is parsing protocol messages.
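
                                                        That kind of message parsing can be sketched in entirely safe Rust; parse_status_line below is a hypothetical helper, not taken from any real cURL port:

                                                        ```rust
                                                        // Minimal sketch: parse an HTTP status line without any unsafe.
                                                        // Malformed input is rejected with None rather than risking
                                                        // out-of-bounds reads.
                                                        fn parse_status_line(line: &str) -> Option<(&str, u16, &str)> {
                                                            let mut parts = line.trim_end().splitn(3, ' ');
                                                            let version = parts.next()?;
                                                            let code: u16 = parts.next()?.parse().ok()?;
                                                            let reason = parts.next().unwrap_or("");
                                                            if !version.starts_with("HTTP/") || !(100..=599).contains(&code) {
                                                                return None;
                                                            }
                                                            Some((version, code, reason))
                                                        }

                                                        fn main() {
                                                            assert_eq!(
                                                                parse_status_line("HTTP/1.1 404 Not Found\r\n"),
                                                                Some(("HTTP/1.1", 404, "Not Found"))
                                                            );
                                                            assert_eq!(parse_status_line("garbage"), None);
                                                        }
                                                        ```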

                                                        1. 3

                                                          …the only data structure that it supports is a tree…

                                                          Technically true, if you consider records, sequences and such to be special cases of trees, but not terribly useful. You can make anything out of a tree after all. It would be more precise to say that safe Rust only supports non-cyclic data structures.

                                                          1. 1

                                                            No, safe Rust doesn’t support arbitrary DAGs. An arbitrary DAG allows shapes where one node has two pointers to it from places higher up. This requires aliasing, which requires unsafe in Rust. The most common way of doing this in Rust is via the Rc type in the standard library, which can itself be implemented only with unsafe code.
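
                                                            A small sketch of that shape, assuming only the standard library: two parents alias one child through Rc, while the user code stays entirely safe (the unsafe lives inside Rc’s implementation).

                                                            ```rust
                                                            use std::rc::Rc;

                                                            // A diamond-shaped DAG: two parents share one child node.
                                                            struct Node {
                                                                name: &'static str,
                                                                children: Vec<Rc<Node>>,
                                                            }

                                                            fn main() {
                                                                let leaf = Rc::new(Node { name: "leaf", children: vec![] });
                                                                let a = Node { name: "a", children: vec![Rc::clone(&leaf)] };
                                                                let b = Node { name: "b", children: vec![Rc::clone(&leaf)] };
                                                                // Both parents point at the same allocation: three strong refs.
                                                                assert_eq!(Rc::strong_count(&leaf), 3);
                                                                assert!(Rc::ptr_eq(&a.children[0], &b.children[0]));
                                                                println!("{} and {} share {}", a.name, b.name, leaf.name);
                                                            }
                                                            ```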

                                                      1. 49

                                                        Good lord, how is it elegant to need to turn your code inside-out to accomplish the basic error handling available in pretty much every other comparable language from the last two decades? So much of Go is well-marketed Stockholm Syndrome.

                                                        1. 16

                                                          I don’t think that responding with a 404 if there are no rows in the database is something that any language supports out of the box. Some frameworks do, and they all have code similar to this for it.

                                                          1. 3

                                                            And sadly, error handling is so often done in a poor manner in the name of abstraction, though a really bad one that effectively boils down to ignoring that things can go wrong, meaning that one ends up digging through many layers when they actually do go wrong.

                                                            People eventually give up and copy paste StackOverflow[1] solutions in the hopes that one of them will work, even when the fix is more accidental and doesn’t fix the root cause.

                                                            The pinnacle was once checking code that supposedly could not fail. The reason was that every statement was wrapped in a try with an empty catch.

                                                            But back to the topic. Out of the box is all good and nice until you want to do something different, which in my experience happens more often than one would think. People sometimes create workarounds: in the example of non-existing rows, doing a count before the fetch, so two queries instead of one, just to avoid cases where a no-rows error would otherwise be thrown.

                                                            Now I am certainly not against (good) abstractions or automation, but seeing people fighting against those in many instances makes me prefer systems where they can easily be added and easily be reasoned about, like in this example.

                                                            [1] Nothing against StackOverflow, just blindly copy pasting things, one doesn’t even bother to understand.

                                                          2. 10

                                                            In what way is Go’s error handling turning my code inside out?

                                                            1. 6

                                                              Pike has set PLT back at least a decade or two.

                                                              1. 7

                                                                It is possible to improve the state of the art while also having a language like Go that is practical, compiles unusually fast and is designed specifically to solve what Google found problematic with their larger C++ projects.

                                                                1. 8

                                                                  compiles unusually fast

                                                                  There is nothing unusual about it. It’s only C++ and Rust that are slow to compile. Pascal, OCaml, Zig and the upcoming Jai are decent. It’s not that Go is incredible, it’s that C++ is really terrible in this regard (not a single, but a lot of different language design decisions made it this way).

                                                                  1. 3

                                                                    For single files, I agree. But outright disallowing unused dependencies, and designing the language so that it can be parsed in a single pass, really helps for larger projects. I agree on Zig and maybe Pascal too, but in my experience, OCaml projects can be slow to compile.

                                                                    1. 2

                                                                      I’m enjoying tinkering with Zig but I do wonder how compile times will change as people do more and more sophisticated things with comptime.

                                                                      1. 2

                                                                        My impression from hanging out in #zig is that the stage 1 compiler is known to be slow and inefficient, and is intended as a stepping-stone to the stage 2 compiler, which is shaping up to be a lot faster and more efficient.

                                                                        Also there’s the in-place binary patching that would allow for very fast incremental debug builds, if it pans out.

                                                                    2. 2

                                                                      Don’t forget D.

                                                                    3. 1

                                                                      My experience with Go is that it’s actually very slow to compile. A whole project clean build might be unusually fast, but it’s not so fast that the build takes an insignificant amount of time; it’s just better than many other languages. An incremental build, however, is slower than in most other languages I use; my C++ and C build/run/modify cycle is usually significantly faster than in Go, because its incremental builds are less precise.

                                                                      In Go, incremental builds are on the package level, not the source level. A package is recompiled when either a file in the same package changes, or when a package it depends on changes. This means, most of the time, that even small changes require recompiling quite a lot of code. Contrast with C, where most of the time I’m working on just a single source file, where a recompile means compiling a single file and re-linking.

                                                                      C’s compilation model is pretty bad and often results in unnecessary work, especially as it’s used in C++, but it means you can iterate on an implementation by recompiling a single file per build.

                                                                      1. 1

                                                                        I have not encountered many packages that take more than one second to compile, and the Go compiler typically parallelizes compilation at the package level, which improves things further. I’m curious to see counterexamples, if you have them.

                                                                    4. 4

                                                                      I don’t remember anyone in POPL publishing a PLT ordering, partial or total. Could you show me according to what PLT has been set back a decade?

                                                                    5. -2

                                                                      Srsly, I was looking for simpler, and was disappointed by the false promise.

                                                                    1. 2

                                                                      Wow, I love this.

                                                                      I wish I could make a game with it.

                                                                      1. 3

                                                                        Disclaimer: I haven’t watched the video, but having said that, doesn’t Zig have manual memory management? If I screw it up, doesn’t it segfault or buffer overflow? That would seem to make it dangerous for NIFs.

                                                                        1. 8

                                                                          Yeah, memory management is manual. It’s basically a modern C. That said, it has some significant improvements that make it easier not to screw up memory management:

                                                                          • optional types instead of null help make sure you’re not dereferencing a null pointer
                                                                          • defer/errdefer statements help with cleanups
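
                                                                          The second bullet is easiest to appreciate by comparison. Here’s a rough, hypothetical C sketch (function and file name made up) of the manual cleanup dance that Zig’s defer automates; in Zig, each defer would sit right next to the acquisition and run automatically on every exit path:

                                                                          ```c
                                                                          #include <stdio.h>
                                                                          #include <stdlib.h>

                                                                          /* In C, every exit path must release resources by hand, often via
                                                                           * goto-cleanup; Zig's defer runs the cleanup automatically when the
                                                                           * scope exits, on success and error paths alike. */
                                                                          static int process(const char *path)
                                                                          {
                                                                              int ret = -1;
                                                                              FILE *f = fopen(path, "r");
                                                                              if (!f)
                                                                                  return -1;
                                                                              char *buf = malloc(4096);
                                                                              if (!buf)
                                                                                  goto close_file;            /* must not leak the FILE */
                                                                              ret = (int)fread(buf, 1, 4096, f);
                                                                              free(buf);                      /* Zig: defer allocator.free(buf); */
                                                                          close_file:
                                                                              fclose(f);                      /* Zig: defer f.close(); */
                                                                              return ret;
                                                                          }

                                                                          int main(void)
                                                                          {
                                                                              /* opening a file that doesn't exist fails cleanly, with no leaks */
                                                                              printf("%d\n", process("/nonexistent"));
                                                                              return 0;
                                                                          }
                                                                          ```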

                                                                          I’ve written some C, and even though memory management was a bit of a pain, especially at the start, it wasn’t really that difficult to nail down well enough, at least for an OCD person like me armed with a leak detector. I haven’t written much Zig, but it looks like a very nice experience for somebody who’s written C before.

                                                                          1. 4

                                                                            In this case it may not make a huge difference, because you’re dealing with FFI into another system with its own memory management. Such an integration boundary already requires being mindful about what is allocated where.

                                                                            For example, Rust treats FFI as an unsafe black box and won’t save you from any unsafety or memory leaks at the FFI boundary. Before you get any help from the language you need to manually build a safe non-leaking abstraction.

                                                                            1. 2

                                                                              But in the case of Rust’s unsafe, the FFI is in the other direction, i.e. using FFI in Rust to call out to something external to Rust. In the case of NIFs it’s something external that’s calling in to the Zig code. The expectation and the warning is that this code had better be safe and error-free to the extent possible.

                                                                            2. 1

                                                                              You can compile in release-safe mode, or use the @setRuntimeSafety builtin per block scope if you want more granular control.

                                                                            1. 3

                                                                              Anyway, it seems this new name has offended some people.

                                                                              I think there’s an important difference between someone saying “this offends me!” and “just letting you know, the new name has unfortunate and probably unintended readings”.

                                                                              1. 1

                                                                                I read that and thought: oh no, I hope they weren’t referring to my comment on here.

                                                                                I was not offended, and I think the worst it would do over here is induce giggles.

                                                                                That might be something they want to avoid, or not; totally up to them.

                                                                                1. 2

                                                                                  There were way worse comments than understandable giggles, and I don’t want to go through that again.

                                                                              1. 20

                                                                                I was wondering what happened to pijul.

                                                                                Now I wonder what the new ideas are; I can’t seem to find an overview.

                                                                                A little bit unfortunate naming for my native language where the genitive case for foreign words is typically formed by appending an s.

                                                                                1. 1

                                                                                  English has genitive S too. I wouldn’t worry too much.

                                                                                  1. 1

                                                                                    I know about the English possessive (“Anu’s”); what I mean is an actual genitive case, one that would be used to say “the development of Anu,” “the advantages of Anu,” and such things, where the ending would be just s, not ’s.

                                                                                1. 7

                                                                                  At $666, it’s not exactly compelling.

                                                                                  Sure, it is faster, but I’ll still take the cheap FPGA approach.

                                                                                  PicoSoC on iCE40 all the way. With an iCESugar, it doesn’t break the bank.

                                                                                  I can see the value for people working on porting fundamental software, things like operating systems, programming languages, or C libraries. But that’s about it.

                                                                                  On the bright side, it’s cheaper and better than their previous offering, which was $1k. If this is a trend and it continues, it’ll reach sane specifications and pricing.

                                                                                  1. 4

                                                                                    Pricing of various options for Linux-capable alternative-ISA systems:

                                                                                    • PolarSOC ~$500 USD (600 MHz u540 cores, 2 GB RAM, and you get an FPGA)
                                                                                    • SiFive HiFive Unmatched $665 USD (clock speed unrevealed but likely > 1.5 GHz, 8 GB RAM, PCIe x8, M.2 for network and M.2 for storage, u740 cores; not sure if these are the upgraded ones announced last month or not)
                                                                                    • RaptorCS Thunderbird ~$1400 USD (no RAM included, but 4 fast cores and a probably very loud CPU cooler)
                                                                                    • Orange Crab FPGA (128 MB RAM, limited expansion) $130 USD, ~100 MHz
                                                                                    • ULX3S FPGA (32 MB RAM, some expansion) $130 USD, ~100 MHz
                                                                                    • HiFive Unleashed + Microsemi expansion board (for the same capabilities as the Unmatched) $3000 USD

                                                                                    The new board is interesting, as it’s < 25% of the cost of the previous iteration and pretty turnkey for putting together a system.

                                                                                    Of course, there’s the question of whether to get this one or wait for one more generation (i.e. out-of-order cores, the u840?).

                                                                                    (Edit: Spelling)

                                                                                    1. 1

                                                                                      Unrevealed but clockspeed > 1.5Ghz

                                                                                      What’s your source for this?

                                                                                      ULX3S FPGA (32MB Ram, some expansion) $130USD ~100Mhz

                                                                                      I’ll take that. I actually ordered one, but haven’t received it yet.

                                                                                      1. 2

                                                                                        One of the articles mentioned they thought it would be at least that, given the process they used. My apologies, I should have said that it was expected to be at least that speed.

                                                                                        I am thinking about the ULX3S too, as a slightly cheaper MiSTer, but I like the Orange Crab since it has more RAM.

                                                                                        1. 1

                                                                                          Unrevealed but clockspeed > 1.5Ghz

                                                                                          What’s your source for this?

                                                                                          The u54 runs at 1.5 GHz, and this is supposed to be faster.

                                                                                          1. 2

                                                                                            Clock speed is only one component of performance. The 7-series micro-architecture has a higher IPC, so it’s entirely possible for the chip to be faster, even at a lower clock speed.

                                                                                            We shouldn’t make any assumptions until the speed is actually announced.

                                                                                            1. 1

                                                                                              Well, it turns out it’s 1.4 GHz. FWIW, the reasoning I proposed is what people on Reddit were speculating, but alas, it was not to be.

                                                                                      2. 3

                                                                                        For me it’s interesting because I would like a RISC-V “daily driver,” and this could possibly work. But it is a lot of money.

                                                                                        1. 3

                                                                                          You’re likely better off waiting. The time just isn’t now.

                                                                                          There’ll be more options, faster, cheaper. Possibly even open hardware (this board and chip are not).

                                                                                        2. 2

                                                                                          Oh nice! Are there any more recommendations for tinkering with FPGAs that aren’t too costly?

                                                                                          1. 4

                                                                                            The aforementioned iCESugar is my recommendation for anyone who wants to get started with FPGAs. At peak iCE40 value for the price, it uses the newest iCE40 chip (UP5K, 5K LUTs, icestorm-supported) and provides three double PMODs (one of them shared with the USB port/STM32 serial) and an onboard RGB LED. It embeds an STM32 µC for programming and a serial port on the programming USB port; a separate USB port goes straight into the FPGA. Some jumpers can be removed to reclaim the shared I/Os.

                                                                                            BlackIce MX is a more serious iCE40 devboard (HX4K, 8K LUTs under yosys/nextpnr) with lots of I/Os (including PMODs) and onboard RAM plus an STM32F7, at $65.

                                                                                            ULX3S is a very high-density devboard that uses a relatively powerful ECP5 (~85K LUTs) at ~$130. Amazing, never-seen-before value at that price, but I’m still waiting for mine to ship. As it can fit quite a bit, many MiSTer cores have been ported, including the Amiga implementation (with 020+AGA).

                                                                                            Tang Nano (GW1N-1, an interesting Chinese chip on a ~55 nm process, supported by project apicula, which is less mature than icestorm or trellis) sells for less than $10 shipped (AliExpress). Peak value for the price. It’s the more experimental option, but LUTs, flip-flops, and routing already work with open FPGA tooling (still no hard-block peripherals or dual-port RAM blocks).

                                                                                          1. 4

                                                                                            I hope this submission isn’t too much like an advertisement. I was just excited to hear more about it, ever since it was announced.

                                                                                            1. 2

                                                                                              Indeed, it looks too much like an advertisement. It might have been better to just link the product brief.

                                                                                              1. 3

                                                                                                That would also be too much like an advertisement, in my opinion.

                                                                                            1. 5

                                                                                              I’m not so good at shell, so instead I wrote a little Zig program that prints my prompt, and just have a helper function in shell that passes it anything it needs.

                                                                                              prompt_helper() {
                                                                                                  if test -d .git || git rev-parse --is-inside-work-tree > /dev/null 2>&1
                                                                                                  then
                                                                                                      gs="$(git status --porcelain --branch --ahead-behind)"
                                                                                                  else # remove old status
                                                                                                      unset gs
                                                                                                  fi
                                                                                                  prompt "$1" "$USER" "$(hostname)" "$HOME" "$PWD" ${gs:+"$gs"}
                                                                                              }
                                                                                              
                                                                                              PS1="\$(prompt_helper \$?)"
                                                                                              

                                                                                              I don’t know if that’s a good way to detect a git repo. As I said, not good at shell.

                                                                                              1. 23

                                                                                                There’s also the ninja-compatible samurai. Written in C instead of C++, and significantly less of it.

                                                                                                1. 4

                                                                                                  What makes samurai so much smaller? I’m not familiar with either codebase, but I would guess that Ninja has more complex optimizations for fast startup on massive builds. I vaguely recall reading something about lots of effort going into that.

                                                                                                  I have no bias either way, just curious.

                                                                                                  1. 21

                                                                                                    I would guess that Ninja has more complex optimizations for fast start up for massive builds

                                                                                                    In my personal benchmarks, samurai uses less memory and runs just as fast (or slightly faster) than ninja, even on massive builds like Chromium. If you find a case where that’s not true, I’d be interested in hearing about it.

                                                                                                    As for why the code is smaller, I think it’s a combination of a few things. Small code size and simplicity were deliberate goals of samurai. As a result, it uses a more concise and efficient coding style. It also lacks certain inessential features like running a python web server to browse the dependency graph, spell checking for target and tool names, and graphviz output. samurai also only supports POSIX systems currently, while ninja supports Windows as well.

                                                                                                    In some cases, samurai uses simpler algorithms that are easier to implement. For example, when a job finishes, ninja looks up whether each dependent job has an entry in a map, and if it does, it checks each other input of that job to see if it is finished, starting the job once all of them are ready. samurai, on the other hand, just keeps a count of pending inputs for each job; when an output is built, it decreases that count for every dependent job and starts those that reach 0. This approach is thanks to @orib, from his work on Myrddin’s mbld.
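
                                                                                                    The counting approach might be sketched like this (hypothetical toy code, not samurai’s actual implementation): each job tracks how many of its inputs are not yet built, finishing an output decrements that count in every dependent, and jobs that reach 0 become ready to run.

                                                                                                    ```c
                                                                                                    #include <assert.h>
                                                                                                    #include <stdio.h>

                                                                                                    #define MAXDEPS 8

                                                                                                    /* A build job with a count of inputs that are not yet built,
                                                                                                     * plus the list of jobs that consume its output. */
                                                                                                    struct job {
                                                                                                        const char *name;
                                                                                                        int npending;                     /* inputs not yet built */
                                                                                                        int ndependents;
                                                                                                        struct job *dependents[MAXDEPS];  /* jobs consuming our output */
                                                                                                    };

                                                                                                    /* Called when `done` finishes: decrement each dependent's pending
                                                                                                     * count and collect any job whose last missing input this was. */
                                                                                                    static void finish(struct job *done, struct job **ready, int *nready)
                                                                                                    {
                                                                                                        for (int i = 0; i < done->ndependents; i++) {
                                                                                                            struct job *d = done->dependents[i];
                                                                                                            if (--d->npending == 0)
                                                                                                                ready[(*nready)++] = d;
                                                                                                        }
                                                                                                    }

                                                                                                    int main(void)
                                                                                                    {
                                                                                                        /* "link" depends on two compile jobs */
                                                                                                        struct job link = { "link", 2, 0, { 0 } };
                                                                                                        struct job cc_a = { "cc a.c", 0, 1, { &link } };
                                                                                                        struct job cc_b = { "cc b.c", 0, 1, { &link } };

                                                                                                        struct job *ready[8];
                                                                                                        int nready = 0;

                                                                                                        finish(&cc_a, ready, &nready);    /* link still has 1 pending input */
                                                                                                        assert(nready == 0);
                                                                                                        finish(&cc_b, ready, &nready);    /* link's count reaches 0: ready */
                                                                                                        assert(nready == 1 && ready[0] == &link);
                                                                                                        printf("ready: %s\n", ready[0]->name);
                                                                                                        return 0;
                                                                                                    }
                                                                                                    ```

                                                                                                    Note that a single decrement per dependent replaces ninja’s per-event map lookups and input re-scans, which is part of why the simpler code is also no slower.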

                                                                                                    I have no bias either way, just curious.

                                                                                                    As the author of samurai, I am a bit biased of course :)

                                                                                                    1. 2

                                                                                                      If you find a case where that’s not true, I’d be interested in hearing about it.

                                                                                                      I don’t have any observations. It was just an off-the-cuff guess based on an article I read about ninja’s internals, and the possibility that you may have traded a little performance for simpler code. But on the contrary, your example of a simpler algorithm also sounds more efficient!

                                                                                                      Thanks for the detailed reply. I didn’t even realize ninja had all those extra features, so naturally I can see why you’d omit them. I just installed samurai on all my machines! :)

                                                                                                      1. 1

                                                                                                        samurai also only supports POSIX systems currently, while ninja supports Windows as well.

                                                                                                        Any chance of fixing this and adding Windows support?