It would be nice to list the extensions that these come with. For example, the T-Head cores have a bunch of non-standard extensions that fix the worst flaws in the core ISA.
It certainly would! I’ve grabbed information on extensions where it’s readily available (and prioritised doing so for standard extensions). I can of course point to https://github.com/T-head-Semi/thead-extension-spec - but I’d ideally find a source which confirmed which SoCs with T-Head cores implement which of these extensions (or perhaps the whole set implemented by everything since the Allwinner D1?).
I got my survey of commercially available RISC-V SoCs to the point I was happy enough to share it, and as I hoped I’ve started receiving a steady stream of corrections, additions, or further information. So I’ll be incorporating that.
Otherwise, gaming with my son as well as the usual weekend jobs.
Oooh, that looks really nice, thanks!
It was absolutely astonishing how difficult it was to get basic information about the RISC-V specification implemented by many of these SoCs.
That fits my experience dealing with many hardware vendors in general. You’d think they’d make full datasheets for all their products easily available so that you can actually, you know, see what their product is, but nope, you usually have to at least cough up your email and phone number so their marketing team can hassle you forevermore. Some companies like STM put some actual effort into making this info accessible. Others, like Allwinner and ironically open-instruction-set-poster-child SiFive, make it a lot more painful.
Even then, finding the actual info you want in the datasheet is often challenging. It’s usually there though… somewhere…
I started a blog last year, but predictably didn’t post as much as I’d hoped. I’m trying to start addressing that, and kicked off a new week log today. I’ve got a few other pieces in the works related to RISC-V / WebAssembly / LLVM or other topics, so I hope to make incremental progress lining those up for future weeks.
Also the usual LLVM hacking: hoping to post more of my WebAssembly GC types work for upstream review this week and tick off some ABI-related items on the RISC-V LLVM side.
Helix with multi-megabyte files is unfortunately still very slow, seemingly due to tree-sitter. See this issue (which includes a test case from me).
A multi-megabyte markdown file (I keep notes as I work through the day in a great big file) takes >5 seconds to load and lags upon insert so much it’s unusable. Opens instantly in vim, with no delays when editing.
Graydon Hoare’s feedback on this proposal is worth a read. Excerpt:
IMO this whole effort, while well-meaning, is an unwise direction. Writing two different copies of things when they are as fundamentally different as sync and async versions of a function is not bad. Trying to avoid the few cases that are just block_on wrappers aren’t worth the cost to everyone else by pursuing this sort of genericity. At some point adding more degrees of genericity is worse than partially-duplicated but actually-different bodies of code. This initiative greatly overshoots that point.
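The "just block_on wrappers" case Graydon mentions can be sketched in plain Rust. This is a hypothetical `parse_len` example of mine (not from the proposal), with a hand-rolled minimal `block_on` that is only valid for futures that resolve without real wakeups:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A minimal block_on, sufficient for futures that never need a real waker.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| raw(), |_| {}, |_| {}, |_| {});
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

// The async version holds the actual logic...
async fn parse_len(input: &str) -> usize {
    input.trim().len()
}

// ...and the sync version is just a block_on wrapper: exactly the kind of
// duplication the keyword-generics proposal hopes to eliminate.
fn parse_len_sync(input: &str) -> usize {
    block_on(parse_len(input))
}

fn main() {
    assert_eq!(parse_len_sync("hello \n"), 5);
    println!("{}", parse_len_sync("hello \n"));
}
```

Graydon's argument is that this wrapper pattern is rare enough that adding genericity over sync/async to avoid it isn't worth the complexity.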
The whole Reddit comment section went pretty wild over this one. https://www.reddit.com/r/rust/comments/119y8ex/keyword_generics_progress_report_february_2023/
I’m naive to this problem space, coming from Ruby, which has a global interpreter lock (though I’m very experienced with threading in Ruby, and somewhat with C).
I would say that Graydon’s take seems reasonable: there aren’t a ton of cases where both can easily be represented in the same interface. Still, I think there are cases where authors do want to provide one function that handles both, and I would like it if their lives were easier.
I don’t love having the question mark serve as both the try operator and something else. I like that a question mark now jumps out at me when I read code. In the example, I actually thought at first that the “await?” call was part of the new syntax rather than handling an err. It was a bit confusing.
My original reaction to the example code, and my naive reaction to the problem, was: why not just let “impl async Read” accept a Read by default too? If you can rationalize a sync function as a special case of an async function (one that always completes immediately), then you could treat them the same. That would avoid the need for new syntax.
Thinking about it though, a function author might be okay with that sometimes, but at other times want to force async-only inputs, in the same way they can currently specify they only want a blocking Read. I believe this is what the color problem is about (correct me if I’m wrong).
Again, naively speaking: if async is essentially a behavior (meaning something has an await or async method on it), another reaction was, why not make Async a trait? Then you could require “impl Read + Async”, and also have another trait like “MaybeAsync” indicating there’s a variant of the same type that implements the same interface but async. I’m sure there’s a good reason this doesn’t work, but it’s not obvious to me right now.
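The intuition that a sync function is a special case of an async one (completing immediately) can actually be demonstrated in today's Rust without any new syntax. This is a sketch of mine with hypothetical `sync_len`/`async_len` functions: the lifted future is `Ready` on the very first poll.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

fn sync_len(s: &str) -> usize {
    s.len()
}

// Lifting the sync function: the async body never awaits, so its future
// completes on the very first poll.
async fn async_len(s: &str) -> usize {
    sync_len(s)
}

// A do-nothing waker, just enough to poll a future by hand.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| raw(), |_| {}, |_| {}, |_| {});
    unsafe { Waker::from_raw(raw()) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(async_len("hello"));
    // A single poll suffices: the "async" version behaves like the sync one.
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(5));
    println!("ready on first poll");
}
```

What this doesn't solve is the reverse direction the proposal targets: letting one generic function body accept either kind of `Read` without the caller or author committing to a color.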
I’m working through the Atomics book now and I’ve not written any async code in Rust yet. I’m happy lots of smart people seem to be thinking about these problems. I’m also learning a lot.
I think this work is completely separate to the long-running Python interpreter optimisation work that’s been happening upstream (which saw the specializing adaptive interpreter added in Python 3.11). Lua is of course well known for using a register-based VM (see The Implementation of Lua 5.0).
Note: I’ve lightly adjusted the title, as “A small stack based, written to bring Advent of Code 2022 Day 13 puzzle to the extreme consequences” is missing words, and I felt it lacked clarity. I hope this is in line with the story submission guidance “when the original story’s title has no context or is unclear, please change it.”
I’ve not touched it in a long time, but my entry in this space is sh-todo.
There’s a fairly active debate over on Reddit /r/programminglanguages on the degree to which this technique is useful.
Will the addition of multicore and algebraic effects make OCaml gain more traction?
Where does it stand compared to F#, Haskell, Scala or Rust right now?
Will the addition of multicore and algebraic effects make OCaml gain more traction?
It’s difficult to make predictions, especially about the future, but in my opinion, I very much doubt these things would have any effect on traction. In my experience, people will not use OCaml because they don’t know, or don’t want to know, functional programming, or simply don’t want it for various personal or organizational reasons. It’s almost never about the lack of any particular feature. In fact, people who reject OCaml outright very rarely have any idea what features OCaml has or lacks.
Where does it stand compared to F#, Haskell, Scala or Rust right now?
I am not sure comparing any of those languages makes sense.
F# is .NET tech; yes, you can run it on Linux or whatever, but realistically speaking, if you are not into Windows/.NET you would never use it. Also, F# has the fewest features of the bunch, which is not necessarily a bad thing, but you mentioned fancy features like algebraic effects, whereas F# is at a late-80s level of sophistication. F# doesn’t even have ML-style modules. And btw, this is not me taking a jab at F#: an 80s level of sophistication is great when the industry is still stuck in the 60s. Of course, F# is great if you are stuck in the .NET ecosystem.
Haskell is a non-strict language. OCaml makes it much easier to reason about performance, and to get good performance. Haskell has many features OCaml will never have (too many to list here), but now they have linear functions, and are discussing ways of adding dependent types. Very cool stuff, but unrelated to the reason people use OCaml.
Scala is in the Java ecosystem. Just like with F#, Scala is a non-option if you are not already in the Java ecosystem. And if you are, all the other languages you mention will not be an option, so it’s all moot.
Rust is not a functional language and has a very different applicability domain compared to any of the languages mentioned so far, which makes any comparison difficult and pointless. It’s the only one specifically targeting low-level stuff. Yes, I am very aware of MirageOS; it’s a very niche thing, and has far less applicability than Rust in general. E.g. people use Rust to run code on microcontrollers today. Feature-wise, its type system is the most limited compared to all languages discussed above. I very much doubt Rust will ever get algebraic effects, because algebraic effects are hard without a GC. And I have no idea if OCaml will ever get linear types, or if they even want them. Linear types are difficult to mix with effects.
Algebraic effects and linear types are very different things, but both can be employed to tame IO-effects, and both OCaml (with effects) and Rust (with linear types) do exactly that. So since they solved “the IO problem”, I very much doubt they would be interested in adding more features, especially Rust.
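The Rust side of this can be sketched with ownership: strictly speaking Rust's types are affine rather than linear (values may be silently dropped), but move semantics still tame IO in the way described. This hypothetical `Connection` type is my own illustration, not from any real library; `close()` takes `self` by value, so use-after-close is a compile error.

```rust
// Hypothetical connection handle: close() consumes the value, so the
// compiler statically rejects any use after close.
struct Connection {
    sent: usize,
}

impl Connection {
    fn open() -> Connection {
        Connection { sent: 0 }
    }

    fn send(&mut self, msg: &str) {
        self.sent += msg.len();
    }

    // Taking `self` by value moves the handle into close(); the caller
    // can never touch it again.
    fn close(self) -> usize {
        self.sent
    }
}

fn main() {
    let mut conn = Connection::open();
    conn.send("hello");
    let total = conn.close();
    // conn.send("again"); // compile error: use of moved value `conn`
    assert_eq!(total, 5);
    println!("{total}");
}
```

The gap versus true linear types is that nothing forces you to call `close()` at all; a `Connection` can simply fall out of scope and be dropped.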
For the record, I am working on a language that has both effects and linear types.
For the record, I am working on a language that has both effects and linear types.
Sounds interesting - have you published anything about it so far?
One of the killers of D’s momentum over a decade ago was the conflict between two (three?) possible standard libraries.
While a lot of other problems have been solved, OCaml still suffers from this one. Until the whole mess with multicore/lwt/async sorts itself out (which will also impact any standard library), and the community completely settles on a single standard library, I do not think there is that much hope for increased traction. Also: documentation that isn’t just type signatures, something (especially) many third-party libraries suffer from.
For better or worse, Rust is a better option for the time being, mostly due to better tooling and a vastly larger library ecosystem. I’ve been down this road several times, and ecosystem size almost always wins out, unless you’re tinkering.
Until the whole mess with multicore/lwt/async sorts itself out (which will also impact any standard library), and the community completely settles on a single standard library …
Effects mean you don’t need monadic libraries like lwt and async now, and most “concurrent” code can just be written as plain OCaml, so things will likely converge. Porting code is straightforward and can be done incrementally. See e.g. the migration instructions at lwt_eio (which allows running Lwt code under the (effects-based) Eio event loop).
You can even run both Lwt and Async code together, along with new effects based Eio code, in a single domain. See e.g. https://github.com/talex5/async-eio-lwt-chimera!
For context, u/talex5 is the main dev of Eio. I think a lot of us (OCaml users) are looking forward to a world without monadic IO libraries, and Eio is exactly that, along with clean primitives for byte streams, structured concurrency, etc. If there’s a time where OCaml can gain more interest, I think OCaml 5.0 (or 5.1) is exactly that time.
Catching up with things I’ve been hoping to progress at work, as my son was only in school for half days last week (first week in reception) which was quite disruptive.
I’m hoping to finish off a post for my blog covering what’s new for RISC-V in LLVM 15.
Posting for a couple of reasons:
The baby steps towards enabling buildbots are exciting. Although the need for maintainers to have final approval is obvious, it may come as a surprise just how centralised the current package update process is. Hopefully in the future, fixes or updates to PKGBUILDs can be submitted via pull requests, with CI building and testing the package, pending final approval from the maintainer.
I do actually need a lot of help to pull this off though :)
The latter point is interesting, but I’m of the opinion that the GitLab CI is good enough for linting and testing merge requests. The issue with GitLab CI is how you do something more abstract, like a complete rebuild of readline dependencies, or allow maintainers to build towards temporary staging repositories before doing the main integration work towards testing.
Currently I’m just messing around hoping people get inspired.
Yes, I’d imagine allowing such a flow for the “easy” things first would be the low-hanging fruit. Minor tweaks like version bumps, cherry-picking an upstream bugfix, adding a missing dependency. I don’t think this kind of thing is supported via PRs/MRs right now, but I’d love to be wrong.
I’d guess tooling beyond standard GitLab CI would be needed for larger scale rebuilds - if nothing else, I imagine you’d probably want a bit more gating on the ability to burn that much CPU time. And perhaps those larger rebuilds are too interactive by nature…
It’s weird that this is not mentioned in the Archlinux news, or on Planet Archlinux. I would love these to appear in my RSS feeds.
That’s a good point - it looks like there are some issues with the RSS feed right now. Once that’s fixed, it would be great if it showed up on Planet ArchLinux.
I’ve finally pulled together everything for my blog, Muxup.com. See my post detailing some elements of the implementation (I know, creating a special snowflake static site generator and then writing about it is a bit of a meme at this point….). I imagine I’ll be doing more tweaks over the weekend. I’d really welcome any feedback or suggestions to tweak the design (I’m pretty pleased, but I’m no designer and it’s been a lot of trial and error).
To add to the comment about Helix vs Vim on large files - I’ve found Helix’s tree-sitter to be very slow for large markdown files https://github.com/helix-editor/helix/issues/3072#issuecomment-1207319836
To be fair, Helix uses the same tree-sitter library for markdown that Neovim uses, the one from MDeiml. So I’d assume that if Helix is experiencing slowness while opening large markdown files, so will Neovim.
I’m sure tree-sitter has been a helpful tool for people but in my anecdotal experience, almost all the tree-sitter libraries I’ve used so far (HTML, CSS, shell scripts, markdown) are too buggy to be used as a daily driver.
I’m sure tree-sitter has been a helpful tool for people but in my anecdotal experience, almost all the tree-sitter libraries I’ve used so far (HTML, CSS, shell scripts, markdown) are too buggy to be used as a daily driver.
I’ve also had this experience. It’s not that I can’t use them, but there are weird inconsistencies. For example, tree-sitter’s TypeScript TSX highlights components normally, but highlights regular elements like <p /> the same color as imports (for me it’s red, in Solarized Light, whereas components are normally highlighted blue), which is really odd and makes no sense to me. So now my render functions in React look like some kind of weird American Flag Tribute (never forget in 4 days) to TypeScript.
In terms of a “daily driver”, I still use Vim plugins like vim-javascript and vim-json for highlighting, but tree-sitter has allowed me to remove a ton of syntax highlighting plugins that I just don’t need to have hanging around, which has been beneficial. But yeah, I wouldn’t say it’s ready to replace your entire syntax highlighting setup.
tree-sitter’s TypeScript TSX highlights components normally, but highlights regular elements like <p /> the same color as imports
This is very likely to be an issue with your colorscheme and not with tree-sitter/the grammar. I would be very surprised to hear that tree-sitter/the grammar gives your editor the same tokens for these two very syntactically different constructs.
Off to Brussels tomorrow for a meetup with my work colleagues in the Igalia compilers team. Looking forward to meeting many of my co-workers in person for the first time.
I’m talking at the first Cambridge RISC-V meetup later this week, so doing some work to prepare that.
On the WebAssembly side, I’m about to post an RFC on the approach to representing table types in C/C++ (as was requested on my current draft patch).
Recovering from COVID :/ With a young child at nursery, I suppose the surprising thing is it’s taken so long to catch us.
I’m starting at Igalia today. So a mixture of going through all their internal documentation and onboarding process, getting to know people, and hopefully starting to get up to speed on my first technical contributions (Clang/LLVM and WebAssembly).
I’m fairly focused on the toolchain side initially, but with so much JS engine work going on within the compilers team at Igalia, there should be opportunities to get involved there too later on.
I’m gearing up to start a new job on Monday and now have account access. So now I’ve done an initial pass through the internal wiki+handbook, I’m fighting the urge to spend the rest of the weekend reading up on everything and trying to relax instead.
What’s the story with this extension? The last change was in 2021; is it stuck? Any chance of it getting adopted?
Development is very active, though the key details have stabilised - the MVP description is really the best place to understand the current state of affairs, rather than the overview page that was linked.
GC support is about to ship in an origin trial for Chrome. There’s also a large portion of the spec implemented in JSC (some of my colleagues at Igalia did much of that implementation work).
IIRC it depends on a few other extensions, one of which is reference types, but those have been implemented already - I think that’s really the primary hold-up at this point.