1. 5

    Ugh, I’ve been reading too much about filesystems, so I stumbled over parsing the title. You know: btrfs, zfs, ProoFS

    1. 3

      Time to go outside! Now I think about it, the formally verified file system the seL4 team are/were working on should definitely be called ProoFS.

    1. 4

      For anyone unfamiliar with IHP, it stands for Integrated Haskell Platform, and can probably be thought of as something similar to Rails for Haskell. I haven’t used it myself yet but it looks really promising. For an idea of what working with it looks like (the GUI, which does a lot of code generation for you, etc.), see the intro video: https://youtu.be/UbDtS_mUMpI

      1. 5

        If only they’d support Dragonfly, OpenBSD or even NetBSD, I’d go opnsense.

        FreeBSD is, to me, just a boring BSD that lacks leadership and just follows Linux’s decisions. This is why Dragonfly was forked, and as far as I am aware, nothing’s changed.

        1. 16

          Is it a disadvantage to be boring in the world of firewalls?

          1. 9

            I’m sorry to ask, but what are you talking about? FreeBSD’s development seems to be completely unrelated to Linux’s, and why would I want an OS I rely on for infrastructure to be exciting? FreeBSD seems to have some of the best engineering (read: not hacking together random stuff until they feel it’s time for a release) I’ve seen in any OS, the system is consistent, the documentation is excellent, the system is reliable and provides the features its target users want. The only complaint I have with it as a firewall OS is that they never kept up with OpenBSD’s changes to PF, which means having to know two syntaxes if using both.

            1. 2

              Have you heard of wireguard?

              1. 4

                The highly praised cryptokey routing tunnel system that is available for Linux, MacOS, Windows, Android, iOS, OpenBSD and FreeBSD, as well as having a mostly-portable Go implementation?

                No, tell me more.

                1. 0

                  I assume you are referring to the developer who poorly implemented WireGuard in the kernel, scheduled to ship in FreeBSD 13, only to have it ripped out because major concerns were raised?

                  Your point?

                2. 2

                  what are you talking about?

                  About FreeBSD being an unfortunate choice of an OS for opnsense to run on.

                  FreeBSD’s development seems to be completely unrelated to Linux’s

                  It isn’t, unfortunately. It copied the worst decisions Linux made, such as the fine-grained locks approach to SMP, and the complexity it brings. This is literally the reason Dragonfly exists.

                  For that matter, Dragonfly was relatively recently boasting about the superior performance of its network stack relative to both FreeBSD and Linux. An achievement that has nothing to do with putting more man-hours of effort (Dragonfly has few developers, and FreeBSD is a huge community with significant corporate funding), and everything to do with straight up better engineering.

                  the documentation is excellent

                  Yet FreeBSD is infamously significantly worse than NetBSD and OpenBSD in documentation. It might compete with Dragonfly on that front, but only because Dragonfly is a small team whose effort is focused on actual development.

                  provides the features its target users want

                  Excessively self-fulfilling statement.

                  as a firewall OS is they never kept up with OpenBSD’s changes to PF

                  Is one of many reasons I’d rather use something else as base for a router/firewall.

                  1. 16

                    It isn’t, unfortunately. It copied the worst decisions Linux made, such as the fine-grained locks approach to SMP, and the complexity it brings. This is literally the reason Dragonfly exists.

                    That’s a lot of assertion. FreeBSD uses fine-grained locking because it has good performance and fits well with C. Linux uses a mixture of fine-grained locking and RCU. FreeBSD imported ConcurrencyKit a few years back and so now also has a rich set of lock-free data structures for use in the kernel. Dragonfly pushed in the direction of lightweight message passing, but it’s not clear from any of the concurrency benchmarks that I’ve seen that this was in any way better (in general, I strongly prefer message passing and shared-nothing concurrency, but C is a terrible language for trying to use this kind of model).

                    The reason that Dragonfly exists depends on who you ask. If you ask Matt Dillon, it’s because N:M threading was a terrible idea and 1:1 threading was the right approach. If you ask any of the FreeBSD kernel developers who were around at the time, it’s because Matt Dillon kept committing broken code and shouting at people that they needed to fix the things he’d broken.

                    For that matter, Dragonfly was relatively recently boasting about the superior performance of its network stack relative to both FreeBSD and Linux

                    On what kind of workload? Netflix gets insane performance out of the FreeBSD network stack with large transactions by supporting in-kernel fast-paths for TLS (just the crypto, not the control-plane parts) and things like aio_sendfile. Last time I looked, they were getting around double the per-core performance with TLS that BBC iPlayer was getting from Linux without TLS. At the opposite extreme, Verisign was running half of their root servers on FreeBSD (half on Linux, so a bug in one wouldn’t take out the entire root) and servicing a lot more requests from the FreeBSD ones, particularly the subset of those that were using netmap with an aggressively specialised userspace network stack. Netmap is enabled for pretty much all NICs in FreeBSD; on Linux it’s available as a patch set for Intel NICs. DPDK provides similar abstractions now, but Netmap has been in-tree in FreeBSD for over a decade.

                    FreeBSD was not copying Linux with Jails, it was the first such system to exist in any operating system. Linux now tries to build the same abstractions out of namespaces, cgroups, seccomp-bpf, gaffer tape and string. FreeBSD was not copying anyone with Capsicum, which still provides the best set of abstractions for writing compartmentalised applications of any OS. FreeBSD and Linux were both copying the same systems with the TrustedBSD MAC framework and LSMs, but the FreeBSD version is a lot more flexible.

                    1. 3

                      Speaking of RCU style things, SMR was introduced for memory and is now also used by vfs.

                      1. 1

                        Dragonfly pushed in the direction of lightweight message passing, but it’s not clear from any of the concurrency benchmarks that I’ve seen that this was in any way better

                        Message passing does lead to a more structured design, which has a myriad of benefits. When the difference in terms of development manpower is taken into account, there’s no doubt that the approach Dragonfly took was the better one in hindsight.

                        https://www.dragonflybsd.org/performance/

                        On what kind of workload?

                        The network perf data I was thinking about, took a while to find: https://leaf.dragonflybsd.org/~sephe/perf_cmp.pdf

                        Netflix / Verisign

                        Yes, there is no doubt FreeBSD is better than Linux, but that is a low bar to meet.

                        FreeBSD was not copying Linux with…

                        See above. Linux is a mess and should be the textbook example of how not to do software development. FreeBSD has it beat by actually making decisions on where to go and following through.

                        It is just that they will often blindly go forward with what Linux did. The fine-grained locks over message passing was a fundamental fuckup I simply can’t pretend didn’t happen, and it makes me sad to think about every time FreeBSD comes up.

                        I believe the effort overhead associated with it will in time put FreeBSD (and Linux) very clearly behind Dragonfly in performance/scalability. And that wall will be insurmountable due to fundamental architectural issues, not something that hacking in some optimizations will get anywhere close to breaking.

                        1. 10

                          It must be hard being a contrarian in the face of people who actually do the work. Or not really addressing any of their points.

                          1. 6

                            Message passing does lead to a more structured design, which has a myriad of benefits.

                            Let me fix that for you:

                            Message passing may lead to a more structured design, which might result in a myriad of benefits.

                            1. 2

                              I thought message passing used fine grained locks, just down one abstraction layer
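
A quick way to see that point: here is a toy message-passing channel in Python (purely illustrative — nothing to do with how any of these kernels are actually implemented) whose shared-nothing send/recv interface hides exactly one lock and condition variable underneath:

```python
import threading
from collections import deque

class Channel:
    """Toy message-passing channel: the 'shared-nothing' interface
    is built on a lock and condition variable one layer down."""
    def __init__(self):
        self._items = deque()
        self._lock = threading.Lock()             # the hidden lock
        self._ready = threading.Condition(self._lock)

    def send(self, msg):
        with self._lock:                          # lock taken per message
            self._items.append(msg)
            self._ready.notify()

    def recv(self):
        with self._lock:
            while not self._items:
                self._ready.wait()
            return self._items.popleft()

ch = Channel()
threading.Thread(target=lambda: ch.send(42)).start()
print(ch.recv())  # 42
```

The difference in practice is where the locking is concentrated: one audited spot inside the channel, rather than scattered across every data structure.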

                    1. 1

                      How does this differ to the currying syntax that some schemes provide directly in define or lambda (https://srfi.schemers.org/srfi-219/)?

                      1. 4

                        Yes, a lot of Schemes provide, directly in lambda or define, the ability to make something like this:

                        (define (((foo bar) baz) bax) (+ bar baz bax))
                        

                        That is an amazing construct, I’m into it, I’m happy Scheme can do it.

                        I made a post here about that type of curry a month ago and a comment about that type of curry an hour ago.

                        What I’ve made here is more awesome in the following way.

                        With (define (((foo bar) baz) bax) (+ bar baz bax)), you have to predict the arities exactly.

                        Sure, you can do

                        (map (lambda (proc) (proc 500)) (map (foo 3) '(10 20 30)))
                        

                        and get (513 523 533).

                        But with the define-curry I’ve made here, if you do

                        (define-curry (foo bar baz bax) (+ bar baz bax))
                        

                        Not only does the above map work (and give the same (513 523 533) result). You can also curry any of the arguments: all, none, or just some.

                        (define (((foo bar) baz) bax) (+ bar baz bax)) means that you predict that you are gonna add on one argument first, and then the other, and then the other.

                        Or you might want to take two arguments first and only leave one argument hanging: (define ((foo bar baz) bax) (+ bar baz bax)). Or you might want to take one argument first and leave two arguments hanging: (define ((foo bar) baz bax) (+ bar baz bax))

                        You have to know beforehand. The procedures aren’t flexible at all.

                        With (define-curry (foo bar baz bax) (+ bar baz bax)), you can call foo with any number of arguments and it’ll wait for the rest.

                        You can even use it like any normal procedure and do (foo 10 200 3) right away to get 213. Or you can do (foo 10 200) and wait for the last number, or (foo 10) and wait for the last two numbers. And if you do, you can then give it just one more and keep waiting on that last number. You get a procedure that’s ready for anything!

                        That’s what I mean by arbitrary-arity, level-recursive currying.

                        That’s why my example was:

                        (=
                         (foo 100 20 3)
                         ((foo 100) 20 3)
                         ((foo 100 20) 3)
                         ((foo) 100 20 3)
                         (((foo) 100) 20 3)
                         (((foo 100) 20) 3)
                         ((((foo) 100) 20) 3))
                        

                        All those forms evaluate to the same thing! That’s the baller part! And (((foo) 100 20) 3) also or for that matter (((((foo)) 100)) 20 3).

                        This is great, all those times I’ve used a procedure and been like “aw, man, I wish this was curried!”

                        If it had been curried (using the SRFI-219 style curries), that would’ve gotten obnoxious quickly too, because I would’ve had to add in those extra parens every time I didn’t need the curry feature. I would’ve had to always write ((((foo) 100) 20) 3) or whatever the procedure-writer predicted I was gonna need to write.

                        That’s why this is awesome.
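
For readers more at home outside Scheme, the same arbitrary-arity behaviour can be sketched in Python. define_curry here is a hypothetical name for illustration, not the post’s actual macro:

```python
import inspect

def define_curry(fn, arity=None):
    """Return a version of fn that accepts any number of arguments
    per call, accumulating them until the full arity is reached."""
    if arity is None:
        arity = len(inspect.signature(fn).parameters)
    def curried(*args):
        if len(args) >= arity:
            return fn(*args[:arity])          # enough args: call through
        return lambda *more: curried(*args, *more)  # else keep waiting
    return curried

foo = define_curry(lambda bar, baz, bax: bar + baz + bax)
foo(100, 20, 3)      # 123
foo(100)(20, 3)      # 123
foo(100, 20)(3)      # 123
foo()(100)(20)(3)    # 123
```

Every partial application just closes over what it has so far, which is why all the call shapes collapse to the same result.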

                        Now to the bonus episode. The define-curry I made is great when defining new procedures but what about all the backlog of existing procedures? Like + and print and list. I don’t wanna be bothered to re-define all of them with define-curry, I want to be able to use them and curry them as they are. That’s why I made 🍜 (yeah, combinators and operators can be or contain emoji in Chicken Scheme).

                        I just prefix any function with 🍜 and it gains one level of arbitrary arity curry. (🍜 list 1 1 2 3 5) becomes a procedure that waits for more arguments to add to the list, so

                        ((🍜 list 1 1 2 3 5) 8 13 21 34)
                        

                        (1 1 2 3 5 8 13 21 34).

                        This is worse than the define-curry I made at the top of the post in the sense that it only gives me one level of calls / parens. If I need more levels, I need to add more 🍜.

                        ((🍜 (🍜 list 1 1 2 3 5) 8 13) 21 34)
                        

                        (1 1 2 3 5 8 13 21 34).

                        On the other hand, one part of the 🍜 combinator that is better than my define-curry is that it can handle procedures that themselves are already arbitrary-arity, like list and + and string-append. The define-curry only works with procedures that have a specific number of arguments. (In the case of the foo example, three arguments and it always adds three numbers.) Also, again, that it works on already existing procedures, not just newly defined ones.
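
The one-level behaviour of that combinator is close to what functools.partial gives you in Python; a rough analogue, where listing is a stand-in for Scheme’s variadic list:

```python
from functools import partial

def listing(*xs):
    """Stand-in for Scheme's variadic `list`."""
    return list(xs)

# One level of currying over a variadic procedure: partial defers
# the call until more arguments arrive, like prefixing with the
# combinator.
grow = partial(listing, 1, 1, 2, 3, 5)
grow(8, 13, 21, 34)           # [1, 1, 2, 3, 5, 8, 13, 21, 34]

# Nesting partials adds one more level each time, like stacking
# the combinator:
partial(grow, 8, 13)(21, 34)  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Like the combinator, this handles already-variadic functions, but each extra level of deferral has to be written explicitly.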

                        1. 3

                          This is a cool post on multiple levels. As an ocaml fan, it’s fun reading about currying in Scheme. And kudos for 🍜 as an identifier! Functional programming, food, and puns: three of the good things in life.

                          1. 2

                            It feels like The Little Schemer for 2021.

                          2. 2

                            I don’t understand why the ramen emoji was chosen when there’s a perfectly good curry with rice emoji 🍛

                            1. 1

                              Because I didn’t know about it! Post now updated to use 🍛, thank you!♥

                              Trying to make that change over SSH (which I failed to do; I had to get up and go to the desktop) reminded me that it’s probably better to use some other identifier for this combinator. Maybe just a letter, k or h or c or something. I originally wanted to make read-syntax, some special kind of parentheses or something for this.

                        1. 6

                          Well that was disappointing. They called the event “spring loaded” and didn’t introduce anything that’s literally spring-loaded like a foldable phone would be…

                          1. 11

                            They called the event “spring loaded”

                            It’s both spring, and 4/20, so I’m sure there were some “loaded” people somewhere out there watching at the very least.

                            1. 4

                              Do people actually use foldable phones? (I have the same question about iPad Pros too.)

                              1. 2

                                I saw a picture on reddit yesterday of someone using a fullscreen folding phone (flip-phone style), complaining about the ads they get in the built-in weather app. So, I guess the answer is at least one person does…

                                1. 1

                                  I see them used by hosts on shows sponsored by Samsung ;)

                                2. 1

                                  Do people actually use foldable phones? (I have the same question about iPad Pros too.)

                                  I can answer for the second part with a yes, indeed. Best device I’ve ever owned.

                              1. 17

                                I used to use SQLite all the time for geospatial data using the SpatiaLite extensions, it made dealing with data in many different formats much easier (and scriptable) and just simplified a lot of the work we had to do to manage weird datasets on nationalmap.gov.au (/renewables, sadly recently made defunct due to lack of government funding).

                                We’d pretty regularly get CSVs with columns for LAT and LON, and need to do some actual work with the data, or turn it into another format like GeoJSON. Or we’d get a bunch of GeoJSON data that we wanted to manipulate.
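
That CSV-to-GeoJSON step can be sketched with nothing but the standard library (a toy example with made-up data; SpatiaLite or ogr2ogr is what you’d reach for in practice):

```python
import csv
import io
import json

# Hypothetical CSV with LAT/LON columns, as described above.
raw = "NAME,LAT,LON\nSydney,-33.87,151.21\nPerth,-31.95,115.86\n"

features = [
    {
        "type": "Feature",
        # GeoJSON coordinates are [longitude, latitude], in that order.
        "geometry": {"type": "Point",
                     "coordinates": [float(row["LON"]), float(row["LAT"])]},
        "properties": {"name": row["NAME"]},
    }
    for row in csv.DictReader(io.StringIO(raw))
]
geojson = {"type": "FeatureCollection", "features": features}
print(json.dumps(geojson, indent=2))
```

The win with SpatiaLite was doing this and the spatial queries in one scriptable place, instead of a pile of one-off converters.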

                                Probably the most useful thing I did (but have probably lost access to, sadly) was sticking the Australian Government’s Geocoded National Address File (GNAF) into a SpatiaLite database which we could use to geocode addresses, display known addresses, etc. It handled the rather large quantity of data well (though IIRC generating spatial indexes for the several million locations took a while). Having the indexes meant we could do things like find all addresses which mapped to identical or nearby locations.

                                All this is pretty basic, and could be done in plenty of tools, but it ended up being the one that was most flexible, allowing us to do more than we could in QGIS and without having to buy licenses for ArcGIS. Importantly it was also as powerful, at least as far as we needed, as PostGIS which we used for serving much of the data. Anything we knew would be static was simply put in SpatiaLite databases and handed to GeoServer.

                                1. 1

                                  This is a great illustration of why geospatial data must move past shapefiles and other clunky formats to SQLite+SpatiaLite and GeoPackage (which is SQLite+SpatiaLite, configured). Having all your data in a geospatial-aware database lets you easily use GeoJSON as the transfer format in between, but also lets you just drop GeoServer/MapServer in front of the databases and provide WCS/WFS APIs essentially for free. QGIS can also easily be used as a viewer for a local or remote database with PostGIS or SpatiaLite, or as an editor.

                                  There is so much room for the geospatial/GIS field to simplify data formats and get rid of legacy formats like shapefiles, which are a pain to work with.

                                1. 4

                                  it may replace GNU Coreutils

                                  Is this going to result in a 4MB ls command? I still don’t totally understand how Rust publishes shared libraries and integrates with shared libraries without shims via crates. It still seems like Go: packaging every dependency together, like a system-tool version of Java.

                                  The other major point of concern should be licensing. clang + llvm + rust base tools means getting away from the GPL. Will we see commercial Linux distributions in the future with no real free equivalents; where only the kernel is released and none of the underlying tooling? The Darwin/BSD of Linux distros?

                                  1. 3

                                    Don’t we see the latter already? Oracle Linux et al? And isn’t Darwin an example of why Linux itself would remain free? Also I don’t think using clang, llvm, etc. will change much about a project’s license. The reverse, where GCC is used, also didn’t seem to have a huge effect on BSD licensed code.

                                    Also I don’t think the Go / Java comparison is fair, because with Java you still need to install Java itself, which itself pulls in a huge amount of third party software as dependencies.

                                    Also, static linking is possible in C as well, and dynamic linking is possible in Go by now and, I think, in Rust? If that’s what you meant.

                                    I still agree on file sizes though. And then your Docker base images will be gigabytes. ;)

                                    1. 3

                                      I’m not sure this is much of an issue. Open/FreeBSD are licensed with similar licenses to most of the rust ecosystem, but there aren’t really fully commercial versions of these. Besides, nobody’s stopping you from writing GPL code in Rust, it’s just that MIT is a more common license.

                                      1. 1

                                        There are certainly commercial systems based off FreeBSD; IIRC Sony has been using it as the base for PlayStations for years. There are also several storage and firewall vendors who’ve built their commercial systems on FreeBSD, not to forget Darwin/macOS itself using much of the FreeBSD userland.

                                      2. 3

                                        Will we see commercial Linux distributions in the future with no real free equivalents; where only the kernel is released and none of the underlying tooling? The Darwin/BSD of Linux distros?

                                        It’s called Android and it’s the most widely deployed Linux distro. Okay, AOSP exists, and you can build it, but most Android software depends on various Google proprietary services.

                                        1. 1

                                          By “commercial” I assume you mean non-free or proprietary? There are already many commercial distros and used to be many more :)

                                        1. 1

                                          I remember when this or some similar discussion came up years ago, with lots of claims that there were things only possible in concatenative/stack languages, and that languages like Haskell in particular could never do the same things. The comments on r/Haskell had a well-typed implementation of the whole post in only a few dozen lines, and because of the types it was much more difficult to write incorrect code. IIRC they even had nearly identical syntax, with the addition of begin and end keywords to inject a () into the start of the process. Looks like one of the posts was https://www.reddit.com/r/haskell/comments/ptji8/concatenative_rowpolymorphic_programming_in/ which has this quote from #haskell

                                          <roconnor> @quote stack-calculator
                                          <lambdabot> stack-calculator says: let start f = f (); push s a f = f (a,s); add (a,(b,s)) f = f (a+b,s); end (a,_) = a in start push 2 push 3 add end
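
The continuation-passing trick in that quote translates almost mechanically into Python, though Python needs explicit call parentheses where Haskell uses juxtaposition. A sketch:

```python
# Each "word" receives the stack built so far plus the next word,
# and hands the updated stack on, mirroring the #haskell one-liner:
#   start f = f (); push s a f = f (a,s); add (a,(b,s)) f = f (a+b,s)
def start(f):
    return f(())                      # seed with the empty stack ()

def push(s):
    return lambda a: lambda f: f((a, s))   # push a onto stack s

def add(stack):
    (a, (b, s)) = stack
    return lambda f: f((a + b, s))    # pop two, push their sum

def end(stack):
    a, _ = stack
    return a                          # unwrap the final result

start(push)(2)(push)(3)(add)(end)     # 5
```

The types in the Haskell version are what make it hard to misuse; this Python rendering keeps the plumbing but not that guarantee.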
                                          
                                          1. 1

                                            Reminds me of f-script back in the day https://en.wikipedia.org/wiki/F-Script_(programming_language)

                                            1. 8

                                              You should learn a new language only if it teaches you something along the way. Some languages can be put in a “bag” of similar patterns:

                                              • Python/Javascript/Ruby/PHP/Perl; common scripting language spirit
                                              • C/C++/Java/Go; compiled in spirit, feel “not modern”, quite low level
                                              • Clojure; has my preference to have fun while learning new concepts along the way, should make you quite productive
                                              • Haskell; if you prefer to never finish your work but learn a lot of things along the way and feel good about yourself because you mastered the beast (partially)
                                              • Rust; if you still like to learn a low-level imperative language, but with a much better compiler that checks common mistakes for you, so your CRUD API will probably be a lot faster and use a lot fewer resources than one in Python
                                              1. 2

                                                if you prefer to never finish your work

                                                I’m not sure this is fair, I’ve been developing using Haskell for about 6 years now and we’ve completed plenty of projects. In fact, one of Haskell’s best features is the ability to fearlessly refactor; when new requirements come up, we just make the obvious change and the compiler guides us to all the places we forgot about. This also makes experimentation very cheap, but only once you’ve learned the language enough to be proficient and know the patterns to use and avoid to help the compiler help you (a big one for me is always using total case statements, unless it’s a situation where you know there’s a single thing you want to match and everything else is invalid).

                                                1. 1

                                                  Learning Haskell to the level of pragmatic productivity for an average software developer sometimes literally takes years.

                                                  Not a single experienced Haskellite would argue with that sentence. Anyone who would probably hasn’t ventured deeply enough into the quite complicated ecosystem of Haskell.

                                              1. 13

                                                Give Ada a shot. It’s used in high-reliability contexts, is ISO-standardized, has been in development since the ’70s and constantly updated, and had a pointer ownership model long before Rust came along and claimed to have invented it. Admittedly, it’s not as “cool” and your code looks “boring”, but it is very readable. The type system is also very strong (you could, for instance, define a type that can only hold primes or a tuple type that can only contain non-equal tuples) and even though Ada is OOP, which I generally dislike, they’re doing it right.

                                                Additional bonuses are a really strong static analyzer (GNATprove, which lets you statically (!) verify there are no data races or exceptions in a given codebase, based on the SPARK subset of Ada) and parallelism and concurrency built into the language (not some crate that changes every week).
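
There’s no direct mainstream equivalent of Ada’s predicate-carrying types (like the primes-only type mentioned above), but a rough runtime analogue in Python might look like this — with the big caveat that Ada/SPARK can discharge many such checks statically, which this cannot:

```python
from dataclasses import dataclass

def is_prime(n: int) -> bool:
    """True iff n is prime (trial division; fine for small n)."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

@dataclass(frozen=True)
class Prime:
    """A value that is checked to be prime on construction —
    a runtime stand-in for an Ada type with a prime predicate."""
    value: int

    def __post_init__(self):
        if not is_prime(self.value):
            raise ValueError(f"{self.value} is not prime")

Prime(7)      # fine
# Prime(8)   # raises ValueError
```

The Ada version moves this kind of invariant into the type itself, so violations surface at analysis time rather than in production.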

                                                Many claim that Ada is dead, but it’s actually alive and kicking. Many people are using it but are just not that vocal about it and just get work done. As we can already see in this thread alone, the Rust evangelists love to spread their message, but if you ask me, Rust is doomed due to its non-standardization.

                                                You wouldn’t build your house on quicksand (Rust), but bedrock (Ada).

                                                1. 4

                                                  How is the web stack stuff in Ada? Database access? It seems very interesting, but the ecosystem might not be in place for this specific application at least.

                                                  1. 3

                                                    Learning Ada at university for a concurrent and parallel systems course, and real-time and embedded, showed me that C being the default for those domains really was a mistake. Ada has a very expressive concurrency model, I haven’t seen anything like it anywhere else (I love Haskell’s concurrency features equally, but they are very different). The precision you can express with Ada is amazing; the example in our real-time course was defining a type which represented memory-mapped registers, could precisely describe what every bit would mean, in one of (IIRC) 8 alternative layouts depending on what instruction was being represented, and the type could be defined to only exist at the 8 memory locations where these registers were mapped. To do the same in C requires doing things which can only be described as hacks, and doesn’t tell the system important things like: never allocate these addresses, they’re used for something already. The world has lost a lot by not paying more attention to Ada and hating on it without knowing the first thing about it.

                                                    1. 2

                                                      Ada looks really interesting to me because of all the checks you do at compile time, ensuring your program is correct before even running it. It’s also much more readable than something like, say, Rust. I would love something in the middle of C and Ada, with lots of compile time checks and the flexibility of C.

                                                      1. 6

                                                        After two days of kicking the tires on Ada, I’ve had nearly every opinion I had about it broken in a good way. I’m baffled I’m already productive in a language that feels like I’m being paid to write extra words like a serial fiction writer, but every so often, there’s some super useful bug-preventing thing I can do in a few lines of Ada which would be prohibitive or impossible to do in other languages (e.g. dynamic predicates, modular types).

                                                        The compile time checks it uses by default are in the vein of “if it compiles, it probably works”, like Haskell or Rust. Within the same program you can turn on more intricate compile time/flow checks by annotating parts of your program to use SPARK, which is a subset of Ada and can coexist in the same project with your other code. The best way to describe it is almost like being able to use extern C within C++ codebases, or unsafe blocks in Rust, to change what language features the compiler allows. Except code seems safe by default, and SPARK is “this part is mission critical, and is written in a reduced subset of the language to assist verification”: e.g. functions must be stateless, checking of dependencies between function inputs/outputs, etc.

                                                        1. 4

                                                          Let’s say I’m sold on this: what’s the best way to learn ada for - say - writing a web app or doing etl?

                                                          1. 3

                                                            I would do much like I do for any other language, throw some terms into Google and go from there. I’d download GNAT, play with some toy programs and maybe try out Ada Web Application.

                                                            1. 1

                                                              Right, I threw in some search terms but I was wondering if you had any insights beyond that. In particular, how do people discover ada packages?

                                                              1. 2

                                                                I’m sorry, I didn’t know if you were being sarcastic. Sigh, the state of the internet these days.

                                                                Honestly, I have no idea. I’m just googling around trying to figure stuff out and this language feels like crawling into the operator seat of an abandoned earthmover and wondering, “What does this lever do?” I used to work on ships with life-or-death systems and Ada feels much along these lines and industrial (as from an industrial manufacturing or maritime environment, not a bureaucratic, office, or software based one). They don’t use a tool because it’s popular, they use it because it does the right thing within the technical specs, can be easily documented, and prevents mistakes, because people’s lives depend on it.

                                                                1. 3

                                                                  To answer my own question, it looks like there is a beta package manager and index that provides a jumping off point to find stuff https://alire.ada.dev/search/?q=Web

                                                                  1. 2

                                                                    Neat! I hadn’t found that yet.

                                                                    I poked around a bit last night and found that the Adacore Github account has a lot of things like unit testing (AUnit), an Ada language server, and a lot more than I thought would be there. My first major gripe is that gnattest isn’t part of GNAT community, and the AUnit links were broken, but I finally found it on that account. I still need to crawl through how the build system works and such if you’re not going through a package manager.

                                                            2. 1

                                                              If you have access to an Oracle installation, dive into your org’s stored procs. PL/SQL is Ada with a SELECT statement.

                                                            3. 3

                                                              Thanks for your detailed elaboration which I can only agree with!

                                                              And on top of all those guarantees and safeguards, you can easily write parallel code using tasks, which are ingrained into the language. I find it truly remarkable that, because of this, it actually makes sense to write web applications in Ada.

                                                            4. 6

                                                              Have you tried D? It works as a flexible language that can look like Python, yet you can tighten it up with annotations (const, safe, …), and the metaprogramming can do plenty of compile-time checks, like bounded numbers.

                                                              1. 2

                                                                I have thought about D, but it seems to fall right in the middle of lower and higher level languages. It lacks a niche, as I see it.

                                                                I might be wrong though, I have yet to try it after all.

                                                                Update: I checked it out, and it actually seems really interesting. I’ll try it out tomorrow.

                                                              2. 3

                                                                I had the same thought as you when I first looked into Ada, thinking that it may provide safety but at the cost of losing closeness to the machine. However, you can get really close to the machine (for example bit-perfect “structs” and types). On the other hand, yes, if you add too much of the flexibility C provides, you end up with possible pitfalls.

                                                                1. 1

                                                                  That’s Pascal.

                                                              1. 15

                                                                I’m a fan of Rust, and I think it’s a reasonable choice for a personal project language. It’s more or less as suited for writing a CRUD webapp as any of the dozen or so popular general-purpose programming languages in widespread use. It’s an increasingly popular industry language, so knowing how to write code in it has a good chance of improving your marketability as a professional programmer. The language semantics and tooling do a lot of innovative things that, in my opinion, many other mainstream programming languages don’t (ML-style types, memory safety in a GC-less context, cargo and related build tooling).

                                                                I might also suggest writing it in Haskell, if you’ve never touched Haskell or a similar functional programming language before. It’s a genuinely different way of thinking about programming from more mainstream languages.

                                                                1. 3

                                                                  I had thought about suggesting Haskell, it’s the language I would absolutely choose to solve this problem in (whip up some Servant web API types, chuck in some postgresql-simple/beam/selda/whatever the new hotness for DB access is, parsing is trivial in Haskell, etc.) but there’s a lot to learn before you can get something useful done in your first project - it’s pretty simple to cargo cult a python web app, it’s not so easy to do in Haskell because the language is much more strict about what is acceptable (at the benefit of allowing you to be much more expressive about what you think should be acceptable, and having the compiler tell you when you got it wrong).

                                                                  By all means, OP should learn Haskell, but it’s not really the right choice for a one-off project like the one described. But then again, no language worth actually learning is; I’d say the same about Rust, C++, and (probably) Julia too.

                                                                1. 1

                                                                  Just a quick update: they’ve looked into and verified the issue and have pushed out a fix. They will continue to investigate and will provide updates as they have them.

                                                                  :)
                                                                  
                                                                  1. 3

                                                                    Can you stop saying this and make some kind of actual commitment that you will actually protect the privacy of your users? This is a massive violation of privacy, and more needs to be done to prevent it happening again. I’m glad that all my data in B2 (several TB) is only accessed via Arq and I almost never use the web UI, but others definitely won’t be that lucky, and now Facebook knows possibly extremely sensitive data about your users. John’s going to be very upset to find out that Facebook knows he has HIV Positive Test Results - John Doe.pdf in his bucket.

                                                                  1. 16

                                                                    Along with the effort of rewriting, there’s also distrust of new models or rewrites of existing ones. There are decades of work in the literature publishing and critiquing results from existing models, and there is…none of that for anything new. It’s much less risky, and thus easier to accept, to port an existing model to a new HPC platform or architecture than it is to adopt a new one.

                                                                    Additionally, HPC is a weird little niche of design and practice at both the software and hardware levels. There are a few hundred customers who occasionally have a lot of money and usually have no money. Spinning that whole ecosystem around is difficult, and breaking a niche off the edge of it (if you decided to take climate modelling into Julia without also bringing petrochemicals, bioinformatics, automotive, etc) is a serious risk.

                                                                    1. 7

                                                                      When NREL finally open sourced SAM (GitHub), one of the standard tools for calculating PV output based on location and other factors, a friend of mine decided to take a look at the code. On his first compilation it was clear no one had ever built it with -Wall: it had thousands of warnings. When he looked more closely he could tell it had been translated, badly, from (probably) MATLAB, and had many errors in the translation, like (this is C++)

                                                                      arr[x,y]
                                                                      

                                                                      to access a 2D coordinate in arrays. For anyone playing at home, a,b in C++ means “evaluate a, discard the result, then evaluate b and return its result”, so this code was indexing by only the y coordinate.

                                                                      This would be fine if it were undergrad code, but this code had been around for a very long time (decades?), had dozens of papers based on it, and plenty of solar projects relied on it for estimating their ROI. I bring up this anecdote as a counterexample: the age of these libraries and programs does not mean they are high quality, and in fact their age lulls people into a false sense of trust that they actually implement the algorithms they claim to.

                                                                      He’s since submitted many patches to resolve all the warnings and made sure it compiles with a number of compilers, but I wonder how valid the results over the years actually are. Maybe they got lucky, and it turns out the simplified model they accidentally wrote was sufficient.

                                                                      1. 1

                                                                        All governments take actions because of these models. They affect the lives of every person on the planet and future generations to come. The “if it’s not broken, don’t fix it” approach doesn’t fit here. Rewriting these models could be done for the cost of a few international conferences, flights, and accommodations for participants.

                                                                        critiquing results from existing models

                                                                        The models should be scrutinized, not the results they give?

                                                                        1. 33

                                                                          Rewriting these models could be done for the cost of a few international conferences, flights, and accommodations for participants.

                                                                          Rarely have I read something so optimistic.

                                                                          1. 14

                                                                            As someone who has interacted with the people who write these models, “optimistic” is putting it lightly. I think whoever believes that rewriting a bunch of Fortran will be productive is entirely underselling both Fortran and the effort that has gone into making Fortran super fast for simulations.

                                                                            Rewriting this stuff in JavaScript isn’t realistic, nor will it be fast. And any rewrite is going to have the same problem in 50 years. What then, rewrite it again? How do you know it’s the same simulation and gives the same results?

                                                                            Sometimes I think we computer programmers don’t really think through the delusions we tell ourselves.

                                                                            1. 4

                                                                              But by rewriting we may check whether the implementation follows the specification. See this as a reproducibility issue: do you recall when a bug in Excel compromised thousands of studies? And by not changing anything, we may find ourselves in a situation where no one knows how these models work, nor is able to improve them. Something similar to the banking-and-COBOL situation, but much worse.

                                                                              1. 5

                                                                                The “specification” is “does this code give the same results as it always has”. HPC isn’t big on unit testing or on other forms of detailed design.

                                                                                1. 4

                                                                                  Isn’t that a problem? How do we know, then, that they follow the peer-reviewed papers they were supposed to follow?

                                                                                  1. 7

                                                                                    In general, we don’t know, and in the abstract, that’s a problem (though specifically in the case of weather forecasting, “did you fail to predict a storm over this shipping lane” is a bigger deal than “did you predict this storm that actually happened for reasons that don’t stand up to scrutiny”, and many meteorological models are serviceably good). There are recent trends to push for more software engineering involvement in computational research (my own PhD topic, come back to me in three years!), and for greater discoverability and accessibility of source code used in computational research.

                                                                                    But turning “recent trends to push” into “international shift in opinion across a whole field of endeavour” is a slow process, much slower than the few international flights and a fancy dinner some software engineers think it should take. And bear in mind none of that requires replatforming anyone off of Fortran, which is not necessary, sufficient, desirable, feasible, or valuable to anyone outside the Rust evangelism strike force.

                                                                          2. 8

                                                                            Most climate models are published in papers and are widely distributed. If you would like to remake these models in other languages and runtimes, you absolutely could (and perhaps find gaps in the papers along the way, but that’s a separate matter.) The problem is, getting the details right is very tough here. Is your library rounding in the same places, and in the same ways, as the previous one? How accurate is its exponential arithmetic? What’s the epsilon you need to verify against to be confident that the results are correct?

                                                                            The article links CliMA as a project for Julia based climate models, but remember, most scientific computing libraries use Fortran one way or another. We’re just pushing the Fortran complexity down to LAPACK rather than up into our model code. Though that’s probably enough to greatly increase explainability, maintainability, and iterative velocity on these models.
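
                                                                            For a feel of what “verify against an epsilon” typically means, here is one common shape for such a check, a relative-tolerance comparison, sketched in Haskell. This is illustrative only, not taken from any particular model’s test suite, and the tolerance is always a per-problem judgment call:

                                                                            ```haskell
                                                                            -- Relative-tolerance comparison: two results count as "the same"
                                                                            -- if their difference is small relative to their magnitude.
                                                                            -- The max 1 term keeps the check sensible near zero.
                                                                            approxEq :: Double -> Double -> Double -> Bool
                                                                            approxEq eps a b =
                                                                              abs (a - b) <= eps * max 1 (max (abs a) (abs b))

                                                                            main :: IO ()
                                                                            main = do
                                                                              print (approxEq 1e-9 0.1 (0.1 + 1e-12))  -- True: within tolerance
                                                                              print (approxEq 1e-9 1.0 2.0)            -- False: genuinely different
                                                                            ```

                                                                            Comparing a rewrite against a reference run with a check like this is the easy part; choosing eps so it catches real translation errors without flagging benign rounding differences is the hard part.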

                                                                            1. 4

                                                                              “If it’s not broken, don’t fix it” approach doesn’t fit here.

                                                                              What are you even trying to say here? If it’s not broken we should rewrite it because… reasons? It’s not broken, so why would we waste public money rewriting it when we could use that money to further improve its ability to help us?

                                                                              1. 2

                                                                                Rewriting these models could be done for the cost of a few international conferences, flights, and accommodations for participants.

                                                                                The article is low on these specific details and I’m not too familiar with climate modelling but I bet that the models aren’t just fancy Excel sheets on steroids – there’s at least one large iterative solver involved, for example, and these things account for most of the Fortran code. In that case, this estimate is off by at least one order of magnitude.

                                                                                Even if it weren’t, and a few international conferences and flights were all it took, what actual problem would this solve? This isn’t a mobile app. If a 30-year-old piece of code is still in use right now, that’s because quite a stack of taxpayer money has been spent on solving that particular problem (by which I mean several people have spent 3–5 of their most productive years solving it), and the solution is still satisfactory. Why spend money on something that has not been a problem for 30 years instead of spending it on all the problems we still have?

                                                                              1. 5

                                                                                That’s termination-free iteration, not recursion.

                                                                                Recursion must have a base case

                                                                                1. 2

                                                                                  The base case is “are you bored yet?”

                                                                                  1. 1

                                                                                    Was about to say this but you beat me to it!

                                                                                  2. 3

                                                                                    It took me far too long to get this…

                                                                                    Relatedly, I also like this post about recursion.

                                                                                    1. 2

                                                                                      I’m on to you (ಠ_ಠ)

                                                                                  1. 15

                                                                                    Haskell is awesome, but like most languages there is cruft and legacy to be avoided. Haskell has a standard library known as base which unfortunately has a fair amount of unsafe or unperformant functions included. As such I went with an alternative standard library relude that builds on and improves base. On top of this, there are many core libraries that are not part of the standard library I wanted to use and have nice patterns around.

                                                                                    Maybe I’m spoiled working in the .NET ecosystem, but large amounts of the standard library being unusable shouldn’t be considered par for the course when working with a language. This reminds me of PHP.

                                                                                    Parsing Libraries […] Why is this nice in Haskell? The ‘monad’ abstraction is excellent for dealing with code with a lot of failure conditions (ie. parsing) and avoids ‘pyramid of doom’ type code. Haskell worked out really well in this key area.

                                                                                    The author doesn’t really explain how monads help with parsing, but I’ve written parsing logic in C# that didn’t result in a “pyramid of doom”. I don’t think the pyramid of doom is a language issue, but an architecture issue.

                                                                                    Compile Times.. Were Fine

                                                                                    I thought I’d call this out as it is a common complaint I see around Haskell. […] Compile dependencies from scratch Time: 17m44s

                                                                                    A 17 minute compile time is considered fine? Even the 1 minute development build time seems slow for such a small project.

                                                                                    Between the compile time, the library issues, and the issues with the official standard library, I don’t understand how the author can claim that Haskell and its ecosystem are production-ready. Production-ready isn’t a term that means someone, somewhere, has used it in production. It’s also important to understand that not only the language and runtime need to be ready for production use, but also the ecosystem. Without all three being stable, other languages become the wiser and more logical choice.

                                                                                    I like Haskell. I’ve built several hobby projects in it. I think it’s a great language to learn to extend your development skills and think about problems in different ways. But never would I consider pushing it to be used in most production applications as it stands right now. For most companies, using it right now would be a mistake.

                                                                                    Five years ago, when I first started working in it, I had hoped the language and ecosystem would mature to a point where it could be recommended for most serious projects, but seeing the lack of progress since then, I’ve set my sights on other functional languages like F#.

                                                                                    1. 11

                                                                                      Maybe I’m spoiled working in the .NET ecosystem, but large amounts of the standard library being unusable shouldn’t be considered par for the course when working with a language.

                                                                                      The .NET standard library also has the same kind of unsafe and unperformant functions the author is talking about, the type of runtime safety he’s talking about just isn’t really a focus when working with .NET. He’s referring to functions like head which throw an exception if passed an empty list instead of returning a Maybe a, and I think we can agree that there are plenty of .NET functions that throw exceptions on invalid input.
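
                                                                                      A minimal sketch of the distinction being described (safeHead is an illustrative name here; relude spells it viaNonEmpty head, and the safe package calls it headMay):

                                                                                      ```haskell
                                                                                      -- base's head is partial: it throws on an empty list, and nothing
                                                                                      -- in its type ([a] -> a) warns you. A total version makes the
                                                                                      -- failure case visible in the type instead:
                                                                                      safeHead :: [a] -> Maybe a
                                                                                      safeHead []      = Nothing
                                                                                      safeHead (x : _) = Just x

                                                                                      main :: IO ()
                                                                                      main = do
                                                                                        print (safeHead [1, 2, 3 :: Int])  -- Just 1
                                                                                        print (safeHead ([] :: [Int]))     -- Nothing
                                                                                      ```

                                                                                      The difference from .NET is less about whether partial functions exist and more about whether the type system forces the caller to handle the empty case, which is exactly what alternative preludes lean into.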

                                                                                      The author doesn’t really explain how monads help with parsing, but I’ve written parsing logic in C# that didn’t result in a “pyramid of doom”. I don’t think the pyramid of doom is a language issue, but an architecture issue.

                                                                                      I agree that the author greatly exaggerated how parsing code in non-functional languages usually turns out, but if you haven’t written a parser in Haskell or using something like FParsec in F#, it really is worth it to see how nice a more functional parser library is to work with. Definitely one of the places Haskell shines.

                                                                                      A 17 minute compile time is considered fine? Even the 1 minute development build time seems slow for such a small project.

                                                                                      Agreed, Haskell build times are bad. Hopefully they’ll get there someday.

                                                                                      1. 3

                                                                                        The .NET standard library also has the same kind of unsafe and unperformant functions the author is talking about, the type of runtime safety he’s talking about just isn’t really a focus when working with .NET. He’s referring to functions like head which throw an exception if passed an empty list instead of returning a Maybe a, and I think we can agree that there are plenty of .NET functions that throw exceptions on invalid input.

                                                                                        Right, but even the community explicitly mentions not using base in Haskell. With .NET that is not the case. It’s an important distinction; the community expects the .NET stdlib to be “first class” in .NET development, the Haskell community does not have the same expectation with base.

                                                                                        1. 3

                                                                                          Right, but even the community explicitly mentions not using base in Haskell.

                                                                                          Maybe? Basically all Haskell code is still written targeting base. People have feelings about things like head and maximum and foldl, but in practise it’s at most a “maybe avoid those” tip from people with those feelings.

                                                                                          1. 3

                                                                                            Right, but that adds a barrier to entry for new folks. There are good reasons not to use Prelude (and the OP’s article brings up some of these good reasons, imo), but if I were a new programmer learning Haskell, it would be yet another thing I need to learn and understand. Why should I use an alternate Prelude? When? Which Prelude is better than the others? Haskell has long prioritized progress and innovation over production stability, which I think is very reasonable for a language like Haskell but isn’t always the best fit for being productive.

                                                                                            Edit: FWIW I think developing in Haskell is great (oh and thanks so much for working on JMP @singpolyma!). I’m just trying to say I’d love to see these barbs in the ecosystem fixed a bit. Good on the Haskell Foundation for pushing this forward, and I’d love to see more production-oriented advice (and reports, like the OPs) with Haskell.

                                                                                      2. 5

                                                                                        The author doesn’t really explain how monads help with parsing,

                                                                                        That’s true,

                                                                                        but I’ve written parsing logic in C# that didn’t result in a “pyramid of doom”

                                                                                        But it’s worth reading up on the monadic parsing approach before declaring an opinion on the matter. The search term you want is “parser combinators”. It really is a fantastic approach.
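
                                                                                        To give a concrete feel for the approach, here is a deliberately tiny hand-rolled parser type; a real project would reach for a library such as megaparsec or attoparsec, but the shape is the same. Each step either consumes some input or fails, and a failure at any step short-circuits the rest of the parse, so there is no nesting at all:

                                                                                        ```haskell
                                                                                        import Data.Char (isDigit)

                                                                                        -- A parser takes input and either fails, or produces a value
                                                                                        -- plus the remaining unconsumed input.
                                                                                        newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

                                                                                        instance Functor Parser where
                                                                                          fmap f (Parser p) = Parser $ \s -> do
                                                                                            (a, rest) <- p s
                                                                                            pure (f a, rest)

                                                                                        instance Applicative Parser where
                                                                                          pure a = Parser $ \s -> Just (a, s)
                                                                                          Parser pf <*> Parser pa = Parser $ \s -> do
                                                                                            (f, s')  <- pf s
                                                                                            (a, s'') <- pa s'
                                                                                            pure (f a, s'')

                                                                                        instance Monad Parser where
                                                                                          Parser p >>= f = Parser $ \s -> do
                                                                                            (a, s') <- p s
                                                                                            runParser (f a) s'

                                                                                        -- Match one exact character.
                                                                                        char :: Char -> Parser Char
                                                                                        char c = Parser $ \s -> case s of
                                                                                          (x : rest) | x == c -> Just (x, rest)
                                                                                          _                   -> Nothing

                                                                                        -- Match one or more digits and read them as an Int.
                                                                                        digits :: Parser Int
                                                                                        digits = Parser $ \s -> case span isDigit s of
                                                                                          ("", _)    -> Nothing
                                                                                          (ds, rest) -> Just (read ds, rest)

                                                                                        -- Parse "(12,34)". A failure at any step makes the whole
                                                                                        -- parse Nothing; no error-handling pyramid needed.
                                                                                        pair :: Parser (Int, Int)
                                                                                        pair = do
                                                                                          _ <- char '('
                                                                                          x <- digits
                                                                                          _ <- char ','
                                                                                          y <- digits
                                                                                          _ <- char ')'
                                                                                          pure (x, y)

                                                                                        main :: IO ()
                                                                                        main = do
                                                                                          print (runParser pair "(12,34)")  -- Just ((12,34),"")
                                                                                          print (runParser pair "(12;34)") -- Nothing
                                                                                        ```

                                                                                        The real libraries add error messages, backtracking, and performance, but the flat do-block style of pair is the whole point of the monadic interface.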

                                                                                        1. 2

                                                                                          For what it’s worth, I wasn’t really declaring an opinion. I was just saying that if the only thing you’re attempting to avoid is a pyramid of doom, there are ways to do that in other languages. I haven’t looked at Haskell parsing logic, but it sounds like there are some interesting advantages it offers.

                                                                                          1. 1

                                                                                            There are various ways in other languages to avoid the ‘pyramid of doom’, but they are much more situational. For example, Ruby has &. for null-safe access, i.e. person&.pet&.num_legs, which avoids the pyramid. However, they are very specific and break down quite easily.

                                                                                            Say you want person&.pet(if pet_type == 'cat')&.is_purring, which should return null if any of these objects are null (or the pet is not a cat), and true/false if they are not null and the pet is a cat. You can probably do something like this in Ruby, but Haskell, via ‘monad’ (and friends), has highly generic and composable techniques for this that are very clean to use.

                                                                                            I’m glossing over ‘monad’; it is not complex, but you need to learn the prerequisites before it makes sense. Actually, I thought https://www.youtube.com/watch?v=J1jYlPtkrqQ was a pretty good shotgun explanation if you are somewhat familiar with Ruby.
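
                                                                                            For comparison, that same chain falls directly out of the Maybe monad in Haskell; the Person/Pet types below are hypothetical, just mirroring the Ruby example:

                                                                                            ```haskell
                                                                                            -- Hypothetical domain types for illustration.
                                                                                            data Pet    = Pet    { petType :: String, purring :: Bool }
                                                                                            data Person = Person { pet :: Maybe Pet }

                                                                                            -- The Ruby person&.pet(if pet_type == 'cat')&.is_purring, written
                                                                                            -- as a chain in the Maybe monad: the first Nothing anywhere
                                                                                            -- aborts the whole computation.
                                                                                            isPurring :: Person -> Maybe Bool
                                                                                            isPurring person = do
                                                                                              p <- pet person
                                                                                              if petType p == "cat"
                                                                                                then pure (purring p)
                                                                                                else Nothing

                                                                                            main :: IO ()
                                                                                            main = do
                                                                                              print (isPurring (Person (Just (Pet "cat" True))))  -- Just True
                                                                                              print (isPurring (Person (Just (Pet "dog" True))))  -- Nothing
                                                                                              print (isPurring (Person Nothing))                  -- Nothing
                                                                                            ```

                                                                                            The generic part is that the same do-notation works for any monad, not just Maybe, which is why the parsing case and the null-safety case end up looking identical.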

                                                                                            1. 2

                                                                                              person&.pet(if pet_type == 'cat')&.is_purring

                                                                                              This looks more like a place where I’d use a prism; in Haskell with lens I imagine it’d be person ^? pet . asCat . isPurring :: Maybe (). I’d like to know how this is a parsing issue; it looks more like an object traversal problem.

                                                                                              1. 1

                                                                                                Yes, not a parsing problem. I was just trying to explain how Haskell avoids the pyramid-of-doom problem in a general way that is very flexible, whereas other languages’ solutions are very specific and situational.

                                                                                        2. 3

                                                                                          What progress were you hoping for but didn’t see? The Haskell Foundation was created to work on ways to improve the Haskell ecosystem and I’m always looking for more improvement ideas myself.

                                                                                          1. 2

                                                                                            It’s worth stressing that that 17m compile time is a one-off cost. GHC is of course an incremental compiler, so when developing you only pay for the graph of modules affected by a change, which is significantly quicker: on the order of seconds, not minutes.

                                                                                            1. 1

                                                                                              I don’t think the pyramid of doom is a language issue, but an architecture issue.

                                                                                              I haven’t seen it come up in the context of parsing but instead in chaining together asynchronous operations in languages that lack coroutines, so it’s most common in JS where nearly all IO is forced to be async. In C# you can just do normal IO so it’s not surprising it doesn’t come up. (or maybe because C# has coroutines? honestly not that familiar with it)

                                                                                            1. 9

                                                                                              This is great news; it’s always great to see the OpenBSD devs working to support new hardware.

                                                                                              Is there a list of OSes which are known to boot on the M1? I’d love to see an isapplesiliconready-style site for OS support.

                                                                                              1. 2

                                                                                                I studied under Prof. Paar, one of the authors of the textbook you linked. Amazing material and great didactics, but mostly introductory.

                                                                                                Follow it at your own pace (it was one year at uni, but I expect it could be done faster), and after that I’d recommend self-study and programming with the Cryptopals challenges.

                                                                                                1. 1

                                                                                                  His lectures are available online for free, and they’re probably the most approachable and comprehensive ones I’ve seen: https://youtube.com/channel/UC1usFRN4LCMcfIV7UjHNuQg

                                                                                                1. 3

                                                                                                    A few years ago I went to a talk by Uncle Bob at a conference, where he was pushing a developer pledge which included committing to using TDD in all projects. As a Haskell developer, my immediate response was “but what about types?” Since then I’ve worked at a few places where testing was done quite well, and I have somewhat changed my view on testing, but I still think that TDD is too much and, as the article says, locks down the code too soon and makes refactoring painful.

                                                                                                    I’ve realised that my goal when writing tests for Haskell code is to first write a failing test when a bug is found, and then change the types so the bug is impossible to write in the first place, ideally to the point where the test itself can no longer be written. This isn’t always possible, and in many cases the outer edges of the code are where I want tests: parsing external input is usually of the form String -> Either Error SomeWellFormedType, and I want tests to ensure that all the strings I expect to parse do, and the ones I expect not to don’t. Within the app itself, though, working to make impossible states unrepresentable is often not too hard.
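                                                                                                    As a rough sketch of that parsing boundary (the Port type and parsePort are made-up examples for illustration, not from any real codebase):

                                                                                                    ```haskell
                                                                                                    import Text.Read (readMaybe)

                                                                                                    -- A hypothetical well-formed type. In a real module the constructor
                                                                                                    -- would not be exported, so a Port can only be built via parsePort,
                                                                                                    -- and an out-of-range port is unrepresentable downstream.
                                                                                                    newtype Port = Port Int
                                                                                                      deriving (Show, Eq)

                                                                                                    -- The only edge that needs tests: untrusted String in,
                                                                                                    -- well-formed type (or error) out.
                                                                                                    parsePort :: String -> Either String Port
                                                                                                    parsePort s = case readMaybe s of
                                                                                                      Nothing -> Left ("not a number: " ++ s)
                                                                                                      Just n
                                                                                                        | n >= 1 && n <= 65535 -> Right (Port n)
                                                                                                        | otherwise            -> Left ("out of range: " ++ show n)
                                                                                                    ```

                                                                                                    The tests then only need to cover parsePort (valid, invalid, and boundary strings); everything past that edge takes a Port and can never be handed a bad value.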

                                                                                                  1. 2

                                                                                                        I’m so keen to see what Apple does with the second- and third-generation chips. It feels to me like this first generation was intentionally held back in specs to get the machines into the hands of the people (devs) who really need them, while hopefully staying unattractive to most buyers. The results so far are pretty astounding, and Rosetta 2 being more than usable is a huge win. I hope Apple gives a talk about how it works one day (once it stops being a killer app).