It took me several reads of that README to really get a feel for eio. I think there are a couple of discussions on discuss.ocaml.org that really help in understanding the model and API decisions.
A discussion that largely focuses on capabilities in eio: https://discuss.ocaml.org/t/eio-0-1-effects-based-direct-style-io-for-ocaml-5/9298
An update with links to common libraries with eio backends; in particular, the cohttp port is worth reading as it’s pretty straightforward. https://discuss.ocaml.org/t/update-on-eio-effects-based-direct-style-io-for-ocaml-5/10395
Materialized views are one of the few places I feel Oracle DB still leads, and that’s a scary comment. I would really love for PostgreSQL to adopt something similar to the REFRESH ON COMMIT you can utilize.
TIL! This would be a powerful feature if PostgreSQL adopted it. More people would quickly adopt materialized views; most folks use them sparingly because of the complexities of stale data.
How does the REFRESH ON COMMIT work in Oracle DB? Does it just reload the delta, or is the entire materialized view refreshed?
There is ongoing work on incrementally maintained views in Postgres (see pg_ivm on GitHub). It uses triggers and “for each statement” delta tables.
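The trigger-based idea can be sketched in miniature. This toy uses SQLite through Python’s sqlite3 module purely as a stand-in (the `orders`/`order_totals` names are invented, and pg_ivm’s actual machinery with statement-level delta tables is more sophisticated than this row-level trigger): an AFTER INSERT trigger keeps a summary table in sync so reads of the “view” are never stale.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer TEXT, amount INTEGER);

-- The "materialized view": per-customer totals, maintained eagerly.
CREATE TABLE order_totals (customer TEXT PRIMARY KEY, total INTEGER);

-- Apply each inserted row's delta to the summary table immediately.
-- (A statement-level approach would batch these into a delta table.)
CREATE TRIGGER orders_ins AFTER INSERT ON orders
BEGIN
    UPDATE order_totals SET total = total + NEW.amount
     WHERE customer = NEW.customer;
    INSERT INTO order_totals (customer, total)
    SELECT NEW.customer, NEW.amount
     WHERE NOT EXISTS
           (SELECT 1 FROM order_totals WHERE customer = NEW.customer);
END;
""")

con.execute("INSERT INTO orders VALUES ('alice', 10)")
con.execute("INSERT INTO orders VALUES ('alice', 5)")
con.execute("INSERT INTO orders VALUES ('bob', 7)")

print(con.execute(
    "SELECT customer, total FROM order_totals ORDER BY customer").fetchall())
# [('alice', 15), ('bob', 7)]
```

The summary is always consistent with the base table within the same transaction, which is the same guarantee REFRESH ON COMMIT aims for, at the cost of trigger overhead on every write.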
It’s been ~20 years since I worked with an Oracle system, so I’m mostly working off of documentation. But there are several options to do incremental and partial rebuilds. They all have different requirements on the underlying materialized view, indices, change tracking… But you can have it update just the affected values in the view once you understand those requirements.
If you meet the DDL requirements you should be able to just use the REFRESH FAST ON COMMIT clause.
I’ve had good luck so far with using generated columns for cases where I want an always fresh materialized view. It wouldn’t work across tables though.
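As a sketch of that pattern (shown with SQLite via Python’s sqlite3; the `GENERATED ALWAYS AS ... STORED` syntax is close to Postgres’s, and the `line_items` table is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE line_items (
    qty        INTEGER NOT NULL,
    unit_price INTEGER NOT NULL,
    -- Recomputed automatically on every insert/update; never stale.
    total      INTEGER GENERATED ALWAYS AS (qty * unit_price) STORED
)""")

con.execute("INSERT INTO line_items (qty, unit_price) VALUES (3, 5)")
con.execute("UPDATE line_items SET qty = 4")
print(con.execute("SELECT total FROM line_items").fetchone()[0])  # 20
```

The limitation the comment mentions is visible here: the generating expression can only reference other columns of the same row, so this can’t replace a materialized view that joins across tables.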
I truly hope 2023 is the year we see OSS funding and support models evolve. I would love it if we could form some foundations so that developers of highly depended-upon code (npm packages, gems, or other) could receive some form of income, if they desire it.
I’ve tried and failed a couple of times at writing the leftPad proof in mCRL2. It’s just awkward to perform this sort of proof when the entire concept is communicating processes. All that said, reading this yesterday motivated me to write a small spec of a single process that I think can fit all the criteria. It made me realize some limitations of mCRL2 I don’t often hit.
To maintain compatibility with FreeBSD (although there is no business need for it), we switch our dev env from Linux to FreeBSD regularly. The backend is in Java, so this is basically a non-issue. However, setting up Postgres on FreeBSD is different from Linux, and that has hindered the ‘seamless switching’.
Looking into docker, I realized that it does not work for FreeBSD (or other BSDs for that matter) – and that’s the reason why docker was not considered for PG setup, in our case.
I have not found a docker-compatible PG install/mgmt tooling on freebsd. Otherwise, it would have helped a lot.
Looking into docker, I realized that it does not work for FreeBSD (or other BSDs for that matter) – and that’s the reason why docker was not considered for PG setup, in our case.
There is work underway to fix this. There are a bunch of layers here:

- The runtime (e.g. runc) manages a tangle of cgroups, namespaces, and so on. runj uses jails on FreeBSD, but it’s not really ready for widespread use. Or, wasn’t last time I tried it. It looks as if a load of changes landed in the last few weeks though.
- containerd, which manages snapshots (on FreeBSD, it can use ZFS), fetching images, and so on. I believe the latest version supports FreeBSD, but the container image spec doesn’t yet cover everything that it would want for defining FreeBSD containers.
- moby. There are alternatives such as Buildah. These talk to containerd via a control interface and do some things. moby almost worked with FreeBSD last time I tried it but it wasn’t very reliable. It made a bunch of assumptions about the specific CNI plugins that work on Linux that didn’t work on FreeBSD.

The FreeBSD Foundation is currently hiring (or has just hired, not sure what the status is) a Go developer to work on improving this tooling.
Unfortunately, the Docker tooling is not very usefully modular if you want to replace the lower-level parts of the stack. Even switching from Debian to Arch, for example, requires modifying the Dockerfiles to specify different base layers. I think Buildah might be better here because it replaces Dockerfiles with shell scripts that run commands in a container and so is able to more easily add conditional execution.
Thank you for the insightful reply. Mimicking interfaces across deep layers (FreeBSD jails, bhyve, networking) such that the whole ‘user-visible’ Docker ecosystem ‘just works’ is an arduous undertaking.
It shouldn’t be that bad, I hope. The OCI container infrastructure is pretty modular, with separable pieces for the runtime, image management, and networking. Getting these things to work with FreeBSD jails (and, I hope, bhyve) involves, among other things, a containerd shim that uses jails (OpenBSD has one that uses their hypervisor to run Linux VMs; I hope FreeBSD will also get one that can use bhyve for Linux and FreeBSD VMs) and networking support. I suspect the last one will be the most complex because FreeBSD makes a bunch of assumptions about how jails are mapped to networks that may be less flexible than the OCI container model expects.
I have not found a docker-compatible PG install/mgmt tooling on freebsd. Otherwise, it would have helped a lot.
I’ve been using CBSD to manage this and the workflow is pretty smooth, but perhaps under-documented. The basic workflow is described in a nice article which you can use with the correct cbsd form.
https://freebsdfoundation.org/wp-content/uploads/2022/03/CBSD-Part-1-Production.pdf
https://github.com/cbsd/modules-forms-postgresql
It’s been problem free and I quite like the mix of TUI and shell scripting.
Thank you. From what I understood about CBSD, it cannot use Docker image definition files. I would not mind executing cbsd-compose up if that’s the only thing we needed to change, but it seems that CBSD would require separate definitions.
True, you need your own definitions, but at least personally I find the CBSD approach comforting in that it just uses Puppet/Chef/Ansible for the configuration. I know those tools and enjoy a lot of their features that aren’t available in the Docker world. I will admit, if you only use CBSD for development, that effort is likely not worth it. I’ve taken to just running a Docker machine in bhyve for work that uses Docker, but I’m not completely happy with it.
Everything I’ve read in this is great; I don’t know if I’ve ever read more than a blog post covering any of these topics before. I highly recommend reading the emails section if you read just one; it got a real big smile out of me. https://un.curl.dev/emails
I was just coming to post this!
I know people who have been waiting for this for years - in fact I randomly overheard complete strangers in a local pub complaining about waiting for the release to happen a year or two ago! (This probably says more about the nerdiness of my local area than anything else…)
Hopefully not too off topic, but I’ve had this exact conversation in an Irish pub in Chicago; makes me wonder if we crossed paths. I have been waiting to see this code base for a lot of reasons, specifically the parser and literate style. Off to go review the structure :)
A very cool thing, and I especially love the test data. But I do worry about people who are learning, or don’t have enough knowledge, using this with private keys. I’m not sure there is a good way to communicate that on the site, unfortunately. It took me a couple of reads of the site before I found the:
(While supported, uploading private keys is obviously discouraged for production keys.)
While this question is on the better side of interview questions, I think it’s important to understand why: it helps drive productive conversation between the interviewer and interviewee.
Personally I’m not a fan of coding-challenge style interviews, or of taking the interviewee any further outside their daily comfort zone than an interview already does. I personally like to talk about past projects / personal projects until I get a sense of greater interest in some topic. I then ask questions to gain some minimal level of understanding, and ask for some change to that system. My hope is that this shows the candidate in their best light, puts them in a comfort zone (they know more about the problem than I do), and helps surface ideas they held while they worked at a previous company or on personal projects. Something about this always felt right, and the response seems to be very positive from those on both sides of it.
Joe Armstrong is really a treasure trove of fun, interesting, and accessible ideas. Of all the projects I remember really looking forward to, one sadly never was realized: UBF. It was the first time I remember seeing a contract that so succinctly described an event system. Having lived in the event-sourcing world, I sometimes wonder how much a contract like this would help me grok new parts of a system.
Stored procs can have their uses, but there are definitely costs to using them as well. In my experience the hardest part of developing with them is debugging and error messages when they don’t function as expected. SQL is a fine set language, but I don’t think any of the flavors works well with procedural logic, especially as the complexity grows. That complexity is generally most visible in the parameters of the stored procedures for inserts and updates… it’s not uncommon to have stored procedures with 15+ parameters, and it’s not fun to review client calls like that. The other major issue I’ve dealt with is change control: deployments and rollout of features are much harder when your application storage engine also holds your application logic. Do I release a new named version of my stored procedure and then release my services pointing to the new version of the proc? Do I update it in place? Will one service run a migration and update my stored proc for all services?
All that said, using stored procs to tightly control the actions of a client can be a big security boon. Also, if you need the most performance possible, stored procs will likely get you there.
I think it is very important to look at the author here, and the intended audience. I think Sivers has two distinct things going for him: a minimalistic product vision and a minimal “team”.
The author is focused on simplicity and minimalism, doing the least amount of work to get the job done. Given that, I doubt he’d ever have a stored procedure with 15+ parameters.
And the other part is connected too: he’s often either working alone or has a very small team (at least that’s my understanding). So even if one more complicated proc sneaks in, it’s probably not overly complicated, and if it is, he probably wrote it anyway, so he would likely manage with smaller risk than your usual 6-person team.
Fair points, and I’m not very familiar with any additional context of the author, so hopefully I’m not sounding too dismissive. I’m not against using stored procs or the pursuit of minimalism, but I hope to add some details on why stored procs are often avoided.
RE: the 15+ parameters: assuming you use stored procs for inserting or updating data, this is very hard to avoid as a project grows. T-SQL provides table-valued parameters to help bring more structure to this, but at least as far as I know, PL/pgSQL in Postgres doesn’t have any way to support that.
Love the addition of exhaustive checks in case statements. After learning OCaml, it’s something I’ve wanted in every language I touch. Nicely done.
The entire Bolt compiler series is a fantastic resource. The writing is clear and it does a great job of being “just real enough” to go beyond the usual clean surface.
I feel attacked, amused, and saddened all at the same time. Well done.
To nitpick the satire, I do believe you should at least start with an understanding of how you could scale. I’ve been at the tail end of more than a few large systems where session management and single stateful processes limited not only scaling but reliability. The horrors of single process failures would eventually stop all user actions, because there could be only one! stay with me
I wish I had a good generic answer for how to scale, but it’s an “it depends”, and there is no one-size-fits-all response.
That’s not quite true… the generic answer is spewing cash into consultants until the magic happens.
The tricky bit is the marginal cost per unit scale starts absurdly high and only gets worse. :P
The horrors of single process failures would eventually stop all user actions, because there could be only one! stay with me
Not sure if serious but topically relevant article.
I’ve slowly been migrating all my home lab management toward this same method. It’s amazing how simple and powerful the jails system is once you get through the basic setup. If you just want to get up and running, I’ve also found https://cbsd.io/ to be a really nice way to manage jails.
There’s also pot, which now has a small ecosystem of off-the-shelf recipes for building jails.
I keep meaning to spend some time with pot. The integration with Nomad as an orchestrator opens a lot of use cases for the jails system.
Klara Systems just released a blog entry describing this; I’ve submitted a link here at lobste.rs: https://lobste.rs/s/14mhbn/cluster_provisioning_with_nomad_pot_on
I’ve been using iocage to automate some of the tedium, but wow cbsd looks pretty nice—and it works with bhyve as well (which I’ve previously been handling manually). I might look at migrating.
I’ve been meaning to create an ansible role (as it’s the only automation tool I’m comfortable with) for iocage but have been postponing it for a long time. How did you automate iocage in this case?
Oh, sorry, I could have been clearer. I didn’t automate iocage. I just meant that I’m using iocage rather than manually setting up jails, as it handles some of the repetitive and tedious tasks of doing so. I’ve considered using ansible as well, but to be honest, if I were to go down that route, I think I’d prefer to ditch iocage and set up ansible roles for jails directly rather than adding yet another layer of abstraction.
In any case, I’m planning on upgrading the main machine in my homelab sometime this year, and I’m not yet 100% decided on whether I’ll stick with FreeBSD. I’m currently looking at Fedora CoreOS or possibly even SmartOS.
I recently came across https://github.com/alcestes/mpstk, and found the above talk really interesting for a couple of reasons.
There’s no overhead compared to calling regular Erlang code, and the generated APIs are idiomatic Erlang.
Any Gleam package published to Hex (the BEAM ecosystem package manager) now includes the project code compiled to Erlang. Once the various build tools of the other languages have been updated, they will be able to depend on packages written in Gleam without having the Gleam compiler installed.
We want to fit into the wider ecosystem so smoothly that Erlang and Elixir programmers might not even know they’re using Gleam at all! All they need to know is that the library they are using is rock-solid and doesn’t fail on them.
This is awesome! It’s really a shame that Elixir never did this. All the Elixir libraries are really painful to use in Erlang projects. I intentionally use Erlang for the open source libraries I’ve created so they are usable from all languages built on the VM.
You feel exactly the same way I do! There’s so much good stuff in Elixir I wish the other languages could use.
What are some other notable languages on the BEAM runtime? I’m only aware of Elixir, Erlang, and Gleam.
Elixir and Erlang are the big ones, but the others that get used in production, that I’m aware of, are LFE, Purerl, Hamler, and Luerl.
It’s really a shame that Elixir never did this. All the Elixir libraries are really painful to use in Erlang projects.
Aside from needing an Elixir compiler in all cases, and not being able to use Elixir macros (not Elixir’s fault!), where is the pain point? I am on the Elixir side 99.9% of the time so I don’t see it.
There are a few things: you need the Elixir compiler and standard library available, and calling Elixir from Erlang reads awkwardly, e.g. 'Elixir.Project.Module':'active?'(X). These are not huge barriers, but if you use a library written in Erlang (or, in future, Gleam) you don’t have to worry about any of this. You add it to your list of deps and that’s it.
I think you missed my point. It’s troublesome when you want to use Elixir libraries from Erlang, or from any other language on the Erlang VM. The overhead and hassle of retrofitting an existing project with support for Elixir compilation makes it not worth the effort. Thus I stick with Erlang because I need to write code that can be used from any language on the VM.
It could be (and probably is) pretty easy to use one of the existing rebar3 plugins for using Elixir from an Erlang project. Take https://github.com/barrel-db/rebar3_elixir_compile as an example integration. I haven’t used it, but it looks pretty straightforward. I still wonder how any configuration would be wired together, but that should be pretty reasonable to write if needed.
Beyond the build system, I imagine Elixir code doesn’t feel ergonomic in other languages. Its function argument ordering differs from a lot of functional languages, library APIs make pervasive use of structs, the use of capitalization for modules doesn’t read well, and the constant prefix of 'Elixir.AppModule':some_function() all make it less appealing. All together it’s just enough hassle to use outside of Elixir, and often that bit of work plus the usability issues isn’t worth it.
Interesting! Didn’t know GitLab flavored Markdown supported that (and more).
https://docs.gitlab.com/ee/user/markdown.html#diagrams-and-flowcharts
Gitea also supports mermaid diagrams.
GitHub has a lot more social features. I’ve had a couple of projects on GitHub get issues and pull requests with no marketing, so people are finding and using things.
I’ve considered if I should set up mirrors of my GitLab projects on GitHub for the marketing effects.
The social features are one of my biggest turn-offs, but you’re not the first to voice that opinion.
I pretend that I don’t care about the social features. I really like that you can follow releases and the reaction option is kinda nice (so people can voice their support on new releases, without any comment noise).
I don’t follow anyone, because that’s just adding a ton of stuff into my feed. But honestly it makes me happy when somebody does give me a “star” and I think it’s ok to have this vague indicator of credibility for projects.
But I do actually search on github first when I’m looking for some kind of software I don’t find directly by using DDG. So the network effect is definitely there. Same goes for inter-project linking of issues or commits. And I won’t be surprised if moving my crates.io source to gitlab would decrease the amount of interaction as much as moving it to my private gogs/gitea instance.
I’m curious: what are the social features? I’ve used GitHub since it came out, but have never specifically noticed this.
Follows, watches, stars, networks; there might be more. GitHub has been on my radar since it came out, and these have long annoyed me. I think one of their early slogans was “social coding”, and it was irritating then. Some people really like it though.
For me it was largely about the culture of my workplace switching, shortly followed by me switching companies.
Personally, if I were to start again, I think I would use GitLab again. While occasionally their solutions lack the polish and simplicity of GitHub, the “whole package” of their offerings is really nice. The only consistent downside is the performance of certain UI elements.
I think (hope) the techniques of formal methods can be used to better describe systems of systems. Having a specification of how a set of services interact, with properties I could actually prove about those interactions, has long been a dream of mine.
One aspect of this that I think needs better tooling and exposure is creating more meaningful simulations. I imagine other approaches might work, but my personal experiences with using formal methods in organizations end with “that’s magic, I don’t trust it”, “I don’t understand the presentation of this”, or “I think the journey is more important than the outcome”. The few breakthrough moments I’ve had are with simulations [1]. I think the simulation approach gives a wider audience and more meaning to any investment in a specification.
[1] https://runway.systems/ or https://www.mcrl2.org/web/user_manual/index.html
Who’s got two thumbs and just received his copy of Modeling and Analysis of Communicating Systems?
this guy
That’s an interesting idea. I’ve purely looked at it from an ordering of events or actions. If you have any recommended toolsets or articles about that, I would love to try them out.
PRISM introduced probabilistic model checking. There’s a lot of literature, tutorials, and case studies associated with it: https://www.prismmodelchecker.org/
Before I tried expect tests, I was very skeptical of the workflow. However, it really is the most productive workflow I’ve found. It’s very close to a REPL-driven workflow that just happens to be repeatable.
That said, I find that over time I often replace my expect tests with model or property testing, because I’ve found lots of small, simple assert tests are much harder to maintain over the life of a project.
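That migration can be sketched with a hand-rolled example (plain Python here; real projects would reach for tools like OCaml’s ppx_expect plus a property-testing library, or Hypothesis in Python; the `normalize` function is invented for illustration). An expect test pins one concrete output, while property tests state invariants checked over many generated inputs:

```python
import random

def normalize(xs):
    """Toy function under test: sorted, de-duplicated copy of xs."""
    return sorted(set(xs))

# Expect-style test: pin down one concrete input/output pair.
# Easy to write (run the code, paste the output back in), but every
# pinned snapshot must be updated whenever behavior changes.
assert normalize([3, 1, 3, 2]) == [1, 2, 3]

# Property-style tests: state invariants and check them over many
# random inputs. Fewer, more durable tests as the project evolves.
rng = random.Random(0)
for _ in range(200):
    xs = [rng.randint(-5, 5) for _ in range(rng.randint(0, 10))]
    out = normalize(xs)
    assert out == sorted(out)          # output is sorted
    assert len(out) == len(set(out))   # no duplicates
    assert set(out) == set(xs)         # same elements as the input

print("all checks passed")
```

The expect test is great while exploring; once the behavior settles, the three invariants survive refactors that would invalidate dozens of pinned snapshots.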