Excellent! I love that it’s enabled flakes by default. The choice of Rust over shell in this case is pretty choice, too. Mac ships an appallingly old
bash that is wildly different from … every Linux distribution today.
Can’t wait to try it. Does it work on FreeBSD?
Unfortunately it doesn’t support FreeBSD yet. Nix/Nixpkgs itself has pretty poor support for FreeBSD today, so we didn’t build it out yet. We could definitely add it!
I haven’t had a lot of luck getting anything to work. If you have some idea/blog/website of how to bootstrap a new system, I’d love to hear about it. The official docs still download a little tarball of binaries built from a pre-existing nix system, and so far as I can tell, it’s not generally available for FreeBSD.
I’ve been lurking in the Exotic Nix Targets Matrix room, and it looks like the folks there have been putting a lot of good work into supporting less widespread targets, including BSDs and Illumos, recently. That room might be worth joining if you’re interested in those developments :)
Almost 20 years ago, I read a paper by some HCI folks who wanted to test how good the Star Trek computer interface would be. They simulated it by having a series of tests where either a human used a computer directly or a human asked another human to use a computer. They timed the tasks and subtracted the time that it took for the human who was pretending to be the computer to interact with the computer. In the majority of cases, the human using a GUI or CLI was faster. Humans evolved to interact with the universe with our hands long before we developed speech, so this wasn’t surprising. The only tasks where the voice interface was best were the ones that required a lot of creativity and interpretation by the user and even then they often required a lot of iteration because humans are really bad at asking for exactly what we want in an unambiguous format.
I think augmented reality with haptic feedback, done well, would be far more of a revolution in how we interact with computers than any natural language model.
Maybe I’m understanding it wrong, but that’s not a surprising outcome, and it also isn’t testing the Star Trek computer?
Human -> Human -> Computer
is a longer path than
Human -> Computer
So why would it be faster?
They timed the tasks and subtracted the time that it took for the human who was pretending to be the computer to interact with the computer
Perhaps this is the key point?
That’s an interesting-sounding paper. I tried to find it, is it this one? I was able to find a really crappy scan of the full text.
I’m afraid the paper you found doesn’t match @david_chisnall’s description.
David described an experiment involving humans listening to voice input and then operating a computer according to those instructions. In the paper you linked, “Voice recognition based human-computer interface design” by Wei Zhang et al., voice input went not to humans, but to “InCube software, which is a voice recognition software that […] handles lexicons of up to 75 commands”. Its voice recognition capabilities sound much weaker than that of humans:
For the voice input interfaces, the ASGC [automatic semester grade calculation] software is pre-loaded with the oral commands such as “start”, “enter”, “OK”, etc., as well as 10 digits and 26 letters. A command consists of several keystrokes, a voice template, and an attribute. The commands are then linked with the equivalent keystrokes. Before a command can be recognized, a corresponding voice template must first be created.
I, too, would be curious to read the paper @david_chisnall described.
I wonder how different it is when one doesn’t have the high-bandwidth output channel of vision. It would be interesting to repeat the experiment except with blind people.
The way Tony Stark uses JARVIS in the Marvel movies kinda makes more sense, now that you mention it. Even though he uses voice commands extensively, there’s almost always a visual, and often a manipulable, if not outright tactile, holographic component.
The voice commands are either for doing background tasks, or for additional, complex, queries or transformations in the visual representation.
But it’s all focused on the visual and touchable representation: the voice commands are shortcuts, not the main interface. Like keyboard shortcuts in a GUI program.
Looks like a much cooler future than blank text boxes everywhere.
A bit verbose; I find myself just reading the first sentence of every paragraph. As an ML researcher deep in the details, however, I find this to be an interesting hands-off perspective.
I love asianometry. He presents a wide range of topics in a very accessible way without omitting the interesting details.
Just started learning Nim, and this looks like it solves a few of the gripes I have with the language. I’m still missing a good way of chaining iterators in a functional way without allocating a sequence for each step.
I admit this is somehow the first time I’ve heard of the npm cache add command. For buildNpmPackage (the new npm builder within Nixpkgs), we construct the cache ourselves in a reproducible manner (which then gets stuffed into a FOD). I went this route because (a) the cache format has been stable for long enough that I feel comfortable with this, and (b) I didn’t know the aforementioned command existed.
I’m sure there are some benefits to my method – for example, the cache doesn’t have to be reconstructed on every build, leading to quicker builds – but it might be something to consider… not sure, to be honest.
If you happen to be interested in discussing this further, feel free to reach out. I’m winter on Libera and OFTC, and @winterqt:nixos.dev on Matrix. We can also talk in the Nix Node.js channel on Matrix, which is
Came to the comments to mention this builder, but it is a nice surprise to find its author @winter on lobsters
Thanks for the reply! To be honest, the way the cache is built isn’t very important IMO, although I was glad to see that npm cache had improved enough to be used. You do also mention that by building the cache by hand, it does not need to be reconstructed on every build; that’s true, though extracting the npm cache commands to another derivation would be straightforward. The example in the article was built to be easy to understand.
What I find really important, on the other hand, is getting rid of the FOD step, because this step forces users to update a value in the Nix config. This increases friction, hinders adoption, and (IMO) is completely unnecessary, since all the information is already present in the lockfile – provided a lockfile exists.
I’d be happy to chat! Although I think I forgot how to access matrix…
Never really having socialized online within my own country in my own language, I chose the largest one tied to my country. Lots of odd but normal people, and I love it. I have already covered my techy needs on Matrix, so I go to Mastodon for something outside my usual online echo chamber. Loving it.
I’ve been searching for a Mastodon alternative written in Go, so that I can contribute if need be. I’m a total n00b to the fediverse and ActivityPub, and so all of my searches never found this. Thank you greatly for posting.
Another option that fits the small-and-hackable criteria is honk. Since running my own instance, I’ve discovered a handful of people who run their own forks.
I’d say that if you’re wanting high compatibility with the fediverse, honk is probably further away than gotosocial and with less inclination to fix the issues.
High compatibility with Mastodon, you mean. honk is perfectly ActivityPub compliant. Can’t blame honk if Mastodon does things in a non-standard way.
I have been considering honk for a while but haven’t made the switch yet. Do you have experience with moving an account using the account migration feature from Mastodon? (https://docs.joinmastodon.org/user/moving/#move)
Not sure about honk, but gotosocial doesn’t support the move activity yet, there’s an issue and the thing is on the roadmap for next year.
Honk does have an import command for pulling in content from Mastodon and Twitter backups. I’ve never tried it before; I started with Honk and left my Twitter behind.
One more question, do you maybe have some pointers to forks? I couldn’t find any and I’d like to remove the sqlite dependency and let honk serve tls itself instead of requiring relayd or a webserver in front of it.
Sure. Here’s a handful of honk forks.
Thanks! Just added some patches myself: https://github.com/timkuijsten/honk
I have a WIP gotosocial package and NixOS module in the works. The PR might surface once I get Postgres to behave.
Huh, this article portrays Rust as being very limited. But Asahi Lina is building her kernel driver in Rust? Is there currently some unstable branch of the Rust integration, yet to be fully upstreamed?
I should clarify that by “self-hosted” I mean “by organizations employing teams of engineers” and not “by individuals”.
While easy-to-use software is possible, I reckon making it is quite hard.
At #PreviousJob, we used BuildKite, and it was fantastic - you could deploy the agent pretty much anywhere you wanted, and I found the configuration much easier than any other system I’ve used (such as GitHub Actions).
I recently realized that what I really want out of a CI system is the ability to run locally. Ideally with decoupled backends, so it can run tasks in containers, but also as plain processes for simpler projects.
Most CI configuration languages don’t really let you do that, so you need to duplicate the commands for building, linting, testing, etc.
There’s that terra something tool, but it always requires containers, I think.
I had a couple (very) rough ideas on a CI system. One was to make the actual task configuration a Lua script with a bunch of predefined functions to make a declarative pipeline possible, but to also allow the user to drop into more imperative steps. Using Lua lets you more effectively sandbox the tasks than using something like the JVM, the runner could be much leaner, and users could possibly get real autocompletion for their steps instead of only relying on docs for some made up yaml DSL. I also really want to integrate it more deeply with metrics, so you can see annotations in Grafana when you deployed and have automatic rollback when something goes wrong.
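A rough sketch of that shape – declarative task registration with an imperative escape hatch – in Python rather than Lua, since the idea translates directly. Every name here (`task`, `run_pipeline`, the step names) is made up for illustration, not part of any real tool:

```python
# Hypothetical pipeline DSL: steps are declared with metadata, but each
# body is ordinary code, so users can drop into imperative logic freely.
tasks = []

def task(name, deps=()):
    """Register a pipeline step with its declared dependencies."""
    def register(fn):
        tasks.append((name, tuple(deps), fn))
        return fn
    return register

@task("build")
def build_step():
    return "built"

@task("test", deps=["build"])
def test_step():
    return "tested"

def run_pipeline():
    """Run steps in declaration order, checking declared dependencies."""
    done = {}
    for name, deps, fn in tasks:
        assert all(d in done for d in deps), f"{name}: missing dependency"
        done[name] = fn()
    return done

print(run_pipeline())  # {'build': 'built', 'test': 'tested'}
```

Because steps are real functions in a real language, an editor can offer genuine autocompletion for them – which is the advantage over a made-up YAML DSL.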
(nods) Though something like this remains, to me, the most ideal architecture.
The venv stuff is easy. Although mostly unique to Python, it is not a very difficult problem to solve. When I dunk on Python dependency management, I refer to the issues that have been plaguing the ecosystem due to shortcomings in its design. I am sorry for the incoming rant.
Pip used to ignore version conflicts, which has resulted in the package ecosystem having bonkers version constraints. I’ve found it common for packages which require compilation to have an upper Python version constraint, as each new Python version is likely to break the build. The most common drive-by PR I do is bumping the upper Python version bound.
Poetry, although a major improvement for reproducibility, is in my opinion a bit too slow (poetry --help takes 0.7 seconds) and unstable (of course this Poetry version wipes the hashes in my lockfile!), has poor support for 3rd-party package repositories, and does not even support the local-version-identifier part of the version schema correctly, which has resulted in people overriding some packages in the Poetry venv using pip.
Every Python package manager (other than conda, which is fully 3rd party) is super slow during dependency resolution, as it can’t know what the subdependencies of a package are without first downloading the full package archive and extracting the list of dependencies (there’s a PyPI issue about this from 2020), which is incredibly fun when dealing with large frameworks such as PyTorch, TensorFlow, SciPy and NumPy, where each wheel is at least a gigabyte in size.
For source distributions, dependencies are usually defined by setup.py, which must be executed to allow us to inspect its dependencies. This of course cannot be cached on pypi, as it is possible for setup.py to select its dependencies depending on the machine it runs on.
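To make that concrete, here’s a minimal sketch of a setup.py whose dependency list is computed at run time. The package and dependency names are invented for the example; the point is that the answer differs per machine, so an index can’t cache it:

```python
# Hypothetical setup.py: install_requires is computed when the script
# runs, so PyPI cannot know the dependencies without executing it on
# the target machine.
import sys

install_requires = ["requests"]
if sys.platform == "win32":          # platform-specific dependency
    install_requires.append("pywin32")
if sys.version_info < (3, 11):       # backport needed only on old Pythons
    install_requires.append("tomli")

# A real setup.py would now hand this list to setuptools:
# from setuptools import setup
# setup(name="example-pkg", version="0.1", install_requires=install_requires)
print(install_requires)
```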
Then there are the setup.py build scripts, which never seem to quite work on any of my machines. Some build scripts only ever work in Docker environments I would never have been able to reproduce had it not been for Docker Hub caching images built with distros whose repositories have now gone offline. This especially becomes a problem when the prebuilt binary packages made available by the package author typically don’t target ARM- and/or musl-based platforms.
Neat, this mode seems to be a recent addition, but it is currently a bit restrictive. I like the improved ANY behavior, but there is no JSON, coordinate or date type for it to constrain. Still, this is a big improvement! Thanks.
You can enforce the syntax of column values (ensure they are valid JSON for example) using check constraints in a CREATE TABLE.
Here’s how to do that:
sqlite> create table test (id integer primary key, tags text, check (json(tags) is not null));
sqlite> insert into test (tags) values ('["one", "two"]');
sqlite> insert into test (tags) values ('["one", "two"');
Error: stepping, malformed JSON (1)
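The same constraint behaves identically from Python’s sqlite3 module, assuming a SQLite build with the JSON1 functions (the default in modern builds) – a quick sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (id INTEGER PRIMARY KEY, tags TEXT, "
    "CHECK (json(tags) IS NOT NULL))"
)
# Valid JSON passes the check constraint...
conn.execute("INSERT INTO test (tags) VALUES (?)", ('["one", "two"]',))
# ...while malformed JSON (missing closing bracket) is rejected.
try:
    conn.execute("INSERT INTO test (tags) VALUES (?)", ('["one", "two"',))
except sqlite3.OperationalError as exc:
    print("rejected:", exc)

rows = conn.execute("SELECT COUNT(*) FROM test").fetchone()[0]
print(rows)  # only the valid row was stored
```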
Yeah. Every time I see this stuff I think it’s cool, but I really don’t want to give up the features of postgres in terms of types and triggers.
I’m less interested in immediate feedback on the query I’m writing. I’d rather have a tool that helps me write the cryptic DSL, like a semantic editor or something with suggestions. Maybe like those regex editors online.
I get that. For me, it’s all about visualising the input data and what’s produced by what I express in the jq filter. The shape of the data and how it morphs is a key part of my understanding of jq and the data itself.
Python type hints accept any valid Python expression and store the result as the annotation. You can even use a print() call as an annotation…
I had some fun with this in the past, making the type of something depend on the time of day or value of a GET request
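A small sketch of that trick – annotations are evaluated eagerly at definition time and stashed in `__annotations__`, so any expression works. The time-of-day example here is deliberately contrived:

```python
import datetime

# The annotation is an arbitrary expression, evaluated once when the
# function is defined -- here the "type" depends on the clock.
def greet(name: str if datetime.datetime.now().hour < 12 else bytes) -> None:
    print(f"hello {name}")

print(greet.__annotations__["name"])  # str before noon, bytes after
```

Nothing at runtime enforces the annotation, of course – it just sits in `greet.__annotations__` for tools (and pranksters) to read.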
Another counter example:
I commonly match the name of the directory instead.
Yep! That’s a perfect example of when that observation doesn’t hold. Maybe I didn’t make things clear enough, but I do understand that file names aren’t always unique. That’s why I also have a feature for making path matching more accurate later in the post.
Great! Once again I did not read the full post.
No worries :)