Question 3 is fun, because the first 100 Fibonacci numbers will overflow a 64-bit integer. I wonder what the original author wanted. Are they happy for you to go to floating-point approximations? Do they want you to use a BigInt library of some kind (or a language that has it built in)?
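To make the Question 3 point concrete, here is a small Python sketch: Python integers are the “built in” bignum case the comment mentions, and the float version shows how far a 64-bit floating-point approximation has drifted by n = 100.

```python
# Exact Fibonacci with arbitrary-precision integers vs. a float approximation.
def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_float(n):
    a, b = 0.0, 1.0
    for _ in range(n):
        a, b = b, a + b
    return a

# Find where the exact value stops fitting in a signed 64-bit integer.
first_overflow = next(n for n in range(200) if fib_exact(n) > 2**63 - 1)
print("first F(n) too big for a signed 64-bit int: n =", first_overflow)
print("exact F(100):", fib_exact(100))
print("float F(100):", int(fib_float(100)))  # close, but no longer exact
```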
The author’s own answer to Question 4 was wrong, which made me laugh somewhat. The tests in the blog don’t cover any of the fun corner cases, so I’m not sure if it’s right either.
Problem 5 is a terrible question because it’s easy to get rabbit holed on trying to solve it in a clever way, whereas a brute-force solution (8 values in tri-state logic) should be very feasible even in an interpreted language. Someone is more likely to go down the rabbit hole if they have a bit more mathematical knowledge, so this question will select for people with less maths knowledge, as well as for people with more maths knowledge and an awareness that some problems can be brute forced.
I’ve been using this for a while. Is emacs 30 usable?

emacs 29 was the version on the master branch, but is now on a separate branch, being tested and prepared for release.
emacs 30 is the continuation of the master branch with the new version number.
No new features will go into emacs 29, only bug fixes, although it’s rumoured there may be some leeway for tree-sitter.
So emacs 30 will have new features and become the - relatively - less stable, cutting edge version, while 29 is tested and fixed as needed before release.
29 on master was surprisingly stable, and I suspect 30 will probably be reasonably stable too.
IMHO it’s very exciting to see how Emacs is getting modernized without losing its ethos.
package.el added a lot of dynamism to the community. Without package management, it was very cumbersome to install things, track dependencies and keep them up to date.
use-package is now more or less a de facto standard for package configuration, and I think merging it into Emacs would bring lots of benefits (spoiler, it seems it might happen [1]).
The end goal should be to make it possible to run a modern Emacs with plenty of features yet a tiny .emacs or init.el.

[1] https://lists.gnu.org/archive/html/emacs-devel/2022-09/msg01690.html
my biggest problem with use-package is that it hides away a lot of the implementation details, which I think is a barrier between being a user and a programmer of emacs.
I found the macrostep package incredibly helpful to understand what use-package was doing and to help track down issues.

Of course it’s (e)lisp, so you can do the same with the built-in macro expansion facilities and introspection, with finer control. This package makes it trivial to explore macros.
Love macrostep! Though the problem I have with use-package is also that it expands to pretty (imo over-)complicated code. I prefer a package like setup.el, or even my own thing I’m playing around with that’s more of a progn around a related block of configurations.

In what sense? I find it very neat to standardize trivial things and avoid cluttering my configuration. E.g.:
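The example that followed this “E.g.:” didn’t survive the extraction, so here is a minimal stand-in sketch of the kind of standardized form being described; the package, file extension, and hook target are illustrative, not taken from the original comment.

```elisp
;; Hypothetical illustration: one toplevel form that installs, autoloads,
;; and configures a package, instead of scattered require/add-hook calls.
(use-package fennel-mode
  :ensure t                           ; install from the configured archives if missing
  :mode "\\.fnl\\'"                   ; autoload the mode for .fnl files
  :hook (fennel-mode . paredit-mode)) ; a declarative add-hook
```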
One can always default to plain Elisp. Before package.el and use-package, it was hard to maintain complex configurations.

I agree that packages are great, and that keeping related stuff together in a toplevel form is good. I just don’t like how use-package does it, I think … like I prefer pkal’s setup.el (see my comment above for a link) or more transparent idioms….
I almost feel like use-package is a packaging equivalent to cl-loop, with its own DSL you have to learn.
I guess the main thing for me is I never saw the problem with the old style that use-package is meant to replace. Maybe back before with-eval-after-load existed, granted the old eval-after-load was clunky, but there’s no reason to use that nowadays.
The use-package style is a little more concise but not enough to justify installing a 3rd-party dependency, at least for me.

I’m curious, how do you use with-eval-after-load? I got started with use-package, so I’m not really familiar with the core facilities, unfortunately.
Well, for example:

If you put that directly in the top level it would error out when it was first evaluated because fennel-mode-hook isn’t a list. Previously you would have done:
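The actual snippets from this exchange aren’t preserved in this extract; the following is only a hedged reconstruction of the two shapes being contrasted, with a made-up hook function name.

```elisp
;; Newer style: with-eval-after-load accepts any number of body forms and
;; defers them until fennel-mode is loaded, so fennel-mode-hook exists by then.
(with-eval-after-load 'fennel-mode
  (add-to-list 'fennel-mode-hook #'my-fennel-setup))

;; Older style: eval-after-load takes a single quoted form, so anything
;; bigger than one form has to be wrapped in a progn.
(eval-after-load 'fennel-mode
  '(add-to-list 'fennel-mode-hook #'my-fennel-setup))
```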
…which is fine in this example because there’s only one form, but often you have to toss a progn in there for nontrivial examples and it looks ugly.

Thanks! Looks pretty reasonable for most of my configs.

The one thing I don’t like about use-package (although I do use it myself) is that, if you use its conveniences like :bind, :hook, etc., it becomes more awkward to just eval config changes without doing the entire use-package form. I’ve moved to doing everything in :config; I miss out on a bunch of the nice features, but I greatly value the ability to incrementally change config stuff with ease.
This is one of the reasons I started using DuckDuckGo. It doesn’t have these garbage widgets that suddenly pop up 2 seconds after the page is ‘loaded’, making everything jump around and causing misclicks.
Funny you should say that, because I have had that exact problem with DDG because of their “instant answers” or whatever they call it that pop in at the top of the results.
DDG is similar in my opinion.

At least they have a decluttered version, DDG lite, which I switched to because I’m so fed up with the lack of results (only 10 after the initial search) and with other features I don’t like: the “more results” button, and embedded image or video results above the actual results (there are already tabs for images and videos).
I set up 2 keyword searches (in firefox) - one for lite and one for regular search pages.
The good thing about keyword searches is that you can take full advantage of their URL parameter support to control the look, feel and functionality (including turning instant answers off). Some of those options may no longer work, but most of them do.
Writing these books is fundamentally incompatible with Haskell’s approach of constantly making breaking changes to everything. Books take time to write, on the time scale of 1-2 years. The reality of the Haskell world is that the contents of the resulting book would be so woefully out of date as to be useless.
I say this sitting on most of a book on modern ML in Haskell.
It takes a lot of time to explain something coherently, make examples, describe the logic behind how a system is designed, etc. How can you possibly do that if everything constantly changes? I can either write materials to explain things in Haskell, where everything will be out of date within a year, or I can explain things in Python, where years later I don’t need to revise the materials.
Take ‘Haskell in Depth’ published in 2021. People literally cannot run the code months later: https://github.com/bravit/hid-examples/issues/10 That’s absurd. Writing niche books isn’t particularly lucrative anyway, having to constantly rewrite the same book is borderline madness.
The best example of how impossible it is to write such materials is Haskell itself. There is no document explaining the Haskell type system and how to perform inference in it. At best, there are some outdated papers going back many years that each sort of describe pieces of the system.
I’ve been writing Haskell for 15+ years and not yet encountered a serious breaking change.

“Serious breaking changes” aren’t the biggest issue at all. Small, constant breaking changes to the compiler, to the language, to core libraries, and to the entire ecosystem in general are the issue. Seriously, that book is barely out and the code doesn’t work anymore.
Every change that breaks the API of a package, a command-line flag, etc., you need to track all of those down in hundreds of pages of material. In hundreds of examples. Constantly. It’s hard enough to keep up with changes to code that compiles. If I did this for a book on ML it would be my full-time job.
Seriously, that book is barely out and the code doesn’t work anymore.
Do you mean doesn’t work on the latest GHC with bleeding edge packages from Cabal? Maybe I’m insulated from this a bit by sticking to the version of GHC in Debian stable and avoiding new versions of packages when I don’t need them.
It’s the opposite problem. The published code doesn’t build using the versions it states.

The repo’s stack.yaml uses resolver: lts-17.4, and the linked issue’s solution is to upgrade to lts-18.17 and upgrade dependency versions.

I’m not very familiar with how stack works. Does this mean that stack allowed breaking changes inside an existing release, or what exactly is the actual issue?
this is false

This seems unlikely, as both stack and cabal are fully reproducible[1]. One wouldn’t expect a correct build description to stop working because new versions of packages and the compiler are released. Perhaps there’s just a bug in the book?

[1] Ok, not Nix-level reproducible!
There is no document explaining the Haskell type system and how to perform inference in it.
The Haskell Report and the GHC User’s Guide should fully describe the behaviour of the type system. If you mean there’s no single document explaining the implementation of the type system then you may be right, but is there such a document for similar compilers such as OCaml, Scala, F#? Perhaps Rust has one because it is much newer, but I’m not sure.
The Haskell Report describes a 20 year old version of the language. The GHC user guide is a vague overview; it’s completely impossible to implement anything from the details provided in the user guide.
It’s not a matter of describing the current implementation. There is no document that describes the type system as it exists today at all. As in, there is no mathematical description of what Haskell is and how you are supposed to perform type inference. The best that exists is a 10+ year old paper (the OutsideIn paper), but that’s incredibly outdated: you couldn’t type check common Haskell code used today with it, and it’s not even clear it ever corresponded to any version of Haskell precisely.
There are good reasons why this is. It takes a lot of time to write such documents. But if the language developers can’t keep up with the language themselves, it’s hard to imagine that others will be able to do so.
For what it’s worth, OCaml is very clearly described mathematically in plenty of places, even in textbooks. My understanding about the situation in Scala is not just that the type system is described accurately, it’s actually been machine checked. I’m least familiar with the situation in F#, but it’s in the OCaml family. There probably aren’t any surprises there.
The Haskell Report describes a 20 year old version of the language.
12 year old version, let’s not over age ourselves :)
There has been no new version of Haskell since 2010. Some consider that an issue, and it may well be, but until there is a new one the fact that it is old does not make it wrong.
It’s more like 24 years actually.

Haskell 2010 is not Haskell as it was in 2010. It’s a minor update to Haskell 98, because even then no one could keep up with the changes.
It’s all in the first few paragraphs of the report where they describe this problem. Then they say this is an incremental and conservative update that only adds a few minor fixes and features to 98.
Haskell 2010 defines what Haskell is; it’s not a description of some mystical “Haskell” that may exist somewhere else and that it incompletely captures. It’s a definition.
Standards being conservative is good. Can you imagine if every crazy language extension in GHC became part of Haskell? Some of them are even in competition or contradictory!
You haven’t read the report. You should. It literally starts by saying it doesn’t define what Haskell is in 2010.

I linked the Haskell 2010 Report in a sibling thread. I don’t see where it says it doesn’t define what Haskell is in 2010. Could you please point it out?
page xiv says the language has grown so much and the effort to document it is now so high, that this is going to be a small incremental update with more to come. More did not come, the documentation burden was so high everyone gave up. And the update was indeed very very small, covering only 3 major changes: FFI, pattern guards, and hierarchical module names. I pasted the contents below.
For reference, GHC 7 came out in 2010.
Even at the time, in 2009, people were wondering what was up, because bread-and-butter parts of the language weren’t going to be included in Haskell 2010: for example, no GADTs, no associated types, no rank-n types, etc. Here is someone in 2009 asking about this and getting the response that, no, this doesn’t reflect the language, but it’s the best anyone can do because keeping up with the language is so hard: https://mail.haskell.org/pipermail/haskell-prime/2009-July/002817.html The main barrier to entry was whether anyone could describe an extension faithfully, and no one could.
Sadly, most of the archives of the haskell-prime effort seem to have been lost.
In any case. This is not a criticism of the Haskell2010 authors. They did their best. But, it’s important that the community realizes that the sorry state of the documentation, the lack of high quality materials like books, and the lack of in-depth ecosystems for areas like ML, are all a consequence of this decision to keep making breaking changes to the language, core libraries, and ecosystem as a whole.
At the 2005 Haskell Workshop, the consensus was that so many extensions to the official language were widely used (and supported by multiple implementations), that it was worthwhile to define another iteration of the language standard, essentially to codify (and legitimise) the status quo.

The Haskell Prime effort was thus conceived as a relatively conservative extension of Haskell 98, taking on board new features only where they were well understood and widely agreed upon. It too was intended to be a “stable” language, yet reflecting the considerable progress in research on language design in recent years.

After several years exploring the design space, it was decided that a single monolithic revision of the language was too large a task, and the best way to make progress was to evolve the language in small incremental steps, each revision integrating only a small number of well-understood extensions and changes. Haskell 2010 is the first revision to be created in this way, and new revisions are expected once per year.
I see. So, reflecting on what you wrote, the lack of a published Haskell standard doesn’t seem to be the problem you are experiencing, just a symptom. After all, Python doesn’t have a standard, but you state that it would be a fine target for writing reference materials.
I can think of only one change to the language that has caused me frustration: simplified subsumption. There have been a few frustrating library-level changes too.
Could you elaborate on which changes in the Haskell ecosystem have led to concrete problems for you? Firstly, I may be able to offer advice or a fix. Secondly, I may actually be able to tackle the root cause and make future experience better for you and others.
I am highly sceptical that there is a mathematical description of OCaml or Scala that matches how the language is used in production today. I could easily be wrong, because it’s not my area of expertise, but I know for sure that those languages move, albeit slower than Haskell, and I doubt that any published material can keep up with how those compilers develop in practice. Someone was telling me recently he is adding algebraic effects to the OCaml compiler! It seems unlikely that is in the textbooks that you mentioned.
I would be grateful for any material you can link me to that elaborates on the difference between Haskell and the other languages in this regard.
That said, this is getting somewhat removed from your original comment, because if you stick to the well-trodden parts of the type system, Haskell2010 certainly, and what is becoming GHC2021 most likely, then none of the breakage to the book you are writing will be to do with the type system per se.
For what it’s worth, OCaml is very clearly described mathematically in plenty of places, even in textbooks. My understanding about the situation in Scala is not just that the type system is described accurately, it’s actually been machine checked. I’m least familiar with the situation in F#, but it’s in the OCaml family. There probably aren’t any surprises there.
Are you confusing OCaml with Standard ML?
Scala 3’s core language (the DOT calculus) has been specified and verified in Coq, but not the surface language as far as I’m aware. I’m also not sure if the implementation is based on elaboration into the core language (this is generally the approach that dependently typed languages use to improve confidence in the implementation).
For locking in vowels easily, I find OUIJA to be pretty effective. Yes, you miss out on the E but you get a wide cut from 80% of the vowels right away.
Alternative title: Solving Advent of Code 2021 with Python before Rust can compile. (It’s a joke, but now that I think about it, I’m somewhat interested.)
It’s good to have a trade-off space for these things. I tend to use C++ for simple programs, making heavy use of standard-library things like shared pointers, maps, sets, vectors, regexes, and so on. Compiling at -O0 tends to be pretty quick and the combined compile+run time is usually similar to Python. When I have something I expect to run a load of times, applying some constexpr and templates to force compile-time evaluation of certain bits and compiling with optimisation gives me something that takes a few seconds to compile but runs an order of magnitude or so faster.
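A sketch of the constexpr trick being described; the table contents are made up for illustration, and the constexpr std::array accesses need C++17. Marking both the function and the variable constexpr forces the table to be computed by the compiler, even at -O0.

```cpp
// Build a small lookup table at compile time; runtime code only indexes it.
#include <array>
#include <cstdio>

constexpr std::array<unsigned long long, 32> make_powers_of_three() {
    std::array<unsigned long long, 32> table{};
    unsigned long long value = 1;
    for (std::size_t i = 0; i < table.size(); ++i) {
        table[i] = value;
        value *= 3;
    }
    return table;
}

int main() {
    constexpr auto powers = make_powers_of_three();  // evaluated by the compiler
    std::printf("3^20 = %llu\n", powers[20]);
    return 0;
}
```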
C++ isn’t ideal here. Once you move past something that you want to fit in a single compilation unit, the compiler ends up doing a lot of redundant work. Every template function (including members of template classes) and every inline function gets compiled in every compilation unit that uses it and then thrown away. For something like LLVM, you end up generating over a gigabyte of intermediate object code for a final binary that’s a few tens of megabytes. All of that work at the compile stage is redundant. Sony has done some work on a thing that they call a ‘compilation database’ that memoises a load of these steps (don’t generate the AST more than once for template functions that are instantiated the same way for different compilation units, don’t generate IR from two identical ASTs in different compilation units twice [but still make it available for inlining in both], don’t optimise functions if they and their callees haven’t changed since the last build, and so on) but it’s still not really ready for large-scale deployment.
I’d love to see language implementations focus a bit more on this whole spectrum. When I’m prototyping, I want fast incremental builds and code that isn’t too slow to be usable. When I’m deploying to a million nodes in a datacenter, I don’t mind if the build takes an hour if it saves 1% on execution time, because that’s still a net win (especially if that’s an hour of CPU time and is parallelisable).
There are also useful points in the middle. For example, running the entire LLVM test suite on an optimised build takes several hours of CPU time. When I’m working on LLVM itself, I tend to do three different builds:
Release, runs very quickly, used for running the tests as I work.
Release+Asserts, a bit slower, used for running tests before I push because it catches more bugs.
Debug+ASan (includes asserts) is very slow, but if a test fails when I run the test suite with Release+Asserts then this is the one that will let me find the source of the bug most quickly.
Each one of these is a completely independent build. This means that each one needs to parse the entire source tree, generate an AST, and then do different amounts of optimisation. A load of functions have no asserts, so are identical in the final binary for the first two. The ASTs for the last two are identical except in the containers that have some extra ASan instrumentation. I’d love to see toolchains designed for doing incremental builds in multiple configurations, so I could get something immediately for a rapid compile/test/debug cycle and have it spit out the more optimised or more instrumented builds in the background.
Another value axis is to check how many of each day’s top scorers use one or the other language.

AoC is seldom about machine efficiency. As stated on the project homepage:

So it’s more about convenient access to various modules, rapid prototyping &c, where I do believe Python has an edge.

Note I base this on being an active participant in AoC, and on checking the daily solutions threads for how other people do it. Python is very popular.
I cloned the repo and opened the bench.rs file in emacs and waited while about 400mb of dependencies were downloaded. That’s before any compilation even starts.
I’m not criticising rust in any way, but I’d be very surprised if I cloned almost any python AoC repo and couldn’t run any of the solutions immediately.
criterion, the benchmarking library used for this project, is a dev-only dependency and it’s somewhat fancy (HTML and SVG reports, test-suite tooling, support for a lot of things, etc.), which makes it quite heavy: https://crates.io/crates/criterion/0.3.5/dependencies

Other than this the actual dependencies look reasonably lightweight.

That 2nd link is also to the criterion deps.

I’m not sure what rustic (the lsp server for rust) is doing when I open an rs file in emacs, and it may also be downloading packages it feels it needs:

The target/release folder is 183,607,296 bytes, about 175 MiB.

On my machine:

It’s impressive. From the article…

This repo on my machine runs in 12915 μs, a little under 13 ms.

I’m happy people post their runtime-optimised solutions like this. I’m mainly interested to learn techniques to improve my own runtime and efficiency.

This is a nitpick, but the LSP server rustic uses is rust-analyzer.

Right, thanks for that correction.
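For readers who haven’t used criterion: it normally sits under [dev-dependencies], so it is built only for benchmarks and tests, not for a plain build of the crate. A minimal sketch of the usual wiring; the bench target name is assumed from the bench.rs file mentioned above.

```toml
# criterion as a dev-only dependency (version from the crates.io link above).
[dev-dependencies]
criterion = "0.3.5"

# Criterion supplies its own main(), so the default test harness is disabled.
[[bench]]
name = "bench"        # corresponds to benches/bench.rs
harness = false
```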
My first thoughts with articles like this one mirror your words here. When looking at the whole of AoC as more of an engineering problem, minimizing solution run time means making a whole lot of unnecessary and high-cost trade-offs.
At the same time - the author’s conclusion indicates they are actually setting out to learn something, instead of to optimize a path towards an AOC engineering solution. That was how I got started with Rust too, so I guess I can try to put aside my cynicism (that’s what it is for me) and read these articles in that vein.
I think a big part of AoC’s success is that it’s open-ended. There’s no requirement of a specific language, and plenty of people dash off a quick solution to “get on the list”, then spend a lot of time refining it, testing different solutions, improving runtime etc.
What I like about this thread is the many light themes rather than dark.
Looking around the web you’d think those of us that want/need light themes are unfashionable or misguided or just the minority. Maybe we’re just less vocal.

Dark text on a light background works better for people with astigmatism, and astigmatism is quite common.
That’s cool and all, but knowing Mozilla, an about:config option called “legacyUserProfileCustomizations” is gonna disappear.
Related: In Firefox 90, they removed the “Compact” density option, which I kind of rely on to be comfortable with Firefox. They added an about:config option, “browser.compactmode.show”, to add it back. In Firefox 91, if you have the option enabled, they renamed the option from “Compact” to “Compact (not supported)”. I know it’s only a matter of time before they remove it entirely. And I’m kind of panicking about that, because I really, really don’t want more vertical space to be wasted by padding in the tab bar on small laptop screens. If anything, I find the “Compact” option too big. I wish Mozilla would stop changing things for the worse, and I wish Mozilla would stop taking away configuration options.
Every major release, they’ve taken away functionality that I depend on.
I feel the same. I even often joke that Mozilla is spying on me with the sole goal of knowing what features I rely on and yanking them away from me :).
What is wrong with them?
Nothing is wrong with them. We just aren’t part of their main demographic target.
I absolutely dread every single firefox update for the same reason: something I rely on, or have burned into hand and muscle memory, gets altered or removed.
It feels completely hopeless to me as well because I can’t see another acceptable choice. I can’t support Google’s browser engine monopoly and every other browser I research has some other issue that makes me reject it in comparison.
It feels like abuse by endless paper cuts, unwanted and unnecessary changes forced on me with no realistic choice to opt out. These changes seem to be accelerating too and meanwhile firefox market share declines further and further.
That’s cool and all, but knowing Mozilla, an about:config option called “legacyUserProfileCustomizations” is gonna disappear.
The reason it was made an option is to slightly improve startup time since it won’t have to check for this file on the disk. Comparatively few people use it, which is hardly surprising since it’s always been a very hidden feature, so it kind of makes sense: if you’re going to manually create CSS files then toggling an about:config option is little trouble.
That being said, instead of relegating userChrome to a “hidden feature”, it would seem to me that properly integrating support for these kind of things in Firefox would give a lot more benefits. In many ways, almost everyone is a “power user” because a lot of folks – including non-technical people – spend entire days in the browser. For example, I disable the “close tab” button because I press it by accident somewhat frequently which is quite annoying. While loads of people don’t have this problem, I suspect I’m not the only person with this small annoyance considering some other browsers just offer this as a setting, but I am one of the comparatively small group of people who has the know-how to actually fix it in Firefox.
The technical architecture to do a lot of these things is right there and actually works quite well; it just doesn’t fit in the “there is one UX to satisfy everyone, and if you don’t like it then there’s something wrong with you”-mentality 🤷
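As a concrete illustration of the kind of userChrome.css tweak being described: the selector below is from memory and should be treated as an assumption to verify, and it only takes effect when the toolkit.legacyUserProfileCustomizations.stylesheets option discussed above is enabled.

```css
/* userChrome.css: hide the per-tab close button (hypothetical sketch). */
.tabbrowser-tab .tab-close-button {
  display: none !important;
}
```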
All preference changes through about:config are officially unsupported and come with zero guarantees. This includes creating custom chrome folders.
There actually is a maintenance and complexity cost for keeping these things alive. We’ve done a lot of hardening lately that was cumbersome to pull off in light of those customizations. In essence, we are disabling some protections when we detect profile hackery. I want to repeat: despite our team working around and “acknowledging” the existence of those hacks, they are still unsupported and we can’t promise to always work around custom profile quirks.
The best way to be heard and change things in an open source project is to show up and help drive things. I know this isn’t easy for a big project like ours…
I’ve heard this startup time justification before, but surely the additional hassle of implementing, testing, and documenting a new configuration parameter isn’t worth saving a single stat call on startup? It’s hard to imagine that even shows up in the profile.
If everything else has already been tightly optimized, a stat call performed on a spinning-rust drive could show up as a major bottleneck when profiling startup performance.
When I rebuild LLVM, ninja does a stat system call for every C++ source and header file on the disk, about 10,000 of them in total. If I have no changes, then it completes in so little time that it appears instantaneous.
If the cost of a stat system call is making a user-noticeable difference to start times, then you’re in an incredibly rare and happy place.
I glanced at lucky-commit to see what it was doing:
lucky-commit adds its whitespace to a 64-byte-aligned block at the very end of the commit message. Since everything that precedes the whitespace is constant for any particular commit, this allows lucky-commit to cache the SHA1 buffer state and only hash a single 64-byte block on each attempt.
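The caching idea is easy to demonstrate with Python’s hashlib, which lets you hash the constant prefix once and then copy() the saved state for every candidate suffix. This is only a sketch of the concept with a made-up message; it is not how lucky-commit builds real git objects (those are hashed with a “commit <length>” header and the 64-byte alignment described above).

```python
import hashlib

# Hash the constant part of the message once...
prefix = b"tree ...\nparent ...\nauthor ...\n\ncommit message\n"
saved = hashlib.sha1(prefix)

def attempt(suffix: bytes) -> str:
    # ...then each attempt only hashes the trailing whitespace block.
    h = saved.copy()
    h.update(suffix)
    return h.hexdigest()

# Try tab/space combinations until the digest starts with a chosen pattern.
for i in range(1 << 20):
    suffix = bytes([9 if (i >> b) & 1 else 32 for b in range(20)])  # 20 tabs/spaces
    digest = attempt(suffix)
    if digest.startswith("0000"):
        print(i, digest)
        break
```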
I used it when it was a regular zsh script. I got tired of having to re-configure it whenever updates broke my configuration as it changed or removed features I used.
I’m not sure how stable it is now but I did enjoy it while it lasted.
There was one significant backwards-incompatible change that I can recall, but it was trivial to address. And for my part, I’d rather have my software evolving than never releasing a change in case it breaks someone somewhere.
I used starship previously and something happened to make it grind to a halt. I moved to romkatv/powerlevel10k and have had no such issues since. I once saw a discussion around a plug-in being implemented which was rejected as it took 10ms to run!
Edit: found the discussion, it was 11.4ms and it was reworked rather than rejected, but hopefully you get the point I was trying to make
One aspect of my choice of Starship that doesn’t come through in this post is that I use the fish shell as my daily driver, but I use zsh and bash on remote hosts in various places. Having a consistent prompt on all three of those is a huge selling point, even if my prompt might take 50-70ms to render.
Don’t worry, not trying to invalidate your choices or anything. I’m sure it was something I did otherwise the GitHub issues would be full of noise! Having a consistent prompt is a really solid pro for sure.
I’m curious why such a dichotomy? Are you required to use zsh/bash on the remote machines or is it a matter of convenience? I’m forced to use bash, but would gladly switch to zsh if I could install it…
For a few years now, my response to this question is a counter-question:
Is reading/writing/arithmetic a science or an art?
Of course they’re neither. So it is with programming. You can use programming to do either science or art. But it’s just a tool.
There’s a cool question in the middle of the post. As I understand “ontological status”:
Where does the essence of a program reside?
Following Peter Naur, the unsatisfactory answer for most programs is, “in the programmers’ heads,” with the increasing implication of incoherence for large teams of programmers. Very large programs stop being comprehensible in humanistic terms, and start to rather resemble egregores. Like gods, or bureaucracies.
But I think we can do better, retain more control. A better answer, for well-written programs, is “in the space of inputs accepted, as categorized into different regimes with relatively linear processing logic.” Personally I like to visualize this input space as tests. Haskell/OCaml and APL propose alternative approaches that also seem plausible.
But I think we can do better, retain more control. A better answer, for well-written programs, is “in the space of inputs accepted, as categorized into different regimes with relatively linear processing logic.” Personally I like to visualize this input space as tests. Haskell/OCaml and APL propose alternative approaches that also seem plausible.
I think I’d say you were somewhere near the mark with “egregore”, a cool word I just learned from your comment, but that the space of what a program “really is” extends not just to the minds of the people who write / maintain / administer it, but to the entire substrate it operates on, including physical hardware and the minds of its users and the shadowy, informal conceptual footprint it has therein. Much like the way that, say, a fictional / mythological / historical narrative exists in the often vast and highly ramified space between authors and readers and cultural context.
Not that there’s anything wrong with trying to delineate space-of-inputs-accepted and the like - just that we’re probably kidding ourselves if we think that we’re going to get to comprehensibility at all, for large things. Creating bounded spaces within which to reason and exercising extreme humility about the big pictures seems like as good as it’s ever really going to get, most days.
Very valid considerations. My answer to that is: if making it large gives up control over it, perhaps we shouldn’t make it large.
As a concrete example, one project I work on explores maintaining control over a full computing stack with the following limitations:
single instruction set (32-bit x86)
single video mode (1024x768 with 256 colors)
single fixed-width font (8x16px)
mostly text mode
only ATA (IDE) disk drives, running in a slow non-DMA mode
It’ll add features over time, but only so fast as I can figure out how to track – and communicate to others! – the input space. For example, it doesn’t have a mouse yet, because I don’t understand it and because I don’t want to just pull in a mouse driver without understanding it and having thorough unit tests (in machine code!) for it.
Basically I want to consider the possibility that humanity has been adopting computers way too aggressively and introducing lots of risks to all of society at large. I don’t want software to go from explosive adoption to another hamstrung and stiflingly regulated field, bureaucracy by other means. There’s too much promise here for that, even if we take a hundred years to slowly develop it.
Very valid considerations. My answer to that is: if making it large gives up control over it, perhaps we shouldn’t make it large.
…
Basically I want to consider the possibility that humanity has been adopting computers way too aggressively and introducing lots of risks to all of society at large.
Well said, even if I’m personally pretty sure the genie is out of the bottle at this point.
I don’t think I can really draw a clear line in my own mind between what bureaucracy is and what software is, but I welcome efforts to ameliorate and rate-limit the poisoning of things that seems to come with scaling software.
Hmm. I was initially unsure reading this, but I think I agree writing a grocery list is probably not an art. I’m not shocked there’s an edge case here, but it’s interesting to think about. I guess the programming equivalent would be code like ls or similar trivial cases.
It is by no means a rare edge case. If you assume all writing is art, that implies science is also art. Which is a reasonable and internally consistent interpretation of the terms, but not the frame of reference of the original question.
It sounds like you aren’t bothered by the original question. Which is great; it’s not a very interesting question. I’m personally more curious what you think about the second half of my first comment :)
It’s a skill for sure, but that doesn’t necessarily make it an art. If I look at my own writing (which isn’t scientific but is technical) then the amount of creativity that’s involved isn’t much more than, say, fixing my bike: it requires a bit at times, but mostly it’s just working at it.
That’s been my opinion for a long time now. Like other crafts it isn’t automatically included or excluded from other human activities - art, science, commerce, hobbies …
If you accept that then the question posed isn’t very interesting.
Not necessarily. Designing a building can be an art, a science, or a combination of both. A friend of mine does ghost writing for non-fiction. Most of it follows a formula and neither of us would consider it art.
Words are a tool used to communicate the same way verbal speech is.
I’ve done a fair amount of technical writing for hire. Art enters the picture somewhere, to be sure, but where you’re writing to strict house standards and pre-set outlines it does have a way of driving most of the feeling of creative expression out of things. And I suppose that gets at how I understand “art” - a feeling as much as anything.
Does anyone here use straight? I never got the advantage, compared to just cloning the repositories yourself (or as submodules). Plus you have to run random code from a github repository.
I minimize the number of packages I use and usually skim through the source code. Automatically downloading and evaluating code from a repository seems more dangerous, but I do get your point.
I (ab)used Cask for a long time because it was a nice declarative way of configuring packages in ~/.emacs.d. Since I’ve returned to using package.el, and since it handles dependencies much better now, I just rely on it maintaining package-selected-packages; if needed I can just call (package-install-selected-packages). But I don’t have more sophisticated needs.
You can usually get the appropriate slug from the URL, but it’s not always clear what the channel id is, especially since YouTube introduced URLs like https://www.youtube.com/c/stanford for Stanford University. You need to scrape the page to find the channel id.
I should add that their feeds are limited to the most recent additions so don’t expect to see a full history after you subscribe.
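For reference, the feed endpoint itself only needs the channel id as a query parameter. Below is a rough sketch of the scraping step described above; it assumes the channel page still embeds a "channelId" field in its HTML, which YouTube may change at any time.

```python
import re
import urllib.request

def channel_feed_url(channel_page_url: str) -> str:
    """Scrape a channel page for its UC... id and build the feed URL."""
    with urllib.request.urlopen(channel_page_url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # Assumption: the page source contains something like "channelId":"UC..."
    match = re.search(r'"channelId":"(UC[0-9A-Za-z_-]{22})"', html)
    if not match:
        raise ValueError("could not find a channelId in the page")
    return "https://www.youtube.com/feeds/videos.xml?channel_id=" + match.group(1)

print(channel_feed_url("https://www.youtube.com/c/stanford"))
```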
RSS is the only way I can handle Youtube subscriptions. Being able to watch the video from my feed reader instead of the actual site, which is designed to get you to watch a ton of semi related videos, is the best.
I see a lot of comments (not just here but everywhere I’ve seen discussions) about mirrors, which will address the immediate problem of the source code not being available.
The bigger issue though is how to host/maintain a distributed/federated project: not just hosting source code but discussions, bug reports, issue tracking, etc. that currently github adds on top of git.
The current version of youtube-dl will continue to work until youtube changes something and it stops working. It also works with a massive number of other streaming sites.
For example the api key for one quite popular streaming service had to be changed from time to time, presumably as the service began to notice and invalidated the key.
The bigger issue though is how to host/maintain a distributed/federated project: not just hosting source code but discussions, bug reports, issue tracking, etc. that currently github adds on top of git.
Email. And there are plenty of privacy-friendly email providers.
While the tooling doesn’t currently have a UX as good as GitHub’s, it is definitely possible to develop a project in a distributed way.
A git repository is already distributed, and tools like git-ipfs-rehost can be used to make the repository easily available. Issues and bug reports can be stored in the git repo itself using git-bug. Patches can be sent over email. Pull requests can be made in an informal way over any kind of communication channel, for example Matrix.org.
Self-hosted git accessible via Tor Onion Service. HardenedBSD uses Gitea, which manages issues, wiki, and code storage. All of HardenedBSD’s public infrastructure is made available through Tor as well as through normal means.
One of the main benefits is having the title and (usually) an overview/description or excerpt.
It makes it so much easier to prioritise and filter content before (and often instead of) having to deal with the modern web experience directly.
Version of the original article in the wayback machine.
It’s not the URL linked in this article, but archive.org redirects internally to the link I posted.
It seems I have a very different strategy than people here. I want to lock in the vowels first, so I use URINE/AORTA; maybe I’m doing something wrong.
The doubled A and R seem like a wasted opportunity to get letter clues.
Maybe CHAOS as the 2nd word since S and H are quite common letters.
As a kid I learned ETANOIRSH for playing hangman which isn’t 100% accurate but it’s what stuck in my head.
By the way thanks for the tip - the adjustment I mentioned got me today’s word in 3!
Indeed, CHAOS looks like a better option.
For fun I checked /usr/share/dict/words for word pairs with
Using this letter frequency list
Those final 10 pairs included URINE and my (new) favourite starting pair for how ordinary they are: HOUSE + TRAIN
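The selection criteria were truncated above, so this sketch just guesses at them: pairs of five-letter words from /usr/share/dict/words with ten distinct letters between them, ranked by a rough letter-frequency score. The frequency order used here is an assumption, not the commenter’s actual list.

```python
import heapq
from itertools import combinations

# Assumed frequency ranking (most common first); the original list wasn't preserved.
FREQ = "etaoinshrdlcumwfgypbvkjxqz"
SCORE = {c: len(FREQ) - i for i, c in enumerate(FREQ)}

with open("/usr/share/dict/words") as f:
    words = {w.strip().lower() for w in f}

# Five-letter words made of five distinct ASCII letters.
five = [w for w in words
        if len(w) == 5 and w.isascii() and w.isalpha() and len(set(w)) == 5]

def score(word):
    return sum(SCORE[c] for c in word)

# Pairs that share no letters cover ten distinct letters between them.
pairs = ((a, b) for a, b in combinations(five, 2) if not set(a) & set(b))
for a, b in heapq.nlargest(10, pairs, key=lambda p: score(p[0]) + score(p[1])):
    print(a.upper(), "+", b.upper())
```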
For locking in vowels easily, I find OUIJA to be pretty effective. Yes, you miss out on the E but you get a wide cut from 80% of the vowels right away.
Alternative title: Solving Advent of Code 2021 with Python before Rust can compile (It‘s a joke, but now when I think about it, I‘m somehow interested)
It’s good to have a trade-off space for these things. I tend to use C++ for simple programs, making heavy use of standard-library things like shared pointers, maps, sets, vectors, regexes, and so on. Compiling at -O0 tends to be pretty quick and the combined compile+run time is usually similar to Python. When I have something I expect to run a load of times, applying some
constexpr
and templates to force compile-time evaluation of certain bits and compiling with optimisation gives me something that takes a few seconds to compile but runs an order of magnitude or so faster.C++ isn’t ideal here. Once you move past something that you want to fit in a single compilation unit, the compiler ends up doing a lot of redundant work. Every template function (including members of template classes) and every
inline
function gets compiled in every compilation unit that uses it and then thrown away. For something like LLVM, you end up generating over a gigabyte of intermediate object code for a final binary that’s a few tens of megabytes. All of that work at the compile stage is redundant. Sony has done some work on a thing that they call a ‘compilation database’ that memoises a load of these steps (don’t generate the AST more than once for template functions that are instantiated the same way for different compilation units, don’t generate IR from two identical ASTs in different compilation units twice [but still make it available for inlining in both], don’t optimise functions if they and their callees haven’t changed since the last build, and so on) but it’s still not really ready for large-scale deployment.I’d love to see language implementations focus a bit more on this whole spectrum. When I’m prototyping, I want fast incremental builds and code that isn’t too slow to be usable. When I’m deploying to a million nodes in a datacenter, I don’t mind if the build takes an hour if it saves 1% on execution time, because that’s still a net win (especially if that’s an hour of CPU time and is parallelisable).
There are also useful points in the middle. For example, running the entire LLVM test suite on an optimised build takes several hours of CPU time. When I’m working on LLVM itself, I tend to do three different builds:
Each one of these is a completely independent build. This means that each one needs to parse the entire source tree, generate an AST, and then do different amounts of optimisation. A load of functions have no asserts, so are identical in the final binary for the first two. The ASTs for the last two are identical except in the containers that have some extra ASan instrumentation. I’d love to see toolchains designed for doing incremental builds in multiple configurations, so I could get something immediately for a rapid compile/test/debug cycle and have it spit out the more optimised or more instrumented builds in the background.
Another value axis is to check how many of each day’s top scorers use one or the other language.
AoC is seldom about machine efficiency. As stated on the project homepage:
So it’s more about convenient access to various modules, rapid prototyping &c, where I do believe Python has an edge.
Note I base this as an active participant in AoC, and checking the daily solutions threads for how other people do it. Python is very popular.
I cloned the repo and opened the bench.rs file in emacs and waited while about 400mb of dependencies were downloaded. That’s before any compilation even starts.
I’m not criticising rust in any way, but I’d be very surprised if I cloned almost any python AoC repo and couldn’t run any of the solutions immediately.
criterion, the benchmarking library used for this project, is a dev-only dependency and it’s somewhat fancy, including HTML & SVG reports, test-suite tooling, support for a lot of things, etc., which makes it quite heavy: https://crates.io/crates/criterion/0.3.5/dependencies
Other than this, the actual dependencies look reasonably lightweight.
That 2nd link is also to the criterion deps.
I’m not sure what rustic (the lsp server for rust) is doing when I open an rs file in emacs and it may also be downloading packages it feels it needs:
The target/release folder is 183,607,296 bytes, about 175 MiB.
On my machine:
It’s impressive. From the article…
This repo on my machine runs in 12,915 μs, a little under 13 ms.
I’m happy people post their runtime optimised solutions like this. I’m mainly interested to learn techniques to improve my own runtime and efficiency.
This is a nitpick, but the LSP server rustic uses is rust-analyzer.
Right, thanks for that correction.
My first thoughts with articles like this one mirror your words here. When looking at the whole of AOC as more of an engineering problem, minimizing solution run time means making a whole lot of unnecessary and high-cost trade-offs.
At the same time - the author’s conclusion indicates they are actually setting out to learn something, instead of to optimize a path towards an AOC engineering solution. That was how I got started with Rust too, so I guess I can try to put aside my cynicism (that’s what it is for me) and read these articles in that vein.
I think a big part of AoC’s success is that it’s open-ended. There’s no requirement of a specific language, and plenty of people dash off a quick solution to “get on the list”, then spend a lot of time refining it, testing different solutions, improving runtime etc.
I like the treatment in Exercise 1.19 of SICP, which I think is a version of matrix exponentiation expressed in a different way.
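As I remember it, the exercise defines a transformation T_pq that maps (a, b) to (bq + aq + ap, bp + aq); Fibonacci is the special case p = 0, q = 1, and composing T_pq with itself gives another transformation of the same form, so you can square it and compute fib(n) in O(log n) steps, which is exactly what squaring the 2x2 Fibonacci matrix does. A rough sketch of that idea in C++ rather than the book’s Scheme (my own transcription, so treat it as illustrative):

    // Successive squaring of the T_pq transformation from SICP ex. 1.19.
    // T_pq maps (a, b) to (bq + aq + ap, bp + aq); Fibonacci is p = 0, q = 1,
    // and T_pq composed with itself is T_{p', q'} with p' = p*p + q*q and
    // q' = 2*p*q + q*q, giving O(log n) iterations.
    #include <cstdint>
    #include <cstdio>

    std::uint64_t fib(unsigned n) {
        std::uint64_t a = 1, b = 0, p = 0, q = 1;
        while (n > 0) {
            if (n % 2 == 0) {                       // square the transformation
                std::uint64_t p2 = p * p + q * q;
                std::uint64_t q2 = 2 * p * q + q * q;
                p = p2; q = q2;
                n /= 2;
            } else {                                 // apply it once
                std::uint64_t a2 = b * q + a * q + a * p;
                std::uint64_t b2 = b * p + a * q;
                a = a2; b = b2;
                --n;
            }
        }
        return b;                                    // b holds fib(n)
    }

    int main() {
        // Note: uint64_t overflows past fib(93); use a big-integer type
        // for anything larger.
        std::printf("%llu\n", static_cast<unsigned long long>(fib(90)));
        return 0;
    }

Swap uint64_t for a big-integer type and the same loop handles arbitrarily large n.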
What I like about this thread is the many light themes rather than dark.
Looking around the web you’d think those of us that want/need light themes are unfashionable or misguided or just the minority. Maybe we’re just less vocal.
Dark text on light background works better for people with astigmatism, and astigmatism is quite common.
This looks great, thanks for sharing it here! I’m keen to join but can’t make the first session due to other commitments.
Given that the reading material is clearly laid out, it should be possible to catch up, but are the meetings recorded or transcribed at all?
I’ve asked about this in chat. It is planned that the meetings will also be transmitted on Twitch, with recordings kept for 14 days.
Please encourage them to do that and to post the recording links.
My time zone means I can’t join live, but I’d like to follow along “offline”.
Working through something with recordings of discussions is strangely motivating and you actually feel involved, despite not being able to contribute.
Thank you!
For anyone else following along at home.
I found the twitch channel with recordings from the first meeting.
That’s cool and all, but knowing Mozilla, an about:config option called “legacyUserProfileCustomizations” is gonna disappear.
Related: In Firefox 90, they removed the “Compact” density option, which I kind of rely on to be comfortable with Firefox. They added an about:config option, “browser.compactmode.show”, to add it back. In Firefox 91, if you have the option enabled, they renamed the option from “Compact” to “Compact (not supported)”. I know it’s only a matter of time before they remove it entirely. And I’m kind of panicking about that, because I really, really don’t want more vertical space to be wasted by padding in the tab bar on small laptop screens. If anything, I find the “Compact” option too big. I wish Mozilla would stop changing things for the worse, and I wish Mozilla would stop taking away configuration options.
What is wrong with them? Every major release, they’ve taken away functionality that I depend on.
I feel the same. I even often joke that Mozilla is spying on me with the sole goal of knowing what features I rely on and yanking them away from me :).
Nothing is wrong with them. We just aren’t part of their main demographic target.
I absolutely dread every single firefox update for the same reason - something I rely on or have burned into hand and muscle memory gets altered or removed.
It feels completely hopeless to me as well because I can’t see another acceptable choice. I can’t support Google’s browser engine monopoly and every other browser I research has some other issue that makes me reject it in comparison.
It feels like abuse by endless paper cuts, unwanted and unnecessary changes forced on me with no realistic choice to opt out. These changes seem to be accelerating too and meanwhile firefox market share declines further and further.
The reason it was made an option is to slightly improve startup time since it won’t have to check for this file on the disk. Comparatively few people use it, which is hardly surprising since it’s always been a very hidden feature, so it kind of makes sense: if you’re going to manually create CSS files then toggling an about:config option is little trouble.
Apparently, the name “legacy” is in there “to avoid giving the impression that with this new preference we are adding a new customization feature”. I would be surprised if it was removed in the foreseeable future, because it’s actually not a lot of code/effort to support this.
That being said, instead of relegating userChrome to a “hidden feature”, it would seem to me that properly integrating support for these kinds of things in Firefox would give a lot more benefits. In many ways, almost everyone is a “power user” because a lot of folks – including non-technical people – spend entire days in the browser. For example, I disable the “close tab” button because I press it by accident somewhat frequently, which is quite annoying. While loads of people don’t have this problem, I suspect I’m not the only person with this small annoyance, considering some other browsers just offer this as a setting, but I am one of the comparatively small group of people who have the know-how to actually fix it in Firefox.
The technical architecture to do a lot of these things is right there and actually works quite well; it just doesn’t fit in the “there is one UX to satisfy everyone, and if you don’t like it then there’s something wrong with you”-mentality 🤷
All preference changes through about:config are officially unsupported and come with zero guarantees. This includes creating custom chrome folders.
There actually is a maintenance and complexity cost for keeping these things alive. We’ve done a lot of hardening lately that was cumbersome to pull off in light of those customizations. In essence, we are disabling some protections when we detect profile hackery. I want to repeat: despite our team working around and “acknowledging” the existence of those hacks, they are still unsupported and we can’t promise to always work around custom profile quirks.
The best way to be heard and change things in an open source project is to show up and help drive things. I know this isn’t easy for a big project like ours…
This used to be a longer comment but I edited it and now it only shows this note 😳. I was needlessly nasty. Sorry, man, rough day.
No harm done 👍
I, too, prefer the “Compact” theme. Is there still anything that can be done to keep it going forward?
I’ve heard this startup time justification before, but surely the additional hassle of implementing, testing, and documenting a new configuration parameter isn’t worth saving a single stat call on startup? It’s hard to imagine that even shows up in the profile.
If everything else has already been tightly optimized, a stat call hitting a spinning-rust drive could show up as a major bottleneck when profiling startup performance.
When I rebuild LLVM, ninja does a stat system call for every C++ source and header file on the disk, about 10,000 of them in total. If I have no changes, then it completes in so little time that it appears instantaneous.
If the cost of a stat system call is making a user-noticeable difference to start times, then you’re in an incredibly rare and happy place.
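Not from the thread, but if anyone wants to sanity-check the numbers themselves, here is a rough micro-benchmark sketch. It only measures the warm-cache cost of repeated status() calls on one path, so it says nothing about a first, cold hit on spinning rust; the path argument and call count are arbitrary.

    // Rough sketch: time N calls to std::filesystem::status on one path.
    // status() does roughly the work of a stat() system call; this measures
    // the warm-cache case only.
    #include <chrono>
    #include <cstdio>
    #include <filesystem>

    int main(int argc, char** argv) {
        const char* path = argc > 1 ? argv[1] : ".";   // any existing path
        constexpr int kCalls = 10000;
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kCalls; ++i) {
            std::filesystem::status(path);             // result deliberately ignored
        }
        auto elapsed = std::chrono::steady_clock::now() - start;
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
        std::printf("%d status() calls took %lld us\n", kCalls,
                    static_cast<long long>(us));
        return 0;
    }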
The terminology and lack of consistency mapping concepts to commands caused me so much confusion early on.
One tool I stumbled on that helped clear up a lot of confusion was NDP software’s Git Cheatsheet - no affiliation, just a grateful user.
I glanced at lucky-commit to see what it was doing:
Did you look at the commit history :)
Oh wow, that’s clever.
TIL about starship for prompts. Looks good, I will give that a try.
I used it when it was a regular zsh script. I got tired of having to re-configure it whenever updates broke my configuration as it changed or removed features I used. I’m not sure how stable it is now but I did enjoy it while it lasted.
I’ve been using it for a while and haven’t had any of the problems that the other commenter seemed to have. My setup is pretty simple though.
There was one significant backwards-incompatible change that I can recall, but it was trivial to address. And for my part, I’d rather have my software evolving than never releasing a change in case it breaks someone somewhere.
I used starship previously and something happened to make it grind to a halt. I moved to romkatv/powerlevel10k and have had no such issues since. I once saw a discussion around a plug-in being implemented which was rejected as it took 10ms to run!
Edit: found the discussion, it was 11.4ms and it was reworked rather than rejected, but hopefully you get the point I was trying to make
One aspect of my choice of Starship that doesn’t come through in this post is that I use the fish shell as my daily driver, but I use zsh and bash on remote hosts in various places. Having a consistent prompt on all three of those is a huge selling point, even if my prompt might take 50-70ms to render.
Don’t worry, not trying to invalidate your choices or anything. I’m sure it was something I did, otherwise the GitHub issues would be full of noise! Having a consistent prompt is a really solid pro for sure.
I didn’t think you were :D
I just figured it might help to have a bit of clarity on the rationale. Arguably I should edit the post with that :)
I’m curious why such a dichotomy? Are you required to use zsh/bash on the remote machines or is it a matter of convenience? I’m forced to use bash, but would gladly switch to zsh if I could install it…
For a few years now, my response to this question has been a counter-question:
Of course they’re neither. So it is with programming. You can use programming to do either science or art. But it’s just a tool.
There’s a cool question in the middle of the post. As I understand “ontological status”:
Following Peter Naur, the unsatisfactory answer for most programs is, “in the programmers’ heads,” with the increasing implication of incoherence for large teams of programmers. Very large programs stop being comprehensible in humanistic terms, and start to rather resemble egregores. Like gods, or bureaucracies.
But I think we can do better, retain more control. A better answer, for well-written programs, is “in the space of inputs accepted, as categorized into different regimes with relatively linear processing logic.” Personally I like to visualize this input space as tests. Haskell/OCaml and APL propose alternative approaches that also seem plausible.
I think I’d say you were somewhere near the mark with “egregore”, a cool word I just learned from your comment, but that the space of what a program “really is” extends not just to the minds of the people who write / maintain / administer it, but to the entire substrate it operates on, including physical hardware and the minds of its users and the shadowy, informal conceptual footprint it has therein. Much like the way that, say, a fictional / mythological / historical narrative exists in the often vast and highly ramified space between authors and readers and cultural context.
Not that there’s anything wrong with trying to delineate space-of-inputs-accepted and the like - just that we’re probably kidding ourselves if we think that we’re going to get to comprehensibility at all, for large things. Creating bounded spaces within which to reason and exercising extreme humility about the big pictures seems like as good as it’s ever really going to get, most days.
Very valid considerations. My answer to that is: if making it large gives up control over it, perhaps we shouldn’t make it large.
As a concrete example, one project I work on explores maintaining control over a full computing stack with the following limitations:
It’ll add features over time, but only so fast as I can figure out how to track – and communicate to others! – the input space. For example, it doesn’t have a mouse yet, because I don’t understand it and because I don’t want to just pull in a mouse driver without understanding it and having thorough unit tests (in machine code!) for it.
Basically I want to consider the possibility that humanity has been adopting computers way too aggressively and introducing lots of risks to all of society at large. I don’t want software to go from explosive adoption to another hamstrung and stiflingly regulated field, bureaucracy by other means. There’s too much promise here for that, even if we take a hundred years to slowly develop it.
Well said, even if I’m personally pretty sure the genie is out of the bottle at this point.
I don’t think I can really draw a clear line in my own mind between what bureaucracy is and what software is, but I welcome efforts to ameliorate and rate-limit the poisoning of things that seems to come with scaling software.
Uh, writing is definitely an art?
Not if I’m writing a grocery list.
hmm. I was initially unsure reading this, but I think I agree writing a grocery list is probably not an art. I’m not shocked there’s an edge case here, but it’s interesting to think about. I guess the programming equivalent would be code like ls or similar trivial cases.
It is by no means a rare edge case. If you assume all writing is art, that implies science is also art. Which is a reasonable and internally consistent interpretation of the terms, but not the frame of reference of the original question.
Scientific writing (such as journal articles) is certainly an art. Many good scientists are bad at the writing.
It sounds like you aren’t bothered by the original question. Which is great; it’s not a very interesting question. I’m personally more curious what you think about the second half of my first comment :)
It’s a skill for sure, but that doesn’t necessarily make it an art. If I look at my own writing (which isn’t scientific but is technical) then the amount of creativity that’s involved isn’t much more than, say, fixing my bike: it requires a bit at times, but mostly it’s just working at it.
It’s definitely a craft.
That’s been my opinion for a long time now. Like other crafts it isn’t automatically included or excluded from other human activities - art, science, commerce, hobbies …
If you accept that then the question posed isn’t very interesting.
I think the interesting question is whether it’s a craft or an engineering discipline.
Not necessarily. Designing a building can be an art, a science, or a combination of both. A friend of mine does ghost writing for non-fiction. Most of it follows a formula and neither of us would consider it art.
Words are a tool used to communicate, the same way verbal speech is.
But surely in ghostwriting non-fiction you still have choice in what words you use to convey the concepts?
I’ve done a fair amount of technical writing for hire. Art enters the picture somewhere, to be sure, but where you’re writing to strict house standards and pre-set outlines it does have a way of driving most of the feeling of creative expression out of things. And I suppose that gets at how I understand “art” - a feeling as much as anything.
Does anyone here use straight? I never got the advantage, compared to just cloning the repositories yourself (or as submodules). Plus you have to run random code from a github repository.
You run random code anyway. Even without straight? Or do you audit all the packages and their dependencies before you use them?
I minimize the number of packages I use and usually skim through the source code. Automatically downloading and evaluating code from a repository seems more dangerous, but I do get your point.
No I haven’t felt the need.
I (ab)used Cask for a long time because it was a nice declarative way of configuring packages in ~/.emacs.d. Since I’ve returned to using package.el, and it handles dependencies much better now, I just rely on it maintaining package-selected-packages; if needed I can just call (package-install-selected-packages), but I don’t have more sophisticated needs.
Racket:
Also using fold, with mnm.l:
Racket’s group-by is wonderful but I usually want to group consecutive equal items into clumps as they arise rather than a single monolithic group.
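Here is a rough sketch of the difference, written in C++ for illustration rather than Racket (the clump helper is just a made-up name, not from any library): a new group starts whenever the value changes, so a later run of an already-seen element stays a separate group instead of being merged by key.

    // "Clumping" consecutive equal items, as opposed to grouping all equal
    // items together regardless of position.
    #include <iostream>
    #include <vector>

    template <typename T>
    std::vector<std::vector<T>> clump(const std::vector<T>& xs) {
        std::vector<std::vector<T>> groups;
        for (const T& x : xs) {
            if (groups.empty() || groups.back().back() != x)
                groups.push_back({x});        // value changed: open a new clump
            else
                groups.back().push_back(x);   // same value: extend current clump
        }
        return groups;
    }

    int main() {
        // {1,1,2,2,2,1,3,3} clumps to {1,1} {2,2,2} {1} {3,3}; the second run
        // of 1s stays its own group.
        for (const auto& g : clump(std::vector<int>{1, 1, 2, 2, 2, 1, 3, 3})) {
            for (int v : g) std::cout << v << ' ';
            std::cout << '\n';
        }
        return 0;
    }

A key-based group-by over the same input would instead return three groups, one per distinct value.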
I didn’t know youtube had an RSS feed as well, cool!
I think you’ll find many RSS users here at lobste.rs, it’s really handy (even though I subscribe using email and not RSS ;)
They have 3 feed types that I’m aware of:
You can usually get the appropriate slug from the url, but it’s not always clear what the channel id is, especially since youtube introduced urls like https://www.youtube.com/c/stanford for Stanford University. You need to scrape the page to find the channel id.
I should add that their feeds are limited to the most recent additions so don’t expect to see a full history after you subscribe.
RSS is the only way I can handle Youtube subscriptions. Being able to watch the video from my feed reader instead of the actual site, which is designed to get you to watch a ton of semi related videos, is the best.
I see a lot of comments (not just here but everywhere I’ve seen discussions) about mirrors, which will address the immediate problem of the source code not being available.
The bigger issue though is how to host/maintain a distributed/federated project, not just hosting source code but discussions, bug reports, issue tracking etc. that currently github adds on top of git.
The current version of youtube-dl will continue to work until youtube changes something and it stops working. It also worked with a massive number of other streaming sites.
For example, the api key for one quite popular streaming service had to be changed from time to time, presumably as the service began to notice and invalidated the key.
That’s what I’m more concerned about.
Email. And there are plenty of privacy-friendly email providers.
While the tools don’t currently have a UX as good as GitHub’s, it is definitely possible to develop a project in a distributed way.
A git repository is already distributed, and tools like git-ipfs-rehost can be used to make the repository easily available. Issues/bug reports can be stored in the git repo itself using git-bug. Patches can be sent over email. Pull requests can be made in an informal way over any kind of communication channel, for example Matrix.org.
Self-hosted git accessible via Tor Onion Service. HardenedBSD uses Gitea, which manages issues, wiki, and code storage. All of HardenedBSD’s public infrastructure is made available through Tor as well as through normal means.
I still use RSS
Me too. I love it.
One of the main benefits is having the title and (usually) an overview/description or excerpt. It makes it so much easier to prioritise and filter content before (and often instead of) having to deal with the modern web experience directly.
Me too. Instead of “still” I would also say that I use RSS more than ever. All the news sources I read I subscribe to via RSS, Atom or JSON Feed.