I think it’s risible to state something grandiose like: “The project structure is designed with independent packages in mind, according to hexagonal architecture, and targeted to provide […] benefits” when your structure has 5 folders and contains just README files with no Go code, not even an example.
And I see no reason why the pkg folder exists. Making the default way to start a package the “enterprisey” way is a bad idea in my opinion.
My limited defense of pkg is that you end up with a lot of stuff in the root of a project anyway, so for an application, it can be nice to shove things down one level. But it’s probably not necessary for most applications.
I really enjoy using go-chi. It’s straightforward but has options for complex use cases. It uses the handler from the standard library, so it integrates well with everything. It also comes with middleware, which is easy enough to use, or to crib off of to write your own.
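For anyone who hasn’t tried it, a minimal sketch of what that integration looks like (the route and handler here are made up for illustration):

    package main

    import (
        "net/http"

        "github.com/go-chi/chi/v5"
        "github.com/go-chi/chi/v5/middleware"
    )

    func main() {
        r := chi.NewRouter()
        // Bundled middleware; a custom one is just a func(http.Handler) http.Handler.
        r.Use(middleware.Logger)

        // Handlers are plain net/http handlers, so anything built on the
        // standard library plugs in unchanged.
        r.Get("/hello", func(w http.ResponseWriter, req *http.Request) {
            w.Write([]byte("hello"))
        })

        http.ListenAndServe(":8080", r)
    }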
As a long time SPA apologist and licensed reverend of the church of tiny backends, I find this genuinely difficult to follow. What is “hypermedia” even? A tree with some implied semantics? How is that different than any other data? Why should I be constructing it on the backend (that place that knows comparatively nothing about the client)?
The back button has been solved for over a decade.
The complexity of “the backend has to care what things look like” is also enormous.
There’s talk of longevity and churn, but I’m pretty sure if I wrote hx-target=... in 2012, I would not get the desired effect.
I haven’t managed state on a server beyond session cookies and auth in ages.
I saw a computer from 20 years ago use the internet just fine last weekend, and it needed some horrifying reverse proxy magic to make a secure connection, so “I’m using HTTPS” and “I’m supporting old hardware/OSs” is a contradiction anyway because decrypting HTTPS is more computationally intense than doom, and it’s also a moving target that we don’t get to pin. The end result is that if you can securely exchange information with a browser, it’s not ancient enough to need more than a few servings of polyfills to run a reasonably modern app.
React is the currently popular thing that makes stuff go vroom on the screen, so of course a lot of people make it more complicated than it needs to be, but like… remember early 2000s PHP CMSs? Those weren’t better, and if you did those wrong it was a security issue. At least a poorly written react UI can’t introduce a SQL injection.
To each their own, but I don’t get it 🤷♀️. I also don’t get how people end up with JS blobs bigger than a geocities rainbow divider gif, so maybe I’m just a loony.
Anything can be done wrong, and the fact that popular tools are used wrong often and obviously seems like a statistical inevitability, not a reason to try to popularize something different.
not a reason to try to popularize something different.
Why would you prevent people from popularizing anything that actually solves some problems? Isn’t having choice a good thing? I’m the author of this talk about a React->htmx move, and I’m completely freaked out by how many people have seen my talk, as if it was a major relief for the industry. I am also amazed, when hiring young developers, by how most of them don’t even know that sending HTML from the server is possible. Javascript-first web UI tools have become so hegemonic that we need to remind people that they were invented to tackle certain kinds of issues, and come with costs and trade-offs that some (many? most?) projects don’t have to bear. And that another way is possible.
Anything can be done wrong, and the fact that popular tools are used wrong often and obviously seems like a statistical inevitability,
Probably the statistics are way higher for technologies that carry a lot of complexity. Like I said in my talk, it’s very easy for JS programmers to feel overwhelmed by the complexity of their stack. Many companies have to pay for a very experienced developer, or several of them, and it’s becoming an impossible economic equation.
The complexity of “the backend has to care what things look like” is also enormous.
With htmx or other similar technologies, “what things look like” is obviously managed in the browser: that’s where CSS and JS run. Server-side web frameworks have been amazingly well equipped, for more than a decade now, to generate HTML pages and fragments very easily and serve them at high speed to the browser without the need for a JS intermediary.
young developers … most of them don’t even know that sending HTML from the server is possible
I am shocked and stunned every single time I talk to someone who doesn’t know this. And if they are interested, I explain a little bit about how the web server can return any data, not just json.
Hypermedia encapsulates both current object state and valid operations on it in one partially machine-readable and partially user-readable structure.
A lobsters page, for example, lists the link and comments (the current state) and has a definition of how to comment: you can type in text and post it to the server. After you do that, the system replies with the updated state and a possibly changed set of valid operations. These are partially machine-readable - a generic program that understands HTML* can see it wants text to post to a particular server endpoint - and partially user-readable, with layout and English text describing what it means and what it does.
Notice that this is all about information the backend application knows: current data state and possible operations on it. It really has nothing to do with the client… which is part of why, when done well, it works on such a wide variety of clients.
hypermedia doesn’t have to be html either, but that’s the most common standard
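To make the lobsters example concrete, here is a hedged sketch in Go (the store, template, and route are invented for illustration, not taken from any project discussed here): the server renders the current state and the valid operation together as one HTML fragment, which is all a hypermedia client, whether a plain browser or something like htmx, needs.

    package main

    import (
        "html/template"
        "net/http"
    )

    // Hypothetical in-memory store; a real app would use a database.
    var comments []string

    var page = template.Must(template.New("comments").Parse(`
    <ul>{{range .}}<li>{{.}}</li>{{end}}</ul>
    <!-- the hypermedia control: it describes the one valid operation, posting a comment -->
    <form method="post" action="/comments">
      <textarea name="body"></textarea>
      <button>Post</button>
    </form>`))

    func commentsHandler(w http.ResponseWriter, r *http.Request) {
        if r.Method == http.MethodPost {
            comments = append(comments, r.FormValue("body"))
        }
        // Reply with the updated state plus the operations that are still valid.
        page.Execute(w, comments)
    }

    func main() {
        http.HandleFunc("/comments", commentsHandler)
        http.ListenAndServe(":8080", nil)
    }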
I disagree. The web is a place for art and graphic design as much as anything else. Websites are allowed to be pretty.
That extra flair on your lowercase “t” doesn’t help the user better interact with your features or UI. It just slows them down.
Anecdotal at best. Misleading at worst.
The user wants to complete a task - not look at a pretty poster
You are not all users. I, for one, do not enjoy using websites that don’t look nice.
many designers design more for their own ego and portfolio rather than the end-user
Again, anecdotal (though it does seem plausible).
I find myself agreeing with all the other points brought up in the article (system fonts are usually good enough, consistency across platforms isn’t essential, performance). I don’t have any extra fonts used on my website (except for where Katex needs to be used) and I think it’s fine (in most cases. I’ve seen the default font render badly on some devices and it was a little sad).
I still disagree about “websites are tools and nothing else”. I don’t want my website to be a tool. I want it to be art. I’ve poured in time and effort and money and my soul into what I’ve made. I do consider it art. I consider it a statement. And if I have to make a 35kb request to add in a specific typeface, then I’ll do it to make it reach my vision.
That extra flair on your lowercase “t” doesn’t help the user better interact with your features or UI. It just slows them down.
Anecdotal at best. Misleading at worst.
That was obviously not the real question though: the point is, do web fonts help users in any way, compared to widely available system fonts? My guess is that the difference is small enough to be hard to even detect.
As a user, they make me happy and I tend to be more engaged with the content (when used effectively), so yes I find them helpful. I don’t want to live in a world without variety or freedom of expression. As long as there are ways to turn them off, surely everyone can be happy.
We live in a world full of colour. I don’t like this idea of the hypothetical ‘user’ who ‘just wants to get things done’ and has no appreciation for the small pleasures in life. I don’t have anything against anyone who feels that way of course (it’s completely valid). Just this generalisation of ‘the user’.
It’s impossible not to detect my own instinctive, positive reaction to a nice web design, and typography is a big part of that. I am quite certain I’m not alone in that feeling. That enjoyment is “helpful” enough for me to feel strongly that web fonts are here to stay, and that’s a good thing. There’s also plenty of UX data about what typography communicates to users, even if those findings aren’t normally presented in terms of “helping.”
A poorly chosen font can be hard to read in a certain context, but that’s a far cry from “all custom web fonts are bad for usability” and I haven’t seen any evidence to back up that claim. So given there are obvious positives I think the question is really what harm they actually do.
Now obviously there’s a difference between a good web font and a crappy system font. But are all system fonts crappy? I haven’t checked, but don’t we already have a wide enough selection of good fonts widely installed on users’ systems already? Isn’t the difference between those good fonts and (presumably) even better web fonts less noticeable? Surely we’re past the age of Arial and Times New Roman by now?
I mean, I guess it won’t be as good as the best web font someone would choose for a given application, but if anything is “close enough”, that could be it.
Obviously fonts are a subset of typography (didn’t mean to imply they are the same), but they are absolutely central to it. And I didn’t say that system fonts are all crappy. My argument doesn’t rely on that premise, and anyway, I like system fonts! I think that designing within constraints can foster creativity! I just don’t think we should impose those constraints on everyone. At least not without a lot more evidence of actual harm than I’ve seen presented.
And although we are definitely past the New Roman Times ;) I don’t think that means that a striking use of a good font is any less effective on the web.
I was expecting to read about how the cheap hardware with open source firmware can be set up with all the features of expensive mesh networks, but that’s not what it was.
I don’t like expensive mesh networks because (a) expensive (b) tend to require special proprietary control systems (c) which often want logins and other privacy-violators. So I buy $40 wifi routers that are known to work well with DD-WRT/OpenWRT and set them up as follows:
all wifi radios set the same SSID
turn off 2.4GHz on the AP nearest the kitchen (microwave fun)
channels are set by hand for minimum overlap
NAT, firewalling, DHCP and DNS are turned off
Cat5e runs to the nearest switch port (three switches: office, den, living room, all interconnected)
Five of these cover the house and the back yard nicely. No meshing. No outside-the-house dependencies except power.
Recent versions support 802.11 r, k and v, but not on all radios. Support is necessary on both ends. If you aren’t active while moving from one ‘best’ station area to another, none of them are needed.
TP-Link Archer C7 with OpenWRT is great.
A home server running OpenWRT in a VM plus dumb access points from Mikrotik is fun and can easily cover multiple rooms, an area, or a whole house.
AVM’s FritzBoxes come with a DSL/cable/LTE modem or fiber built in; they’re quick and stable, but expensive.
That sounds pretty great. Do you have a wiki/post breaking all of that down? Or at least have solid suggestions for cheap routers? Sounds very interesting.
Most of my routers are TP-Link Archer C7, which are routinely on sale in the US for $45 each. If I see a sale on some new plausible router, my criteria are:
at least one gigabit ethernet port, preferably 4.
one for uplink to a switch, the others for local devices that I might want to position there
802.11ac and n on 2.4 and 5GHz bands
the most usable protocols as of early 2023 – machines that were new in 2010 onwards use n, machines new in 2015 onwards use ac. ax has been out for almost 4 years and is still uncommon except on the newest phones and laptops.
known good firmware from dd-wrt or openwrt in the most recent stable release
It’s reasonable to get everything set up well on machines that don’t have open source firmware, even if they don’t support an AP mode, by carefully turning off all the things I wrote about before and avoiding the ‘WAN’ port.
I don’t trust any of these things as firewalls for outside connections; I use them strictly as access points.
[Speaking with no hat on, and note I am not at Google anymore]
Any origin can ask to be excluded from automatic refreshes. SourceHut is aware of this, but has not elected to do so. That would stop all automated traffic from the mirror, and SourceHut could send a simple HTTP GET to proxy.golang.org for new tags if it wished not to wait for users to request new versions. That would have caused no disruption.
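For reference, the mirror speaks the documented GOPROXY protocol over plain HTTP, so the “simple HTTP GET” really is about this much work. A sketch, with a made-up module path and tag:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // GOPROXY protocol endpoints: .../@v/list returns known versions,
        // .../@v/<version>.info returns metadata for one version. Asking the
        // mirror about a tag it hasn't cached yet should prompt it to fetch
        // that tag from the origin. Module path and tag are illustrative.
        base := "https://proxy.golang.org/git.sr.ht/~someone/somerepo"
        resp, err := http.Get(base + "/@v/v1.2.3.info")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }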
This is definitely a manual mechanism, but I understand why the team has not built an automated process for something that was requested a total of two times, AFAIK. Even if this process was automated, relying on the general robots.txt feels inappropriate to me, given this is not web traffic, so it would still require an explicit change to signal a preference to the Go Modules Mirror, taking about as much effort from origin operators as opening an issue or sending an email.
Everyone can make their own assessment of what is a reasonable default and what counts as a DoS (and they are welcome to opt-out of any traffic), but note that 4GB per day is 0.3704 Mbps.
I don’t have access to the internal mirror architecture and I don’t remember it well, nor would I comment on it if I did, but I’ll mention that claims like a single repository being fetched “over 100 times per hour” sound unlikely and incompatible with other public claims on the public issue tracker, unless those repositories host multiple modules. Likewise, it can be easily experimentally verified that fetchers don’t in fact operate without coordination.
Sounds like it’s 4 GB per day per module, and presumably there are a lot of modules.
The more I think about it, the more outrageous it seems. Google’s a giant company with piles of cash, and they’re cutting corners and pushing work (and costs) off to unrelated small and tiny projects?
They really expect people with no knowledge of Go whatsoever (Git hosting providers) will magically know to visit an obscure GitHub issue and request to be excluded from this potential DoS?
Why is the process so secretive and obscure? Why not make the proxy opt-in for both users and origins? As a user, I don’t want my requests (no, not even my Go module selection) going to an adware/spyware company.
It’s a question of reliability. Go apps import URLs. Imagine a large Go app that depends on 32 websites. Even if those websites are 99.9% reliable (and very few are) then (1-.999**32)*100 means there’s a 3.15% chance your build will fail. I think companies like creating these kinds of problems, since the only solution ends up yielding a treasure trove of business intelligence. The CIA loves funding things like package managers. However it becomes harder to come across looking like the good guys when you get lazy writing the backend and shaft service operators, who not only have to pay enormous egress bandwidth fees, but are also denied any visibility into who and how many people their resources are actually supporting.
Go apps import URLs. Imagine a large Go app that depends on 32 websites. Even if those websites are 99.9% reliable (and very few are) then (1-.999**32)*100 means there’s a 3.15% chance your build will fail.
I do hope they do the sane thing and only try to download packages when you mash the update button, instead of every time you do yet another debug build? Having updates fail from time to time is annoying for sure, but it’s not a “sorry boss, can’t test anything today, build is failing because left pad is down” kind of hard blocker.
Ah… well I remember our CI at work failing every so often because of a random network problem. Often restarting it was enough. But damn was this annoying. All external dependencies should be cached and locked in some way, so the CI provides a stable, reliable environment.
For instance, CI shouldn’t have to build a brand new Docker image or equivalent each time it does its thing. It should instead depend on a standard image with the dependencies we need that everyone uses. Only when we update those external dependencies should the image be refreshed.
I have a lot of sympathy with Google here. I am using vcpkg for some personal projects and hit a problem last year where the canonical source of the libxml2 sources (which I needed as an indirect dependency of something else) was down. Unlike go and the FreeBSD ports infrastructure, vcpkg does not maintain a cache of the distribution files and so it was impossible to build my local project until I found a random mirror of the libxml2 tarball that had the right SHA hash and manually downloaded it.
That said, 4 GiB/day/repo sounds excessive. I’d expect that the update should need to sync only when it sees a new revision and even if it’s doing a full git clone rather than an update, that’s a surprising amount of traffic.
Not considering an automated mirror talking HTTP as “web traffic”, and thus something that should respect robots.txt, is definitely a take. And suggesting that “any origin” write a custom integration to work around Go’s abuse of the git protocol? Cool, put the work on others.
And according to the blog post, the proxy didn’t provide a user agent until prompted by sr.ht. That kind of poor behaviour makes it hard to open issues or send emails.
Moreover, I don’t think the blog post claimed 4GB/day is a DoS. It said a single module could produce that much traffic. It said the total traffic was 70% of their load.
No empathy for organisations that aren’t operating at Google scale?
Not considering an automated mirror talking HTTP as “web traffic”, and thus something that should respect robots.txt, is definitely a take.
No, I am saying that looking at the Crawl-delay clause of a robots.txt which is probably 1 or 10 (e.g. https://lobste.rs/robots.txt) is malicious compliance at best, since one git clone per second is probably not what the origin meant. Please don’t flame based on the least charitable interpretation, there’s already a lot of that around this issue.
For what it’s worth, 1 clone per second would still probably be less than what Google is currently sending them. Their metrics are open, and as you can see over the last day they have served about 2.2 clones per second, and if we assume that 70% of clones is from Google, it comes out to roughly 1.6 clones per second.
I think it’s pretty obvious to any bystander that SourceHut has requested a stop to the automatic refreshes. The phrase “patrician and sadistic” comes to mind when I think about this situation.
Sure, filling out Google’s form legitimizes Google’s actions up to that point. Nonetheless, there was a clear request to stop the excess traffic, and we should not ignore that request simply because it did not fit Google’s bureaucracy.
I appreciate your position, but I think it’s something of a beware-of-the-leopard situation; it’s quite easy to stand atop a bureaucracy and disclaim responsibility for its actions. We shouldn’t stoop to victim-blaming, even when the victims are not personable.
I haven’t taken a position. I’m stating that your statement was factually incorrect. You said that it is “pretty obvious” that they requested something when the exact opposite is true, and I wanted to correct the record.
You are arguing that they did not fill out the Google-provided form. The person you’re arguing with didn’t say they did; they said they requested that Google stop doing the thing.
They did not request that Google stops doing the thing. There is no form to fill out. Literally stating “please stop the automatic refreshes” would be enough. They explicitly want Google to continue doing the thing but at a reasonable rate.
They explicitly want Google to continue doing the thing but at a reasonable rate.
Which in my opinion is the only reasonable course of action. Right now Google is imposing a lazy, harmful, and monopolistic dilemma: either suck up the unnecessary traffic and pay for this wasted bandwidth & server power (the default), or seriously hurt your ability to provide Go packages. That’s a false dichotomy, Google can do better. They just won’t.
Don’t get me wrong, I’m not blaming any single person in the Go team here, I have no idea what environment they are living in, and what orders they might be receiving. The result nevertheless makes the world a worse place.
It’s also a classic: big internet companies give us the same crap about email and spam filtering, where their technical choices just so happen to seriously hamper the effectiveness of small or personal email providers. They have lots of plausible reasons for these choices, but the result is the same: if you want your email to get through, you often need their services. How convenient.
That’s a false dichotomy, Google can do better. They just won’t.
You may disagree with the prioritization, but they have made progress and will continue to do so. Saying “they just won’t” is hyperbolic and false.
The result nevertheless makes the world a worse place.
You have stated that you don’t know what “environment they are living in, and what orders they might be receiving”. What if instead of working on this issue, they worked on something else that, if left unhandled, would have made the world an even worse place? This statement is indicative of your characteristic bad faith when discussing anything about Go.
I don’t think everything that the Go developers have done is correct, or that every language decision Go has made is correct, but it’s important to root those judgements in facts and reality instead of uncharitable interpretations and fiction.
Because you seem inclined to argue in bad faith about Go both here and in past discussions we’ve had [1], I think any further discussion on this is going to fall on deaf ears, so I won’t be engaging any further on this topic with you.
–
[1] here you realize you don’t have very good knowledge of how Go works (commendable!) and later here you continue to claim knowledge of mistakes they made without having full knowledge of what they even did.
My, the mere fact that you remember my only significant Go thread on Lobsters is surprisingly informative. But… aren’t you going a little off topic here? This is a software distribution and networking issue, nothing to do with the language itself.
You have stated that you don’t know what “environment they are living in, and what orders they might be receiving”. What if instead of working on this issue, they worked on something else that, if left unhandled, would have made the world an even worse place?
That’s a nice fully general counterargument you have there: no matter what fix or feature I request, you can always say maybe something else should take precedence. Bonus points for quoting me out of context.
Now in that quote, “they” is referring to Google’s employees, not Google itself. I’ve seen enough good people working in toxic environments to make the difference between a faceless company and the actual people working there. This was just me trying to criticise Google itself, not any specific person or team.
As for your actual argument, Google isn’t exactly resourceless. They can spend money and hire people, so opportunity costs are hardly a thing for them. Moreover, had they cared about bandwidth from the start, they could have planned a decent architecture up front and spent even less time on this.
But all this is weaselling around the core issue: Google is wasting other people’s bandwidth, and they ought to have stopped two years ago. When you’re not a superhuman AI with all self-serving biases programmed out of you, you don’t get to play the “greater good” card without a damn good specific reason. We humans need ethics.
If you were calling someone several times a day and they said “Hey. Don’t call me several times a day. Call me less often. Maybe weekly,” but you persisted in calling them multiple times a day, it would not be a reasonable defense to say “They don’t want me to not call them, they only want me to not call them the amount I am calling them, which I will continue to do.”
But like, also you should know better than to bother people like that. They shouldn’t need to ask. It is not reasonable to assume a lack of confrontation is acquiescence to poor behavior. Quit bothering people.
In your hypothetical the caller then said “Sorry for calling you so often. Would you like me to stop calling you entirely while I figure out how to call you less?” and the response was “No, I want you to continue to call me while you figure out how to call me less.”
That is materially different than a request to stop all calls.
No one is arguing that the request to be called less is unreasonable. I am pointing out that the option to have no calls at all was provided and that they explicitly rejected for whatever reasons they decided. This is not a value judgement on either party, but a clarification of facts.
Don’t ignore the fact that those calls (to continue the analogy) actually contained important information. The way I understand it, being left out of the refresh service significantly hurts the ability of the provider to serve Go packages. It’s really a choice between several calls a day and a job opportunity once a fortnight or so; or no call at all and missing out.
I appreciate where you’re coming from with this. Having VERY recently separated from a MegaCorp this is exactly the logic a bizdev person deciding what work gets funded and what doesn’t would use in this situation.
But again I ask - is this level of dependence on a corporate owner healthy for a programming language with such massively wide adoption?
It would be interesting to do a detailed compare and contrast between the two.
I can’t think of any names right now but I’m sure I’ve heard of cases in the past where some project officially declaring itself halted has been sufficient impetus for a fork to emerge with new maintainers, even when a call for maintainers hadn’t succeeded before.
I think there are some libraries that are better than some of the gorilla components, but there are some gorilla libraries that I haven’t seen many other options for. A lot of things use gorilla sessions. I found another session library I like the design of, but the docs are really unhelpful/nonexistent.
That is to say, I can see some parts of gorilla getting forked.
I’ve also seen a situation where declaring a project halted has caused other people to step up as maintainers, avoiding the need for a fork where the original developers are willing to bless the new owners.
I’m using this in one of my projects and it works well so I guess there’s no urgency but I should look for alternatives. Does anyone have any recommendations?
My go-to’s for the past few projects have been Chi and, more recently, Echo.
Mostly Echo because it was the default for oapi-codegen whereas Chi is what I use when I’m not generating code. Both have worked well and seem actively maintained. Echo’s backing company, Labstack, does seem to have a dead homepage so… be wary.
I never got into vim or emacs and I encourage anyone who feels that I am already wrong to keep reading.
Why do I prefer bloated IDEs like anything Jetbrains makes? Because they just work.
If I get a junior dev on my project, I can give them PyCharm and they’ll have syntax highlighting, code completion, and type hinting with zero configuration. They won’t have to google how to save a file or exit the editor. They can set a breakpoint, right-click the file and debug it with zero configuration. Wanna know what arguments go to that function? Press ctrl+p. Wanna go to the function definition even if that function comes from a library? Press ctrl+b. And it doesn’t matter if we’re working on Python, C, Rust, Go, or anything else — the buttons and features are all there every time.
I just don’t see the benefit of the other side. Sure I could spend weeks configuring dot files and installing supplemental language servers and documenting all of it, but I can’t give that to a junior. I now also have to maintain this system of dependencies just to write code.
Working on new systems is also difficult. What about offline systems? I can transfer a single CLion binary and it has everything bundled and it works.
I’m missing something clearly, but I just don’t get the other side.
I would never deny anyone a tool that they find useful and productive. And for myself I do not find all that configuring and setup productive in and of itself, luckily I don’t have to do it all that often. What the configuring does offer me is the power to precisely configure my environment. To create tools that help me be productive.
Recently I used Helix Editor for a few coding sessions, and the subtle differences from Neovim killed my productivity. It made me realize that it is all the little things that I use that help me code, and edit, faster. I was able to configure some shortcuts to act like vim, but not enough to make it useful.
The draw for me, to Helix, is that everything is built in, and I don’t have to spend time knowing what lspconfig is and why I need it if LSP is built into Neovim. Which is the same draw as IDEs; however, IDEs are take-it-or-leave-it and don’t allow for the precise configuration that vim allows. At least to the best of my knowledge.
Why do I prefer bloated IDEs like anything Jetbrains makes? Because they just work.
They are amazingly good out of the box 99% of the time… but when things go bad, they go really bad. I got to fight a lot with PyCharm vs. 1+ million lines of Python 2.7. The codebase won. PyCharm could never really work properly in that very odd, customized mess of an environment.
If I get a junior dev …
I have worked with hundreds of juniors, and you know what I don’t do? Try to shove my personal environment onto them. If they don’t have one already, I generally recommend Jetbrains or VSCode.
Experts and junior developers often use very different tools and workflows. A junior barely knows how they want to work yet, let alone would crave the power of having a full lisp eval bound to M-:.
As you pointed out, learning Jetbrains IDEs is simple and well-documented, which means I can still show others how to use it while using a more adept setup personally.
I just don’t see the benefit of the other side.
Flexibility, customization, alternative workflows, advanced automation and integration, and in the case of emacs best in industry accessibility support – putting to shame the Jetbrains tools (which I am a $25 a month subscriber to). I know exactly how I want to work…
Sure I could spend weeks configuring dot files and installing supplemental language servers and documenting all of it,
It doesn’t take weeks, it shouldn’t take days unless you fall off the rails. It is a slow evolution, you don’t build a new workflow all at once, it comes in drips and drabs, you find something that bothers you, you fix it. Rinse and repeat for a few years and you got something exceedingly well built for you.
but I can’t give that to a junior.
Again, that is a very strange thing to optimize for – as you said, you can give a junior a link to jetbrains and it works out of the box, why constrain yourself to the worst developer on your team? That isn’t meant as an insult to them just – they are a brand new developer. Then you go on to speak of basic use of the tools and how they won’t have to google for this and that… again, I don’t think anyone is in favor of force-feeding junior developers emacs or vim. Most of the time, I am convincing them NOT to try to copy my environment, as it is mine and suited to me.
Working on new systems is also difficult. What about offline systems? I can transfer a single CLion binary and it has everything bundled and it works.
Most users of vim or emacs have fairly well defined solutions for setting up their environments, emacs and vim dwarf Jetbrains IDEs in portability. Look at the list of ancient/odd systems they are actively running on…
I have a somewhat kludgy solution compared to lots of others, and I am set up and running in under 6 minutes on most systems, even if I only have SSH access to them.
I’m missing something clearly, but I just don’t get the other side.
Keep in mind, most of the vim / emacs users I know are also able to work perfectly fine in Jetbrains, as you repeatedly pointed out – it is easy, consistent, and ready to go. But they choose not to, you on the other hand have never even tried the other side yet are willing to fairly aggressively discount its utility. Who knows, come take a vacation in alternative editors land (Kakoune, Emacs, Neovim), you might like it so much you stay.
The two most useful and time-saving features I use with FairMail were not in K-9 when I left, and were not planned either. I don’t know if they are there now.
I absolutely love the ability to Trash or Archive a message from the notification pull down. Also, Trash or Archive in the message list with a single swipe.
These two features have greatly improved my experience with email, and increased the speed with which I act on my email.
The “swipe to archive or delete” UX is fantastic! I don’t recall where I first encountered it, probably Gmail for Android a long time ago, but it was a game changer for me as well!
Honestly, for me the big thing about Arch isn’t a lack of “stability”, it’s more the number of sharp edges to cut yourself on.
For example, the author mentioned that the longest they’ve gone without a system update is 9 months. Now, the standard way to update an Arch system is pacman -Syu, but this won’t work if you haven’t updated in 9 months – the package signing keys (?) would have changed and the servers would have stopped keeping old packages, so what you instead want to do is pacman -Sy archlinux-keyring && pacman -Su.
There’s a page on the ArchWiki telling you about this, but you wouldn’t find it until after you run a system update and it fails with some cryptic errors about gpg. It also doesn’t help that pacman -Sy <packagename> is said to be an unsupported operation on the wiki otherwise, so you wouldn’t think to do it yourself, and might even hesitate if someone on a forum tells you to do it. Any other package manager would just… take care of this.
It’s little things like this that make me not want to use Arch, and what I think gives it a reputation for instability - it seems to break all the time, but that’s not actually instability that’s just The Arch Way, as you can clearly read halfway down this random wiki page.
If you’re worried about sharp edges like that, then yeah, you probably don’t want to deal with Arch. For someone who uses the official install guide, though, it should be pretty clear that they exist. You hand-prepare the system, and then install every package you want from there. It’s quite a bit different from a distro that provides a simple out-of-the-box install. (I’m ignoring the projects that try to simplify the install for Arch here.)
It’s also a rolling release. Sure, you could go for 9 months without updating but with a rolling a release that’s a long time. You’ll likely be upgrading most of your installed software in a single jump by doing so. That’s not bad, per se, but would be mildly concerning to me. There’s no clean rollback mechanism in Arch so this presents an extra level of caution.
I think the perception of a lack of stability does come from the Arch way, but from my experience it’s usually down to changes in the upstream software being packaged and clearly nothing that the distro is adding. It seems obvious to me that if you’re pulling in newer versions of software constantly you will have less stability by design. There’s real benefit in distros that take snapshots when it comes to predictability and thus stability.
I use Arch on exactly one system, my laptop/workstation, and I’m quite happy with it there. I get easy access to updated packages, and through the AUR a wide variety of software not officially packaged. It’s perfect for what I want on this machine and lets me customize my desktop environment exactly how I want. Doing the same with Debian and Ubuntu was much more effort on my part.
I wouldn’t use Arch on a server, mostly because I don’t want all the package churn.
It’s also a rolling release. Sure, you could go for 9 months without updating but with a rolling a release that’s a long time. You’ll likely be upgrading most of your installed software in a single jump by doing so. That’s not bad, per se, but would be mildly concerning to me. There’s no clean rollback mechanism in Arch so this presents an extra level of caution.
That’s actually why I stopped using arch: I’ve got some devices that I won’t use for 6-12 months, but then I’ll start to use them daily again. And turns out, if you do that, pacman breaks your whole system, you’ll have to hunt down package archives and manually untangle that mess.
I wish there was a happy medium between Arch and Debian. Something relatively up to date but also with an eye for stability, and also minimalist when it comes to the default install.
I think that’s a bit of an exaggeration, when compared to other Linux distros where upgrades are always that scary thing.
Also the keyring stuff… I’m not sure when that was introduced. So it might have been before that?
I’ve done pretty long jumps on an Arch Linux system on a netbook for my mother, who isn’t really good with computers and technology in general. Just a few buttons on the side in xfce worked really well, until the web became too demanding for first-gen Intel Atoms. But I updated it quite a bit later, looking for some files or something. I don’t remember that being a big issue. But I do remember being surprised that it wasn’t.
I actually had many more problems, like a huge amount of them with apt for example.
Worse, of course, are package managers trying to be smart. If there’s one thing that I would never want to be smart, it’s a package manager. I haven’t seen an instance yet where that didn’t backfire.
Upgrades have been relatively fear-free for me on both Ubuntu and Fedora, though that may be a recent thing, and due to the fact that my systems don’t stray too far from a “stock” install.
One thing I will give Arch props for is that it’s super easy to build a lightweight system for low-end devices, as you mentioned. Currently my only Arch device is an Intel Core m5 laptop, because everything else chugs on it.
Have you tried Alpine? It’s really shaping up to be a decent general purpose distro, but provides snapshot stability. It’s also about as lightweight as you can get.
I haven’t for that particular device, but in my time using it I couldn’t get it to function right on another netbook I had. Probably user error on account of me being too used to systemd, but I’m not in a rush to try it again either.
Good to know, thanks. I’m on my first non-test Arch install at the moment and so far I’ve been surprised by the actual lack of anything being worse than on other distros. Everything worked out of the box.
This seems to be playing a little loose with the facts. At some point Firefox changed their versioning system to match Chrome, I assume so that it wouldn’t sound like Firefox was older or behind Chrome in development. Firefox did not literally travel from 1.0 to 100. So it probably either has fewer or more than 100 versions, depending on how you count. UPDATE: OK I was wrong, and that was sloppy of me, I should have actually checked instead of relying on my flawed memory. There are in fact at least 100 versions of Firefox. Seems like there are probably more than 100, but it’s not misleading to say that there are 100 versions if there are more than 100.
That said, this looks like a great release with useful features. Caption for picture-in-picture video seems helpful, and I’m intrigued by “Users can now choose preferred color schemes for websites.” On Android, they finally have HTTPS-only mode, so I can ditch the HTTPS Everywhere extension.
Oh, but they did. In the early days they used a more “traditional” way of using the second number, so we had 1.5, and 3.5, and 3.6. After 5.0 (if I’m reading Wikipedia correctly) they switched to increasing the major version for every release regardless of its perceived significance. So there were in fact more than 100 Firefox releases.
I kinda dislike this “bump major version” every release scheme, since it robs me of the ability to visually determine what may have really changed. For example, v2.5 to v2.6 is a “safe” upgrade, while v2.5 to v3.0 potentially has breaking changes. Now moving from v99 to v100 to v101, well, gotta carefully read release notes every single time.
Oracle did something similar with JDK. We were on JDK 6 for several years, then 7 and then 8, until they ingested steroids and now we are on JDK 18! :-) :-)
Changing the user interface (e.g. keyboard shortcuts) in backwards-incompatible ways, for one.
And while it’s true that “Firefox is an application”, it’s also effectively a library with an API that’s used by numerous extensions, which has also been broken by new releases sometimes.
My take is that it is the APIs that should be versioned because applications may expose multiple APIs that change at different rates and the version numbers are typically of interest to the API consumers, but not to human users.
I don’t think UI changes should be versioned. Just seems like a way to generate arguments.
It doesn’t apply to consumer software like Firefox, really. It’s not a library for which you care if it’s compatible. I don’t think version numbers even matter for consumer software these days.
Those are all backported to the ESR release, right? I’ve just noticed that my distro packages that; perhaps I should switch to it as a way to get the security fixes without the constant stream of CADT UI “improvements”…
Personal: framework laptop kitted with wifi, 1 tb nvme, zfs on root, and 32 gigs of ram on one stick (for easy upgrading later to 64). I still like it, would still pick it again.
Work: 2018 mbp 16 inches, 16 gigs of ram. Given a chance I’d upgrade to an M1. I thrash out of ram without really trying.
As far as I know, the issue is with Tiger Lake being broken, not capable of entering certain sleep states. It should manifest itself on Windows too. I’ve stopped using sleep, instead I turn off my laptop every time lid is closed. Fortunately, startup time is really small.
I’m using Fedora on my Framework, as that’s what Framework was recommending as having the best hardware support when I got it. (They now also bill Ubuntu as “Essentially fully functional out of the box.”)
There are probably more advanced things I could do to improve battery life, but with that straightforward fix, it doesn’t lose more than 10-20% of battery charge if left sleeping unplugged overnight. That’s good enough to be usable for my purposes.
This seems correct. I tell it to go into deep sleep, but the battery drain when suspended is still too high. Three days unplugged at most.
But I use my suspend to ram ability, and the battery drain there is zero. I get about 6 hours active usage if I squeeze on my entirely untuned Void Linux install. I’m comfortable using about 30 to 50 percent of the battery on my most common flight routes (2.5, 3.5 hours flight time)
Layman’s question: why is systemd not using semantic versioning? Hard to understand if any breaking changes will be coming to distros upgrading to systemd 250. I am assuming it should correspond to something like 1.250.0 if full compatibility is preserved?
The first Semver commit (ca64580) was 14 Dec 2009, with its first release (v0.1.0) the next day. The first Systemd commit (f0083e3) was 27 Apr 2005, with its first release (0.1) the same day.
I think you’re right that the first stable release of Systemd came after the Semver spec and that various forms of that practice were already around before it. In my (somewhat unreliable) memory it took years for Semver to reach the popularity it has now where it’s often expected of many projects.
I think that semantic versioning is a lie, albeit a well-intentioned one. All it tells you is what the author thinks is true about their consumers’ expectations of their code, not what is actually true, so it can mislead. Having an incrementing release version says the same thing: “Something has changed” and gives the same practical guarantees: “We hope nothing you did depended on what we changed”.
True in this case, but there are ecosystems that will help authors enforce semantic versioning, e.g. Elm where the compiler makes you increase the major version if it can know there are API changes, i.e. when the type of an exported function changed.
I still use gvim for pasting things in scratch files quite a bit. Is there a recommended GUI for neovim these days? I tried one a few months ago, but it was barely usable
I don’t know about recommended, but I’m enjoying Neovide quite a bit, for being basically console nvim with small improvements that don’t try to make it into something else.
I started blogging in 1999 (http://boston.conman.org/archive/), and back then, there wasn’t much in the way of blogging software. At first, it was just a series of hand written HTML files as I started on the software [1]. As time went on, I imported the entries I had into the blogging engine I wrote and continued on [2].
The original idea for my blog was to keep friends up to date while I was away in Boston for a short contracting job (I never did get that job), but over the years it’s become more for me than for other people; if others read it, that’s fine. Because of that, there’s no single subject my blog is about—it’s more of an online journal (which was popular in the mid-to-late 90s), but for me, that’s okay.
[2] It took me nearly two years of writing the software before I said “Enough! This won’t ever be perfect” and released it. There were features I was stressing over that, in the long term, turned out not to matter at all.
I think it is wonderful that you have all your old posts still published and available. I’ve lost everything I had before 2008, but I’ve made a point to keep everything available that I can.
Depending on the corruption, it may all be lost. The archive is validated but has no error-correction metadata. I pondered raid-like wrapper bottles, but haven’t done anything about them yet.
The de-duplicated database sounds similar to solid compression, which would lose the whole archive on a small damage, but streaming aspect makes me wonder if it’s organized in a way that enables resiliency.
Go’s secret sauce is that they never† break BC. There’s nothing else where you can just throw it into production like that because you don’t need to check for deprecations and warnings first.
† That said, 1.17 actually did break BC for security reasons. If you were interpreting URL query parameters so that ?a=1&b=2 and ?a=1;b=2 are the same, that’s broken now because they removed support for semicolons for security reasons. Seems like the right call, but definitely one of the few times where you could get bitten by Go.
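A quick way to see that 1.17 change from code; a small sketch (the exact error text may differ between releases):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // Both forms used to parse the same way; since Go 1.17 net/url
        // rejects the semicolon variant.
        if v, err := url.ParseQuery("a=1&b=2"); err == nil {
            fmt.Println(v.Get("a"), v.Get("b")) // 1 2
        }
        if _, err := url.ParseQuery("a=1;b=2"); err != nil {
            fmt.Println("rejected:", err)
        }
    }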
Another issue is that the language and standard library have a compatibility guarantee, but the build tool does not, so e.g. if you didn’t move to modules, that can bite you. Still, compared to Python and Node, it’s a breath of fresh air.
I’ve been upgrading since 1.8 or so. There have been (rarely) upgrades that broke my code, but it was always for a good reason and easy to fix. None in recent memory.
I had a different experience, going from Java 8 to Java 11 broke countless libraries for me. Especially bad is that they often break at run- and not at compile time.
As someone with just a little experience with Go, what’s the situation with dependencies? In Java and maven, it becomes a nightmare with exclusions when one wants to upgrade a dependency, as transitive dependencies might then clash.
It’s a bit complicated, but the TL;DR is that Go 1.11 (this is 1.17, recall) introduced “modules” which is the blessed package management system. It’s based on URLs (although weirdly, it’s github.com, not com.github, hmm…) that tell the system where to download external modules. The modules are versioned by git tags (or equivalent for non-git SCMs). Your package can list the minimum versions of external packages it wants and also hardcode replacement versions if you need to fork something. The expectation is that if you need to break BC as a library author, you will publish your package with a new URL, typically by adding v2 or whatever to the end of your existing URL. Package users can import both github.com/user/pkg/v1 and github.com/user/pkg/v2 into the same program and it will run both, but if you want e.g. both v1 and v1.5 in the same application, you’re SOL. It’s extremely opinionated in that regard, but I haven’t run into any problems with it.
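Concretely, the “new URL for a breaking change” rule looks like this from an importing program’s point of view (module path and function are hypothetical; note that major version 1 carries no suffix in the import path):

    package main

    import (
        "fmt"

        oldpkg "github.com/user/pkg"    // hypothetical module, major version 0/1 (no suffix)
        newpkg "github.com/user/pkg/v2" // same module after a breaking change, new import path
    )

    func main() {
        // Both major versions can coexist in one build; within each major
        // version the module system selects a single minor/patch version.
        fmt.Println(oldpkg.Something(), newpkg.Something()) // Something() is a placeholder
    }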
Part of the backstory is that before Go modules, you were just expected to never break BC as a library author because there was no way to signal it downstream. When they switched to modules, Russ Cox basically tried to preserve that property by requiring URL changes for new versions.
The module name and package ImportPath are not required to be URLs. Them being a URL is overloading done by go get. Nothing in the language spec requires them to be URLs.
I also have only a little experience with Go. I have not yet run into frustrations with dependencies via Go modules.
Russ Cox wrote a number of great articles about how Go’s dependency management solves problems with transitive dependencies. I recall this one being very good (https://research.swtch.com/vgo-import). It also calls out a constraint that programmers must follow:
In Go, if an old package and a new package have the same import path,
the new package must be backwards compatible with the old package.
Is this constraint realistic and followed by library authors? If not, you’re going to run into problems with Go modules.
I’ve run into dependency hell in: Java, JavaScript, Python, and PHP – In every programming language I’ve had to do major development in. It’s a hard problem to solve!
I strongly agree. The first time major stuff broke was Java 9, which is exceedingly recent, and wasn’t an LTS. And that movement has more in common with the Go 2 work than anything else, especially as Java 8 continues to be fully supported.
Oh God, not another. :)
And I see no reason why the pkg folder exists. Making the default way to start a package the “enterprisey” way is a bad idea in my opinion.
I’ve never cared for the pkg folder either. It seems redundant at best, or confusing at worst. I just use the root as my lib and types package.
To each their own, but I don’t get it 🤷♀️. I also don’t get how people end up with JS blobs bigger than a geocities rainbow divider gif, so maybe I’m just a loony.
You must be using a different web than me.
It really has nothing to do with the client… which is part of why, when done well, it works on such a wide variety of clients.
To be fair, “the client” is a web page 9 out of 10 times so why abstract it away.
here you go:
https://hypermedia.systems
TLDR: hypermedia is a medium, say, a text, with hypermedia controls in it. A lot more detail to be found in the book, or on the essays page:
https://htmx.org/essays
I disagree. The web is a place for art and graphic design as much as anything else. Websites are allowed to be pretty.
Anecdotal at best. Misleading at worst.
You are not all users. I, for one, do not enjoy using websites that don’t look nice.
Again, anecdotal (though it does seem plausible).
I find myself agreeing with all the other points brought up in the article (system fonts are usually good enough, consistency across platforms isn’t essential, performance). I don’t have any extra fonts on my website (except where KaTeX is needed) and I think it’s fine in most cases (I’ve seen the default font render badly on some devices, and it was a little sad).
I still disagree about “websites are tools and nothing else”. I don’t want my website to be a tool. I want it to be art. I’ve poured time and effort and money and my soul into what I’ve made. I do consider it art. I consider it a statement. And if I have to make a 35kb request to add in a specific typeface, then I’ll do it to make it reach my vision.
That was obviously not the real question though: the point is, do web fonts help users in any way, compared to widely available system fonts? My guess is that the difference is small enough to be hard to even detect.
As a user, they make me happy and I tend to be more engaged with the content (when used effectively), so yes I find them helpful. I don’t want to live in a world without variety or freedom of expression. As long as there are ways to turn them off, surely everyone can be happy.
We live in a world full of colour. I don’t like this idea of the hypothetical ‘user’ who ‘just wants to get things done’ and has no appreciation for the small pleasures in life. I don’t have anything against anyone who feels that way of course (it’s completely valid). Just this generalisation of ‘the user’.
It really depends on the metrics measured.
Does the font help the user fill out the form and submit it? No, not really.
Does the font help engender a brand feeling of trust across platforms and mediums? Probably yes.
It’s impossible not to detect my own instinctive, positive reaction to a nice web design, and typography is a big part of that. I am quite certain I’m not alone in that feeling. That enjoyment is “helpful” enough for me to feel strongly that web fonts are here to stay, and that’s a good thing. There’s also plenty of UX data about what typography communicates to users, even if those findings aren’t normally presented in terms of “helping.”
A poorly chosen font can be hard to read in a certain context, but that’s a far cry from “all custom web fonts are bad for usability” and I haven’t seen any evidence to back up that claim. So given there are obvious positives I think the question is really what harm they actually do.
I’d wager typography is not limited to fonts.
Now obviously there’s a difference between a good web font and a crappy system font. But are all system fonts crappy? I haven’t checked, but don’t we already have a wide enough selection of good fonts widely installed on users’ systems? Isn’t the difference between those good fonts and (presumably) even better web fonts less noticeable? Surely we’re past the age of Arial and Times New Roman by now?
See this related submission: https://lobste.rs/s/tdiloe/modern_font_stacks
It’s basically grouping “typeface styles” across different systems’ installed fonts.
This is big, thank you.
I mean, I guess it won’t be as good as the best web font someone would choose for a given application, but if anything is “close enough”, that could be it.
Obviously fonts are a subset of typography (didn’t mean to imply they are the same), but they are absolutely central to it. And I didn’t say that system fonts are all crappy. My argument doesn’t rely on that premise, and anyway, I like system fonts! I think that designing within constraints can foster creativity! I just don’t think we should impose those constraints on everyone. At least not without a lot more evidence of actual harm than I’ve seen presented.
And although we are definitely past the New Roman Times ;) I don’t think that means that a striking use of a good font is any less effective on the web.
I was expecting to read about how the cheap hardware with open source firmware can be set up with all the features of expensive mesh networks, but that’s not what it was.
I don’t like expensive mesh networks because (a) expensive (b) tend to require special proprietary control systems (c) which often want logins and other privacy-violators. So I buy $40 wifi routers that are known to work well with DD-WRT/OpenWRT and set them up as follows:
all wifi radios set the same SSID
turn off 2.4GHz on the AP nearest the kitchen (microwave fun)
channels are set by hand for minimum overlap
NAT, firewalling, DHCP and DNS are turned off
Cat5e runs to the nearest switch port (three switches: office, den, living room, all interconnected)
Five of these cover the house and the back yard nicely. No meshing. No outside-the-house dependencies except power.
Interesting. I’m curious: do you know if OpenWRT supports any handover protocol as you move from one access point to the next?
Recent versions support 802.11 r, k and v, but not on all radios. Support is necessary on both ends. If you aren’t active while moving from one ‘best’ station area to another, none of them are needed.
Which routers are you using? What do you recommend?
The TP-Link Archer C7 with OpenWRT is great. If you have a home server, running a VM with OpenWRT plus dumb access points from MikroTik is fun and can easily cover multiple rooms, an area, or a whole house. AVM’s FritzBox models come with a DSL/Cable/LTE modem or fiber included and are quick and stable, but expensive.
That sounds pretty great. Do you have a wiki/post breaking all of that down? Or at least have solid suggestions for cheap routers? Sounds very interesting.
Most of my routers are TP-Link Archer C7, which are routinely on sale in the US for $45 each. If I see a sale on some new plausible router, my criteria are:
It’s reasonable to get everything set up well on machines that don’t have open source firmware, even if they don’t support an AP mode, by carefully turning off all the things I wrote about before and avoiding the ‘WAN’ port.
I don’t trust any of these things as firewalls for outside connections; I use them strictly as access points.
[Speaking with no hat on, and note I am not at Google anymore]
Any origin can ask to be excluded from automatic refreshes. SourceHut is aware of this, but has not elected to do so. That would stop all automated traffic from the mirror, and SourceHut could send a simple HTTP GET to proxy.golang.org for new tags if it wished not to wait for users to request new versions. That would have caused no disruption.

This is definitely a manual mechanism, but I understand why the team has not built an automated process for something that was requested a total of two times, AFAIK. Even if this process was automated, relying on the general robots.txt feels inappropriate to me, given this is not web traffic, so it would still require an explicit change to signal a preference to the Go Modules Mirror, taking about as much effort from origin operators as opening an issue or sending an email.
Everyone can make their own assessment of what is a reasonable default and what counts as a DoS (and they are welcome to opt-out of any traffic), but note that 4GB per day is 0.3704 Mbps.
I don’t have access to the internal mirror architecture and I don’t remember it well, nor would I comment on it if I did, but I’ll mention that claims like a single repository being fetched “over 100 times per hour” sound unlikely and incompatible with other public claims on the public issue tracker, unless those repositories host multiple modules. Likewise, it can be easily experimentally verified that fetchers don’t in fact operate without coordination.
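For what it’s worth, that “simple HTTP GET” maps onto the documented GOPROXY protocol. A hedged sketch of what an origin could send after pushing a new tag (the module path and tag below are placeholders):

```go
// Hedged sketch of the "simple HTTP GET" idea: after pushing a new tag, an
// origin could ask the mirror for that version via the documented GOPROXY
// protocol, so the mirror fetches it without waiting for user demand.
// The module path and tag below are placeholders.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Note: uppercase letters in module paths are case-encoded in proxy URLs
	// ("M" becomes "!m").
	const url = "https://proxy.golang.org/git.example.org/user/somemodule/@v/v1.2.3.info"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// On success this is a small JSON document with Version and Time fields.
	fmt.Println(resp.Status, string(body))
}
```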
Sounds like it’s 4 GB per day per module, and presumably there are a lot of modules.
The more I think about it, the more outrageous it seems. Google’s a giant company with piles of cash, and they’re cutting corners and pushing work (and costs) off to unrelated small and tiny projects?
They really expect people with no knowledge of Go whatsoever (Git hosting providers) will magically know to visit an obscure GitHub issue and request to be excluded from this potential DoS?
Why is the process so secretive and obscure? Why not make the proxy opt-in for both users and origins? As a user, I don’t want my requests (no, not even my Go module selection) going to an adware/spyware company.
It’s a question of reliability. Go apps import URLs. Imagine a large Go app that depends on 32 websites. Even if those websites are 99.9% reliable (and very few are), then (1-.999**32)*100 means there’s a 3.15% chance your build will fail. I think companies like creating these kinds of problems, since the only solution ends up yielding a treasure trove of business intelligence. The CIA loves funding things like package managers. However, it becomes harder to come across looking like the good guys when you get lazy writing the backend and shaft service operators, who not only have to pay enormous egress bandwidth fees, but are also denied any visibility into who and how many people their resources are actually supporting.

I do hope they do the sane thing and only try to download packages when you mash the update button, instead of every time you do yet another debug build? Having updates fail from time to time is annoying for sure, but it’s not a “sorry boss, can’t test anything today, build is failing because left pad is down” kind of hard blocker.
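A quick sanity check of that arithmetic, as a throwaway sketch:

```go
// Quick check of the arithmetic above: the chance that at least one of 32
// independent hosts, each 99.9% reliable, is down during a given build.
package main

import (
	"fmt"
	"math"
)

func main() {
	pFail := 1 - math.Pow(0.999, 32)
	fmt.Printf("%.2f%%\n", pFail*100) // prints 3.15%
}
```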
Go has a local cache and only polls for lib changes when explicitly told to do so.
Thanks. I was worried there for a minute.
If you have CI that builds on every commit and you don’t take extra steps to set up cache for it, you will download the packages on every commit
Ah… well I remember having our CI at work failing semi-infrequently because of a random network problem. Often restarting it was enough. But damn was this annoying. All external dependencies should be cached and locked in some way, so the CI provides a stable, reliable environment.
For instance, CI shouldn’t have to build a brand new Docker image or equivalent each time it does its thing. It should instead depend on a standard image with the dependencies we need and everyone uses. Only when we update those external dependencies should the image be refreshed.
I have a lot of sympathy with Google here. I am using vcpkg for some personal projects and hit a problem last year where the canonical source of the libxml2 sources (which I needed as an indirect dependency of something else) was down. Unlike go and the FreeBSD ports infrastructure, vcpkg does not maintain a cache of the distribution files and so it was impossible to build my local project until I found a random mirror of the libxml2 tarball that had the right SHA hash and manually downloaded it.
That said, 4 GiB/day/repo sounds excessive. I’d expect that the update should need to sync only when it sees a new revision and even if it’s doing a full git clone rather than an update, that’s a surprising amount of traffic.
Not considering an automated mirror talking HTTP to be “web traffic”, and thus not something that needs to respect robots.txt, is definitely a take. And suggesting that “any origin” write a custom integration to work around Go’s abuse of the git protocol? Cool, put the work on others.
And according to the blog post, the proxy didn’t provide a user agent until prompted by sr.ht. That kind of poor behaviour makes it hard to open issues or send emails.
Moreover, I don’t think the blog post claimed 4GB/day is a DoS. It said a single module could produce that much traffic. It said the total traffic was 70% of their load.
No empathy for organisations that aren’t operating at Google scale?
No, I am saying that looking at the Crawl-delay clause of a robots.txt, which is probably 1 or 10 (e.g. https://lobste.rs/robots.txt), is malicious compliance at best, since one git clone per second is probably not what the origin meant. Please don’t flame based on the least charitable interpretation, there’s already a lot of that around this issue.

For what it’s worth, 1 clone per second would still probably be less than what Google is currently sending them. Their metrics are open, and as you can see over the last day they have served about 2.2 clones per second, and if we assume that 70% of clones is from Google, it comes out to roughly 1.6 clones per second.
I think it’s pretty obvious to any bystander that SourceHut has requested a stop to the automatic refreshes. The phrase “patrician and sadistic” comes to mind when I think about this situation.
They explicitly have stated in other locations that they have not requested the opt out for automatic refreshes for various reasons.
Sure, filling out Google’s form legitimizes Google’s actions up to that point. Nonetheless, there was a clear request to stop the excess traffic, and we should not ignore that request simply because it did not fit Google’s bureaucracy.
I was specifically responding to the claim that there was a clear request to stop.

No, they did not make such a request. They have explicitly rejected the option to have the refreshes stopped entirely, for various reasons, perhaps even the ones you hypothesized.
I appreciate your position, but I think it’s something of a beware-of-the-leopard situation; it’s quite easy to stand atop a bureaucracy and disclaim responsibility for its actions. We shouldn’t stoop to victim-blaming, even when the victims are not personable.
I haven’t taken a position. I’m stating that your statement was factually incorrect. You said that it is “pretty obvious” that they requested something when the exact opposite is true, and I wanted to correct the record.
You are arguing that they did not fill out the Google provided form. The person you’re arguing didn’t say they did, they said they requested Google stops doing the thing.
They did not request that Google stops doing the thing. There is no form to fill out. Literally stating “please stop the automatic refreshes” would be enough. They explicitly want Google to continue doing the thing but at a reasonable rate.
Which in my opinion is the only reasonable course of action. Right now Google is imposing a lazy, harmful, and monopolistic dilemma: either suck up the unnecessary traffic and pay for this wasted bandwidth & server power (the default), or seriously hurt your ability to provide Go packages. That’s a false dichotomy, Google can do better. They just won’t.
Don’t get me wrong, I’m not blaming any single person in the Go team here, I have no idea what environment they are living in, and what orders they might be receiving. The result nevertheless makes the world a worse place.
It’s also a classic: big internet companies give us the same crap about email and spam filtering, where their technical choices just so happen to seriously hamper the effectiveness of small or personal email providers. They have lots of plausible reasons for these choices, but the result is the same: if you want your email to get through, you often need their services. How convenient.
You may disagree with the prioritization, but they have made progress and will continue to do so. Saying “they just won’t” is hyperbolic and false.
You have stated that you don’t know what “environment they are living in, and what orders they might be receiving”. What if instead of working on this issue, they worked on something else that, if left unhandled, would have made the world an even worse place? This statement is indicative of your characteristic bad faith when discussing anything about Go.
I don’t think everything that the Go developers have done is correct, or that every language decision Go has made is correct, but it’s important to root those judgements in facts and reality instead of uncharitable interpretations and fiction.
Because you seem inclined to argue in bad faith about Go both here and in past discussions we’ve had [1], I think any further discussion on this is going to fall on deaf ears, so I won’t be engaging any further on this topic with you.
–
[1] here you realize you don’t have very good knowledge of how Go works (commendable!) and later here you continue to claim knowledge of mistakes they made without having full knowledge of what they even did.
My, the mere fact that you remember my only significant Go thread on Lobsters is surprisingly informative. But… aren’t you going a little off topic here? This is a software distribution and networking issue, nothing to do with the language itself.
That’s a nice fully general counterargument you have there: no matter what fix or feature I request, you can always say maybe something else should take precedence. Bonus points for quoting me out of context.
Now in that quote, “they” is referring to Google’s employees, not Google itself. I’ve seen enough good people working in toxic environments to make the difference between a faceless company and the actual people working there. This was just me trying to criticise Google itself, not any specific person or team.
As for your actual argument, Google isn’t exactly resourceless. They can spend money and hire people, so opportunity costs are hardly a thing for them. Moreover, had they cared about bandwidth from the start, they could have planned a decent architecture up front and spent even less time on this.
But all this is weaselling around the core issue: Google is wasting other people’s bandwidth, and they ought to have stopped two years ago. When you’re not a superhuman AI with all self-serving biases programmed out of you, you don’t get to play the “greater good” card without a damn good specific reason. We humans need ethics.
If you were calling someone several times a day and they said “Hey. Don’t call me several times a day. Call me less often. Maybe weekly,” but you persisted in calling them multiple times a day, it would not be a reasonable defense to say “They don’t want me to never call them, they only want me to not call them as much as I am calling them, which I will continue to do.”
But like, also you should know better than to bother people like that. They shouldn’t need to ask. It is not reasonable to assume a lack of confrontation is acquiescence to poor behavior. Quit bothering people.
In your hypothetical the caller then said “Sorry for calling you so often. Would you like me to stop calling you entirely while I figure out how to call you less?” and the response was “No, I want you to continue to call me while you figure out how to call me less.”
That is materially different than a request to stop all calls.
No one is arguing that the request to be called less is unreasonable. I am pointing out that the option to have no calls at all was provided and that they explicitly rejected for whatever reasons they decided. This is not a value judgement on either party, but a clarification of facts.
Don’t ignore the fact that those calls (to continue the analogy) actually contained important information. The way I understand it, being left out of the refresh service significantly hurts the ability of the provider to serve Go packages. It’s really a choice between several calls a day and a job opportunity once a fortnight or so; or no call at all and missing out.
Tough choice.
Yes, instead they have requested that the automatic refreshes be made better.
Which is a very reasonable request as right now they’re just bad.
I appreciate where you’re coming from with this. Having VERY recently separated from a MegaCorp this is exactly the logic a bizdev person deciding what work gets funded and what doesn’t would use in this situation.
But again I ask - is this level of dependence on a corporate owner healthy for a programming language with such massively wide adoption?
It would be interesting to do a detailed compare and contrast between the two.
Java used to have such a dependence. It wasn’t good indeed.
I can’t think of any names right now but I’m sure I’ve heard of cases in the past where some project officially declaring itself halted has been sufficient impetus for a fork to emerge with new maintainers, even when a call for maintainers hadn’t succeeded before.
I think there are some libraries that are better than some of the gorilla components, but there are some gorilla libraries that I haven’t seen many other options for. A lot of things use gorilla sessions. I found another session library I like the design of, but the docs are really unhelpful/nonexistent.
That is to say, I can see some parts of gorilla getting forked.
I wouldn’t say gorilla sessions has good docs. I found this alternative: https://github.com/alexedwards/scs is pretty good. It’s made by the author of the Let’s Go and Let’s Go Further books, which are solid books on building web services in Go.

Axios I think went through this, and src-d/go-git.
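For anyone evaluating the scs library linked a couple of comments up, its basic shape is roughly the following; this is from memory of its README, so treat the details (routes, values) as approximate:

```go
// Approximate sketch of scs usage, from memory of its README; check the real
// docs before copying. The routes and values are placeholders.
package main

import (
	"io"
	"net/http"
	"time"

	"github.com/alexedwards/scs/v2"
)

var sessionManager *scs.SessionManager

func main() {
	// In-memory session store by default; other stores are pluggable.
	sessionManager = scs.New()
	sessionManager.Lifetime = 24 * time.Hour

	mux := http.NewServeMux()
	mux.HandleFunc("/put", func(w http.ResponseWriter, r *http.Request) {
		sessionManager.Put(r.Context(), "message", "hello from a session")
	})
	mux.HandleFunc("/get", func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, sessionManager.GetString(r.Context(), "message"))
	})

	// LoadAndSave is the middleware that loads and commits session data
	// around every request.
	_ = http.ListenAndServe(":4000", sessionManager.LoadAndSave(mux))
}
```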
Thanks
I’ve also seen a situation where declaring a project halted has caused other people to step up as maintainers, avoiding the need for a fork where the original developers are willing to bless the new owners.
I’m using this in one of my projects and it works well so I guess there’s no urgency but I should look for alternatives. Does anyone have any recommendations?
My go-to’s for the past few projects have been Chi and, more recently, Echo.
Mostly Echo because it was the default for oapi-codegen whereas Chi is what I use when I’m not generating code. Both have worked well and seem actively maintained. Echo’s backing company, Labstack, does seem to have a dead homepage so… be wary.
I use go-chi also, and I really enjoy it. I think it’s been able to learn a lot of lessons from earlier routers and middleware providers.
I’m using chi as well. It provides just enough for API servers and is sufficiently unopinionated.
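For anyone who hasn’t tried chi, a minimal setup looks roughly like this (routes and port are arbitrary):

```go
// Minimal chi sketch; routes and port are arbitrary.
package main

import (
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
)

func main() {
	r := chi.NewRouter()
	r.Use(middleware.Logger)    // stock middleware...
	r.Use(middleware.Recoverer) // ...or write your own with the same signature

	// Handlers are plain net/http handlers, so the rest of the ecosystem plugs in.
	r.Get("/articles/{id}", func(w http.ResponseWriter, r *http.Request) {
		id := chi.URLParam(r, "id")
		w.Write([]byte("article " + id))
	})

	_ = http.ListenAndServe(":3000", r)
}
```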
I never got into vim or emacs and I encourage anyone who feels that I am already wrong to keep reading.
Why do I prefer bloated IDEs like anything Jetbrains makes? Because they just work.
If I get a junior dev on my project, I can give them PyCharm and they’ll have syntax highlighting, code completion, and type hinting with zero configuration. They won’t have to google how to save a file or exit the editor. They can set a breakpoint, right-click the file and debug it with zero configuration. Wanna know what arguments go to that function? Press ctrl+p. Wanna go to the function definition even if that function comes from a library? Press ctrl+b. And it doesn’t matter if we’re working on Python, C, Rust, Go, or anything else — the buttons and features are all there every time.
I just don’t see the benefit of the other side. Sure I could spend weeks configuring dot files and installing supplemental language servers and documenting all of it, but I can’t give that to a junior. I now also have to maintain this system of dependencies just to write code.
Working on new systems is also difficult. What about offline systems? I can transfer a single CLion binary and it has everything bundled and it works.
I’m missing something clearly, but I just don’t get the other side.
I would never deny anyone a tool that they find useful and productive. And for myself I do not find all that configuring and setup productive in and of itself, luckily I don’t have to do it all that often. What the configuring does offer me is the power to precisely configure my environment. To create tools that help me be productive.
Recently I used Helix Editor for a few coding sessions, and the subtle differences from Neovim killed my productivity. It made me realize that it is all the little things that I use that help me code, and edit, faster. I was able to configure some shortcuts to act like vim, but not enough to make it useful.
The draw for me, to Helix, is that everything is built-in, and I don’t have to spend time knowing what lspconfig is and why I need it if LSP is built into Neovim. Which is the same draw to IDEs, however IDEs are take it or leave it and don’t allow for precise configuration that vim allows. At least to the best of my knowledge.
They are amazingly good out of the box 99% of the time… but when things go bad, they go really bad. I got to fight a lot with PyCharm vs. 1+ million lines of Python 2.7. The codebase won. PyCharm could never really work properly in that very odd, customized mess of an environment.
I have worked with hundreds of juniors, and you know what I don’t do? Try to shove my personal environment onto them. If they don’t have one already, I generally recommend Jetbrains or VSCode.
Experts and junior developers often use very different tools and workflows. A junior barely knows how they want to work yet, let alone would they crave the power of having a full lisp eval bound to M-:.
As you pointed out, learning Jetbrains IDEs is simple and well-documented, which means I can still show others how to use it while using a more adept setup personally.
Flexibility, customization, alternative workflows, advanced automation and integration, and in the case of emacs best in industry accessibility support – putting to shame the Jetbrains tools (which I am a $25 a month subscriber to). I know exactly how I want to work…
It doesn’t take weeks, it shouldn’t take days unless you fall off the rails. It is a slow evolution, you don’t build a new workflow all at once, it comes in drips and drabs, you find something that bothers you, you fix it. Rinse and repeat for a few years and you got something exceedingly well built for you.
Again, that is a very strange thing to optimize for – as you said, you can give a junior a link to jetbrains and it works out of the box, why constrain yourself to the worst developer on your team? That isn’t meant as an insult to them just – they are a brand new developer. Then you go on to speak of basic use of the tools and how they won’t have to google for this and that… again, I don’t think anyone is in favor of force-feeding junior developers emacs or vim. Most of the time, I am convincing them NOT to try to copy my environment, as it is mine and suited to me.
Most users of vim or emacs have fairly well defined solutions for setting up their environments, emacs and vim dwarf Jetbrains IDEs in portability. Look at the list of ancient/odd systems they are actively running on…
I have a somewhat kludgy solution compared to lots of others, and I am set up and running in under 6 minutes on most systems, even if I only have SSH access to them.
Keep in mind, most of the vim / emacs users I know are also able to work perfectly fine in Jetbrains, as you repeatedly pointed out – it is easy, consistent, and ready to go. But they choose not to, you on the other hand have never even tried the other side yet are willing to fairly aggressively discount its utility. Who knows, come take a vacation in alternative editors land (Kakoune, Emacs, Neovim), you might like it so much you stay.
The two most useful and time-saving features I use with FairMail were in K-9 when I left, and were not planned either. I don’t know if they are there now.
I absolutely love the ability to Trash or Archive a message from the notification pull down. Also, Trash or Archive in the message list with a single swipe.
These two features have greatly improved my experience with email, and increased the speed with which I act on my email.
The “swipe to archive or delete” UX is fantastic! I don’t recall where I first encountered it, probably Gmail for Android a long time ago, but it was a game changer for me as well!
s/were/were not/
Honestly, for me the big thing about Arch isn’t a lack of “stability”, it’s more the number of sharp edges to cut yourself on.
For example, the author mentioned that the longest they’ve gone without a system update is 9 months. Now, the standard way to update an Arch system is pacman -Syu, but this won’t work if you haven’t updated in 9 months – the package signing keys (?) would have changed and the servers would have stopped keeping old packages, so what you instead want to do is pacman -Sy archlinux-keyring && pacman -Su.

There’s a page on the ArchWiki telling you about this, but you wouldn’t find it until after you run a system update and it fails with some cryptic errors about gpg. It also doesn’t help that pacman -Sy <packagename> is said to be an unsupported operation on the wiki otherwise, so you wouldn’t think to do it yourself, and might even hesitate if someone on a forum tells you to do it. Any other package manager would just… take care of this.

It’s little things like this that make me not want to use Arch, and what I think gives it a reputation for instability - it seems to break all the time, but that’s not actually instability, that’s just The Arch Way, as you can clearly read halfway down this random wiki page.
As it happens, they recently added a systemd timer to update the keyring.
ahh, that’s a good start. Still doesn’t help my system that’s been sitting shut down in a basement for four months, but at least there’s something.
If you’re worried about sharp edges like that, then yeah, you probably don’t want to deal with Arch. To someone who uses the official install guide, though, it should be pretty clear that they exist. You hand-prepare the system, and then install every package you want from there. It’s quite a bit different from a distro that provides a simple out-of-the-box install. (I’m ignoring the projects that try to simplify the install for Arch here.)
It’s also a rolling release. Sure, you could go for 9 months without updating, but with a rolling release that’s a long time. You’ll likely be upgrading most of your installed software in a single jump by doing so. That’s not bad, per se, but it would be mildly concerning to me. There’s no clean rollback mechanism in Arch, so this calls for an extra level of caution.
I think the perception of a lack of stability does come from the Arch way, but from my experience it’s usually down to changes in the upstream software being packaged and clearly nothing that the distro is adding. It seems obvious to me that if you’re pulling in newer versions of software constantly you will have less stability by design. There’s real benefit in distros that take snapshots when it comes to predictability and thus stability.
I use Arch on exactly one system, my laptop/workstation, and I’m quite happy with it there. I get easy access to updated packages, and through the AUR a wide variety of software not officially packaged. It’s perfect for what I want on this machine and lets me customize my desktop environment exactly how I want. Doing the same with Debian and Ubuntu was much more effort on my part.
I wouldn’t use Arch on a server, mostly because I don’t want all the package churn.
That’s actually why I stopped using Arch: I’ve got some devices that I won’t use for 6-12 months, but then I’ll start to use them daily again. And it turns out, if you do that, pacman breaks your whole system, and you’ll have to hunt down package archives and manually untangle that mess.
I wish there was a happy medium between Arch and Debian. Something relatively up to date but also with an eye for stability, and also minimalist when it comes to the default install.
Void?
I think that’s a bit of an exaggeration, when compared to other Linux distros where upgrades are always that scary thing.
Also the keyring stuff… I’m not sure when that was introduced. So it might have been before that?
I’ve done pretty long jumps on an Arch Linux system on a netbook for my mother, who isn’t really good with computers and technology in general. Just a few buttons on the side in Xfce worked really well, until the web became too demanding for first-gen Intel Atoms. I updated it quite a bit later while looking for some files or something, and I don’t remember that being a big issue. But I do remember being surprised that it wasn’t.
I actually had many more problems, like a huge amount of them with apt for example.
Worse, of course, are package managers trying to be smart. If there’s one thing that I would never want to be smart, it’s a package manager. I haven’t seen an instance yet where that didn’t backfire.
Upgrades have been relatively fear-free for me on both Ubuntu and Fedora, though that may be a recent thing, and due to the fact that my systems don’t stray too far from a “stock” install.
One thing I will give Arch props for is that it’s super easy to build a lightweight system for low-end devices, as you mentioned. Currently my only Arch device is an Intel Core m5 laptop, because everything else chugs on it.
Have you tried Alpine? It’s really shaping up to be a decent general purpose distro, but provides snapshot stability. It’s also about as lightweight as you can get.
I haven’t for that particular device, but in my time using it I couldn’t get it to function right on another netbook I had. Probably user error on account of me being too used to systemd, but I’m not in a rush to try it again either.
Good to know, thanks. I’m on my first non-test Arch install at the moment and so far I’ve been surprised by the actual lack of anything being worse than on other distros. Everything worked out of the box.
This seems to be playing a little loose with the facts. At some point Firefox changed their versioning system to match Chrome, I assume so that it wouldn’t sound like Firefox was older or behind Chrome in development. Firefox did not literally travel from 1.0 to 100. So it probably either has fewer or more than 100 versions, depending on how you count.

UPDATE: OK I was wrong, and that was sloppy of me, I should have actually checked instead of relying on my flawed memory. There are in fact at least 100 versions of Firefox. Seems like there are probably more than 100, but it’s not misleading to say that there are 100 versions if there are more than 100.

That said, this looks like a great release with useful features. Caption for picture-in-picture video seems helpful, and I’m intrigued by “Users can now choose preferred color schemes for websites.” On Android, they finally have HTTPS-only mode, so I can ditch the HTTPS Everywhere extension.
Wikipedia lists 100 major versions from 1 to 100.
https://en.m.wikipedia.org/wiki/Firefox_version_history
What did happen is that Mozilla adopted a 4 week release cycle in 2019 while Chrome was on a 6 week cycle until Q3 2021.
They didn’t change their version scheme, they increased their release cadence.
Oh, but they did. In the early days they used a more “traditional” way of using the second number, so we had 1.5, and 3.5, and 3.6. After 5.0 (if I’m reading Wikipedia correctly) they switched to increasing the major version for every release regardless of its perceived significance. So there were in fact more than 100 Firefox releases.
https://en.wikipedia.org/wiki/Firefox_early_version_history
I kinda dislike this “bump major version” every release scheme, since it robs me of the ability to visually determine what may have really changed. For example, v2.5 to v2.6 is a “safe” upgrade, while v2.5 to v3.0 potentially has breaking changes. Now moving from v99 to v100 to v101, well, gotta carefully read release notes every single time.
Oracle did something similar with JDK. We were on JDK 6 for several years, then 7 and then 8, until they ingested steroids and now we are on JDK 18! :-) :-)
Sure for libraries, languages and APIs, but Firefox is an application. What is a breaking change in an application?
I got really bummed when Chromium dropped the ability to operate over X forwarding in SSH a few years ago, back before I ditched Chromium.
Changing the user interface (e.g. keyboard shortcuts) in backwards-incompatible ways, for one.
And while it’s true that “Firefox is an application”, it’s also effectively a library with an API that’s used by numerous extensions, which has also been broken by new releases sometimes.
My take is that it is the APIs that should be versioned because applications may expose multiple APIs that change at different rates and the version numbers are typically of interest to the API consumers, but not to human users.
I don’t think UI changes should be versioned. Just seems like a way to generate arguments.
It doesn’t apply to consumer software like Firefox, really. It’s not a library for which you care if it’s compatible. I don’t think version numbers even matter for consumer software these days.
Every release contains important security updates. Can’t really skip a version.
Those are all backported to the ESR release, right? I’ve just noticed that my distro packages that; perhaps I should switch to it as a way to get the security fixes without the constant stream of CADT UI “improvements”…
Most. Not all, because different features and such. You can compare the security advisories.
Oh, yeah, I guess that’s right. I was focused in on when they changed the release cycle and didn’t think about changes earlier than that. Thank you.
Personal: framework laptop kitted with wifi, 1 tb nvme, zfs on root, and 32 gigs of ram on one stick (for easy upgrading later to 64). I still like it, would still pick it again.
Work: 2018 mbp 16 inches, 16 gigs of ram. Given a chance I’d upgrade to an M1. I thrash out of ram without really trying.
What OS do you use? How is the battery life with non-windows?
I’ve heard there have been battery issues under Linux.
As far as I know, the issue is with Tiger Lake being broken, not capable of entering certain sleep states. It should manifest itself on Windows too. I’ve stopped using sleep; instead I turn off my laptop every time the lid is closed. Fortunately, startup time is really short.
I’m using Fedora on my Framework, as that’s what Framework was recommending as having the best hardware support when I got it. (They now also bill Ubuntu as “Essentially fully functional out of the box.”)
The problem I was having was that the laptop would completely drain its battery with the lid closed in “s2idle” mode. I was able to fix this by switching to “deep” sleep, at the cost of it taking ~10 seconds to wake up, which has not really inconvenienced me. https://github.com/junaruga/framework-laptop-config/wiki/Battery-Life:-Change-sleep-mode-from-s2idle-to-deep
There are probably more advanced things I could do to improve battery life, but with that straightforward fix, it doesn’t lose more than 10-20% of battery charge if left sleeping unplugged overnight. That’s good enough to be usable for my purposes.
This seems correct. I tell it to go into deep sleep, but the battery drain when suspended is still too high. Three days unplugged at most.
But I use my suspend to ram ability, and the battery drain there is zero. I get about 6 hours active usage if I squeeze on my entirely untuned Void Linux install. I’m comfortable using about 30 to 50 percent of the battery on my most common flight routes (2.5, 3.5 hours flight time)
I have a Dygma keyboard. I use layers to map hjkl to arrow keys, some for media keys, and a couple of special mappings for browser shortcuts.
I have a very minimal customization.
Layman’s question: why is systemd not using semantic versioning? Hard to understand if any breaking changes will be coming to distros upgrading to systemd 250. I am assuming it should correspond to something like 1.250.0 if full compatibility is preserved?
It may be as simple as Systemd predating the Semantic Versioning spec by almost five years.
are you sure?
none of the specs on semver.org are dated, but there is a wayback machine snapshot from 2009. systemd 1 was 2010.
I also feel like the practice existed long before the semver spec, but that github co-founder certainly seems to be taking credit for it…
SemVer certainly matches how version numbers were first explained to me in 2007.
yet the guy who added tracking scripts to avatars is like “I propose a simple set of rules and requirements…”
The first Semver commit (ca64580) was 14 Dec 2009, with its first release (v0.1.0) the next day. The first Systemd commit (f0083e3) was 27 Apr 2005, with its first release (0.1) the same day.
I think you’re right that the first stable release of Systemd came after the Semver spec and that various forms of that practice were already around before it. In my (somewhat unreliable) memory it took years for Semver to reach the popularity it has now where it’s often expected of many projects.
on wikipedia the initial release for systemd is listed as 30 March 2010, without any citation. perhaps it should read 27 April 2005.
also I shouldn’t have assumed the initial release was called systemd 1. if they do point releases maybe they are indeed using semantic versioning.
I think that semantic versioning is a lie, albeit a well-intentioned one. All it tells you is what the author thinks is true about their consumers’ expectations of their code, not what is actually true, so it can mislead. Having an incrementing release version says the same thing: “Something has changed” and gives the same practical guarantees: “We hope nothing you did depended on what we changed”.
I don’t see it as a lie, more like a best effort guess at compatibility, which is really the best we could hope for.
Semantic versioning is more of a social contract than a formal proof of compatibility.
With that reasoning, why bother with any communication at all?
True in this case, but there are ecosystems that will help authors enforce semantic versioning, e.g. Elm where the compiler makes you increase the major version if it can know there are API changes, i.e. when the type of an exported function changed.
I don’t think it’s supposed to have breaking changes, is it?
The breaking changes are documented in the release notes, but there’s very few of them considering the scope available.
are you sure they aren’t using semantic versioning? they do have minor releases like 249.7.
I still use gvim for pasting things in scratch files quite a bit. Is there a recommended GUI for neovim these days? I tried one a few months ago, but it was barely usable
I don’t know about recommended, but I’m enjoying neovide quite a bit: it’s basically console nvim with small improvements that don’t try to make it into something else.
I use vimr for this same scenario. I haven’t had any issues.
vimr++
I’ll add that the main developer is very helpful even in the face of very out-of-the-ordinary problems.
I also use vimr as a daily driver and have been happy with it. I’ve also used neovim-qt in the past and that was very solid.
My advice—just do it!
I started blogging in 1999 (http://boston.conman.org/archive/), and back then, there wasn’t much in the way of blogging software. At first, it was just a series of hand written HTML files as I started on the software [1]. As time went on, I imported the entries I had into the blogging engine I wrote and continued on [2].
The original idea for my blog was to keep friends up to date while I was away in Boston for a short contracting job (I never did get that job) but over the years, it’s more for me than for other people, but if others read it, that’s fine. Because of that, there’s no single subject my blog is about—it’s more of an online journal (which was popular in the mid-to-late 90s), but for me, that’s okay.
So, just do it already!
[1] Which I still use, by the way. https://github.com/spc476/mod_blog
[2] It took me nearly two years writing the software before I said “Enough! This won’t ever be perfect” and released it. There were features I was stressing over that, in the long term, turned out not to matter at all.
I think it is wonderful that you have all your old posts still published and available. I’ve lost anything I had before 2008, but I’ve made a point to keep everything available that I can.
How recoverable is a corrupted archive? Is data lost only at the corrupted location, or would the whole archive be lost?
Depending on the corruption, it may all be lost. The archive is validated but has no error-correction metadata. I pondered raid-like wrapper bottles, but haven’t done anything about them yet.
The de-duplicated database sounds similar to solid compression, which would lose the whole archive on a small damage, but streaming aspect makes me wonder if it’s organized in a way that enables resiliency.
I’ve really enjoyed using go-chi. My requirements are rather basic, just simple routing.
Upgrading @golang versions is actually a pleasurable task for me:
Does any other language get this as right?
Go’s secret sauce is that they never† break BC. There’s nothing else where you can just throw it into production like that because you don’t need to check for deprecations and warnings first.
† That said, 1.17 actually did break BC for security reasons. If you were interpreting URL query parameters so that ?a=1&b=2 and ?a=1;b=2 are the same, that’s broken now because they removed support for semicolons for security reasons. Seems like the right call, but definitely one of the few times where you could get bitten by Go.
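A small illustration of the semicolon change, as I understand the 1.17 release notes (treat the exact error text as approximate):

```go
// Sketch of the Go 1.17 query-string change; exact error text may differ.
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Before 1.17 this parsed the same as "a=1&b=2"; since 1.17 the pair
	// containing a raw semicolon is rejected.
	vals, err := url.ParseQuery("a=1;b=2")
	fmt.Println(vals, err) // map[] invalid semicolon separator in query

	// Servers that still need the old behaviour can wrap their handler in
	// http.AllowQuerySemicolons, which rewrites ";" to "&" before parsing.
}
```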
Another issue is that the language and standard library have a compatibility guarantee, but the build tool does not, so e.g. if you didn’t move to modules, that can bite you. Still, compared to Python and Node, it’s a breath of fresh air.
I’ve been upgrading since 1.8 or so. There have been (rarely) upgrades that broke my code, but it was always for a good reason and easy to fix. None in recent memory.
Are semicolons between query params a common practice? I’ve never heard of this before.
No, which is why they removed it. It was in an RFC which was why it was added in the first place.
1.16 or 1.15 also broke backwards compatibility with the TLS ServerName thing.
Java is damn good about backward compatibility.
From what I recall, their release notes are pretty good as well.
I had a different experience, going from Java 8 to Java 11 broke countless libraries for me. Especially bad is that they often break at run- and not at compile time.
As someone with just a little experience with Go, what’s the situation with dependencies? In Java and maven, it becomes a nightmare with exclusions when one wants to upgrade a dependency, as transitive dependencies might then clash.
It’s a bit complicated, but the TL;DR is that Go 1.11 (this is 1.17, recall) introduced “modules” which is the blessed package management system. It’s based on URLs (although weirdly, it’s github.com, not com.github, hmm…) that tell the system where to download external modules. The modules are versioned by git tags (or equivalent for non-git SCMs). Your package can list the minimum versions of external packages it wants and also hardcode replacement versions if you need to fork something. The expectation is that if you need to break BC as a library author, you will publish your package with a new URL, typically by adding v2 or whatever to the end of your existing URL. Package users can import both github.com/user/pkg/v1 and github.com/user/pkg/v2 into the same program and it will run both, but if you want e.g. both v1 and v1.5 in the same application, you’re SOL. It’s extremely opinionated in that regard, but I haven’t run into any problems with it.
Part of the backstory is that before Go modules, you were just expected to never break BC as a library author because there was no way to signal it downstream. When they switched to modules, Russ Cox basically tried to preserve that property by requiring URL changes for new versions.
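Here’s a hedged sketch of what that looks like in practice, with made-up module paths and identifiers:

```go
// Hypothetical example; module paths and identifiers are made up.
//
// go.mod of the library once it breaks compatibility:
//
//	module example.com/widgets/v2
//
// A consumer can then hold both major versions at once, because ".../v2" is,
// as far as the toolchain is concerned, a different module:
package main

import (
	widgets "example.com/widgets"      // satisfied by the newest v1.x.y tag
	widgetsv2 "example.com/widgets/v2" // satisfied by v2.x.y tags
)

func main() {
	_ = widgets.New()   // old API
	_ = widgetsv2.New() // new, incompatible API
}
```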
The module name and package ImportPath are not required to be URLs. Them being a URL is overloading done by go get. Nothing in the language spec requires them to be URLs.

Yes, but I said “TL;DR” so I had to simplify.
I also have only a little experience with Go. I have not yet run into frustrations with dependencies via Go modules.
Russ Cox wrote a number of great articles about how Go’s dependency management solves problems with transitive dependencies. I recall this one being very good (https://research.swtch.com/vgo-import). It also calls out a constraint that programmers must follow:
Is this constraint realistic and followed by library authors? If not, you’re going to run into problems with Go modules.
I’ve run into dependency hell in: Java, JavaScript, Python, and PHP – In every programming language I’ve had to do major development in. It’s a hard problem to solve!
Ah, the dialectic of dependencies…
It is (obviously) not realistic for most software produced in the world.
I strongly agree. The first time major stuff broke was Java 9, which is exceedingly recent, and wasn’t an LTS. And that movement has more in common with the Go 2 work than anything else, especially as Java 8 continues to be fully supported.