It’s frustrating that they describe the technologies they’re building on top of, and give a handwavy pitch of what it does (“conceptually similar to IPFS and Tor, but faster”) … but they don’t describe its functionality or feature set.
The underpinnings look great. I’m using a lot of the same crypto & RPC components in my own vaguely-similar Tendril project. I just want to know what comes in between “Step 1. DHT” and “Step 3. Profit!”
I spent a few days looking at it when it was first released and tried to summarise the main ideas in a post, which I will share in case it helps. https://thomask.sdf.org/blog/2023/08/15/a-few-notes-on-veilid.html
Very useful overview, thanks!
I appreciate your thoughts on this and wasn’t aware of this lack of encryption for keys. I’d imagine logic could be added to prevent a DHT record from being propagated (like only accepting changes for a set of keys I care to show, or some sort of blacklist like with BitTorrent)?
I find communal storage to be more and more important outside of the West because of its durability, and because it recenters the people, rather than the cables, as the stewards of the data.
Thank you for the writeup!
Personally I have a pretty low risk tolerance for [nasty/illegal media files] randomly existing on my PC or phone, even temporarily and in an encrypted form (since my device has the key).
Yeah, that’s bad. I can see why they say this is unavoidable: even if you required E2E encryption for files, how do you prevent someone uploading an “encrypted” file that’s actually unencrypted? Although the only reason to do that would be to deliberately spread illegal/compromising content, and it wouldn’t be exactly obvious or easy to find.
To some degree you accept this risk whenever you host an online service that accepts arbitrary data, but the risk feels elevated in the context of software that is explicitly designed to provide anonymous data sharing with onion routing.
Yeah. I long since gave up on the domain of P2P social software with unlimited reach, for similar reasons. A lot of these problems go away if the only peers you exchange data with are members of the user’s direct social circle; although of course it exacerbates connectivity/discovery problems.
This is a meta-problem in capability theory; we never really enumerated the various patterns that can be built with capabilities and published them in a single definitive way, so I have to link to janky old wikis or one-off blogs just to explain what’s possible.
In short, a capability-aware network allows for delegation of arbitrary capabilities, up to the expressive power of the underlying cryptographic protocol.
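To make that concrete, here is a minimal sketch of one common pattern, a signed delegation chain, written in Python with the cryptography package. It is only an illustration of the general idea, not Veilid’s actual protocol; the record fields, the resource name, and the chain-walking rules are all assumptions.

```python
# A minimal sketch of capability delegation as a signed chain, assuming an
# ed25519 keypair per participant. Not Veilid's wire format -- just the idea:
# each link grants a (possibly narrower) set of rights to the next holder,
# signed by the previous holder, so a verifier can check the whole chain.
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def pub_hex(pub: Ed25519PublicKey) -> str:
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw).hex()

def grant(signer: Ed25519PrivateKey, holder: Ed25519PublicKey,
          resource: str, rights: list[str]) -> dict:
    """Signer delegates `rights` on `resource` to `holder`."""
    payload = {"resource": resource, "rights": sorted(rights),
               "holder": pub_hex(holder)}
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "sig": signer.sign(blob).hex()}

def verify_chain(root: Ed25519PublicKey, chain: list[dict]) -> bool:
    """Check every signature and that rights only ever narrow along the chain."""
    signer, allowed = root, None
    for link in chain:
        blob = json.dumps(link["payload"], sort_keys=True).encode()
        signer.verify(bytes.fromhex(link["sig"]), blob)   # raises if forged
        rights = set(link["payload"]["rights"])
        if allowed is not None and not rights <= allowed:
            return False                                  # attempted amplification
        allowed = rights
        # the next link must be signed by whoever this link was granted to
        signer = Ed25519PublicKey.from_public_bytes(
            bytes.fromhex(link["payload"]["holder"]))
    return True

# Owner grants read+write to Alice; Alice delegates read-only to Bob.
owner, alice, bob = (Ed25519PrivateKey.generate() for _ in range(3))
c1 = grant(owner, alice.public_key(), "dht:record/123", ["read", "write"])
c2 = grant(alice, bob.public_key(), "dht:record/123", ["read"])
print(verify_chain(owner.public_key(), [c1, c2]))  # True
```

The key property is that each holder can only narrow the rights it passes on, and a verifier needs nothing but the root’s public key and the chain itself.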
I’ve read a fair bit about capabilities. But they’re a very broad technology! I don’t get the impression Veilid is a general purpose capabilities platform like E or Spritely; the website says it’s focused on storing data for social networks.
Google gets its way with anything related to the Web as a platform because it has 66% market share. Stop using Chrome if you don’t like its unilateral decision making.
I think regulation is the necessary step here. The GDPR has had real effect; this new surveillance method is in part a way for them to try to work around the GDPR. Time for updated regulation.
I’m in agreement. “Just switch” is not particularly reasonable. At best, in many years, that approach could start to reduce Google’s power. But it’s unlikely.
If we want change we have to force change through regulation.
Not regulation. Google must be split up.
Its advertising business must be kept separate from the browser AND the search.
I mean, I think there should be both, plus also speaking as an ex-Google advertising privacy person, Google’s advertising businesses are, like, at LEAST four or five distinct business models which should each be separate companies. more realistically, at least a dozen.
the current situation with Google in adtech is as if a stock exchange could also be a broker, and a company listed on the exchange, and a high-frequency trading firm, and a hedge fund, and a bank, and … well, you get the idea
I often find it cathartic to read through legal proceedings involving my former employer. the currently-ongoing one in NY state has filings which go into some detail on legal theories that broadly agree with me about this (there’s a definition in the law of what constitutes a distinct market), so that’s nice to see. maybe someday there’ll be some real action on it.
I’d also love to see an antitrust regulator look at the secondary effects of Chrome’s dominance. Google supports Chrome on a small handful of platforms and refuses to accept patches upstream that support others (operating systems and architectures). This is bad enough on its own, leaving other operating systems with less browser choice (getting timely security updates is hard when you have to maintain a large downstream patch set, for example), but it has serious knock-on effects because Chrome is the basis for Electron. The Electron team supports all platforms that Chrome supports upstream, but they don’t support anything else (they are happy to take patches but they can’t make guarantees if their upstream won’t). This means that other platforms are locked out of projects that choose Electron for portable desktop apps.
Google did take the Fuchsia patches into upstream Chromium. A new client OS from Google doesn’t have these problems but a new client OS from anyone else will. That seems like a pretty clear example of using dominance in one market to prevent new players in another (where, via Android, they are also one of the largest players) from emerging. But I am not a lawyer.
Yes. Also, don’t forget web attestation, which may very well lock out anyone running their own OS on their own hardware.
I was trying really hard to!
very much agreed.
Wouldn’t that imply regulation, as in defining criteria about when and how to split it up (assuming that would not be a voluntary step by Google)?
Splitting up Google is definitely a form of regulation. My feeling is that splitting it up is one of the forms of regulation least likely to have accidental negative consequences.
We see the negative effects of Google being together all the time: AMP was a very ugly attempt to use the search monopoly to force a change that would preserve their ad monopoly on mobile (where it was being eaten away by Facebook), at the price of breaking the web. More recently, the forced transition from Google Analytics Universal Analytics to Google Analytics 4 was something only a monopoly would do. No company that actually expected its analytics to make money directly would break every API so gratuitously.
That said, even break ups can have unexpected consequences. The AT&T break up of the 80s did lead to a telecom renaissance in the 90s, but it also fatally crippled the Unix team and led to the end of Bell Labs as a research powerhouse.
Did it? The division into RBOCs had dubious benefits for consumers, because it replaced a well-regulated national monopoly with several less regulated local monopolies. The original plan of splitting out Western Electric would have made a lot more sense (WE was getting creamed by Nortel in the switching market, breaking up the phone system messes up the balance sheet elsewhere), but AT&T execs thought computer revenue from commercializing Unix was too good.
I am not sure if breaking up AT&T did any good for me as a consumer, since the only internet choices I have are AT&T and Comcast! The US feels like an undeveloped country, with crawling internet speeds here in the San Francisco Bay Area.
The current “AT&T” is really Southwestern Bell, which somehow was allowed to eat all its neighbors. It is silly to let the telcos merge into a megablob a short decade after breaking them up in the first place.
In the broadest sense, yes, but I feel that the term has come to mean setting up rules of conduct for the regulated businesses and possibly some form of oversight. Somehow it doesn’t pop into my mind that when the large companies call for regulation, they might actually be asking to be split up. I hope I make sense.
It’s not as if the choices are mutually exclusive.
I abandoned Chrome as a daily driver a few years back, but I’d do it today in a heartbeat based on this news. I rather enjoy the Firefox user experience, and switching was not a huge cost. I suppose YMMV and if switching does pose a large cost for someone, that’s their calculus, it’s just hard for me to imagine.
I’m also pushing for regulation however I can (leaving messages for my congresscritters, for what that’s worth). For me, I can’t imagine doing that but continuing to use Chrome.
Advertising would help too. This announcement is buried for a reason. Google may have just handed Mozilla a huge cannon to use to get people off Chrome and onto Firefox, but Mozilla has to actually take advantage of it.
It’s not clear to me that this does bypass the GDPR. The GDPR requires informed consent to tracking. It sounds like this uses intentionally misleading text so will not constitute informed consent. It’s then a question of whether it counts as tracking. Google is potentially opening up some huge liability here because the GDPR makes you liable for collecting PII in anonymised form if it is later combined with another data set to deanonymise it.
I’d agree with that if it had worked on Microsoft, Apple, Samsung, and Sony (and Google). We need more than regulation; we need a cultural shift away from things like Google Chrome being the “de facto” browser for the Web. We have to get people to understand that they have a choice.
I would say regulation absolutely worked on Microsoft. A key part of why Google was able to succeed in the early 2000s was that Microsoft was being very careful after losing a major anti-trust action. I was at Google at the time and I was definitely worried that Microsoft would use its browser or desktop dominance to crush the company. It never did, but I’m confident it would have without the anti-trust concern.
All regulations end up the same way: companies simply walk around them, paying consultants to figure out the legal way. The biggest players will find a way, and the poorest and smallest players will die out. And that’s one of the ways you can create a monopoly.
I just arrived in another EU country, and thanks to the derided regulation I can call and use mobile internet at the same pricing as at home. This means it’s easier for me to search for transport, lodging etc. to the benefit of both me and the providers of these services. The ones losing out are the telecom operators, who have to try to compete on services instead of inflated fees for roaming.
I can’t see any monopolies forming. Do you?
I’m not “deriding” regulations. I simply question the motives behind creating them. Maybe it’s because of the legacy of the “centrally planned economy” I was subjected to.
Also, I think you’ve just given an example of a company in a sector that requires explicit permission from the government to even start the business.
It’s not true that large companies always find a way to bypass legislation or that regulation is always anti-competitive in any interesting sense.
Large companies can often work around regulations, but sometimes they clearly lose and regulation is passed and enforced that hurts their interests. E.g. GDPR, pro-union laws, minimum wages, etc.
Yes, richer and more powerful players are usually more likely to survive a financial hit. That’s not a feature of regulation. That’s a feature of capitalism: power and money have exponential returns (up to a point).
It has to be fixed with redistributive policies, not regulation.
Also, mobile telecoms consume a finite public good (EM spectrum noisiness in an area). They’re a natural target for public regulation. I don’t think that’s really a problem, tho I would prefer if public control was not state control.
I disagree. Companies will always try; they may not succeed. In particular, if the cost of complying with regulations is lower than the cost of finding workarounds, then they will comply. This is part of the reason that the GDPR sets a limit on fines that is astronomical: the cost of trying and failing to work around the GDPR is far lower than the cost of complying.
I’m a bit confused. I didn’t say anything about companies trying or not. I agree with all of your post except the bit about the GDPR fine limit, which I think is probably high enough (4% of global turnover) to exceed the benefits of non-compliance in most cases.
Sorry, I misread your post. And then I wrote ‘lower’ when I meant ‘higher’, so I clearly can’t be trusted with thinking today.
No worries!
I don’t want to get into the Keynes vs. von Hayek (although if redistribution is involved then maybe we should include Marx) dispute regarding whether regulations are good or bad, because the moderator removes threads related to politics, and I don’t want him to remove this one.
(also I’m not sure we can convince each other to our point of view)
I don’t really know what your position is or what you might have disagreed with me about, but I am totally fine leaving this convo here.
I did stop using Chrome, a long time ago. But, if my frontend colleagues are any indication, a deep hostility toward non-Chrome browsers is rampant among the people who are responsible for supporting them. And more and more teams just don’t bother. I would prefer not to have Chrome installed at all, but I have to because many websites that I don’t have a choice about using (e.g., to administer my 401(k), to access government services, to dispute a charge on my credit card) just flat-out don’t work in anything else.
You might have some luck reporting such issues to the responsible government agencies. They don’t usually write the sites themselves but contract the work out. The clerk will usually just forward your complaint to the supplier who will gladly bill the additional work.
The problem is systemic - if they don’t test with anything except Chrome, they might fix the “one-time issue” only for it to break the next time they make some larger change.
Oh sure it is. But if you pester the clerk a couple of times, the Firefox support requirement might just make it to the next tender spec.
Depending on the jurisdiction, supporting a single vendor’s product with public money may be illegal. It’s a direct subsidy to Google. Whether a particular state / national government can subsidise Google without violating laws / treaties varies, but even in places where they can, they typically have to follow some extra process. If you raise the issue as a query about state subsidy of a corporation then you may find it gets escalated very quickly. If it was approved by an elected person then they may be very keen to avoid ‘candidate X approved using taxpayer money to subsidise Google, a corporation that pays no tax in this state’ on PSAs in the next election.
I doubt any regulator would perceive “failed to test a web application in minority browsers” as a subsidy. Maybe if they specifically developed an application that targeted that specific proprietary vendor’s stack.
But I imagine a public organization such as a library building a virtual environment to be used specifically in VR Chat to target young audiences as part of a promotional strategy would be perceived as completely mostly fine.
In Czechia, the government purchased several (pretty important, duty declarations for example) information systems that were only usable with Microsoft Silverlight. They are still running, by the way. As far as I know, the agencies were not even fined for negligence.
Most people outside of IT treat large tech companies like a force of nature, not vendors.
I read a very apt quote[1] on HN a month ago, about how much Google values Chrome users’ thoughts, which directly relates to people complaining, but then continuing to use it:
Chrome user opinion to them is important to their business in about the same way meatpackers care about what cattle think of the design of the feeding stations. As long as they keep coming to eat, it’s just mooing.
1: https://news.ycombinator.com/item?id=37035733
More like make sure you convert everyone around you as well. If you have any say in your company policy, just migrate your office staff to Firefox. Make sure to explain to your family and convert them as well. uBlock on mobile Firefox should help to ease some conversion there as well.
I’ve been using my Framework 12th Gen Intel since it launched in May 2022, and it’s been my absolute favorite laptop. I’m gratified to see I can upgrade it to the AMD mainboard; and also pick up a case that will allow me to reuse my existing Intel board; and replace the battery with a larger capacity battery. They’re really sticking by their promise of upgradability in a laptop!
Agreed, I haven’t been this happy about a hardware announcement since they first announced themselves.
Very excited to see the new countries they’re shipping to! That’s been the big blocker for me to jump on the framework train
For the lazy:
It’s ironic Taiwan wasn’t one already, since I believe they’re produced/assembled there?
Shout out to borders.
Is there a roadmap for other countries available? Like Sweden or Poland?
I live in Italy and I got the Framework I’m typing this on a month ago, I needed a new laptop so I asked a friend from Germany to forward it to me lol. I think I’ll be getting the blank keyboard when they start shipping here.
This is a puff piece with no relevant content.
I disagree. It’s futurism, so it could be wrong, but the idea that the easiest to use UI for apps has switched from a GUI to a command line is interesting. It had been observed for some time that Google is like a CLI but this takes it to a new level. Will it actually work out? Time will tell. But if you’re thinking about where the business opportunities are for programmers, it would be silly to not at least examine if LLMs could help your project.
It misses the key thing that drove GUI adoption in the first place: discoverability. In early CLIs, it was easy to list the full set of commands available in an OS and then the full set of options for each one. In DOS, I think it took two screens to list every single command available in the default install. As CLIs get bigger, this becomes harder because their designs were verb-noun. GUIs make it easy to show the set of verbs that can be applied to a specific object, which lets the user discover what the tool can do much more easily. It’s possible to design noun-verb CLIs (PowerShell has enough type info to do this, but decided to make some very poor choices from the perspective of usability). A natural-language prompt is the absolute worst thing here because it gives the user no information about what is possible and relies on their ability to scope queries to something that might work.
I think LLMs change the calculus here because it’s feasible to say, “I’m trying to get the average total value of all the monthly reports” and the LLM can shoot back “Use this code AVG(C5:X5) SUM(B5:B18)”. You don’t have to know that getting the average is AVG and getting the total is SUM. You also don’t have to preprogram in all the synonyms for the functions. Just write a bunch of documentation for one-shot learning (and that can also be AI assisted) and let it go.
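As a rough sketch of that flow (not any particular product’s API): pair a snippet of documentation with the user’s plain-English request and let the model answer with a formula. The call_llm parameter is a placeholder for whatever model client you use, and the documentation and cell ranges are invented.

```python
# Sketch of mapping a natural-language request to a spreadsheet formula by
# handing the model a snippet of documentation plus the request. `call_llm`
# is a placeholder for a real model API; the docs and ranges are made up.

FUNCTION_DOCS = """
AVERAGE(range) -- arithmetic mean of the cells in `range`.
SUM(range)     -- total of the cells in `range`.
Monthly report values live in row 5, columns C through X (C5:X5).
"""

def suggest_formula(request: str, call_llm) -> str:
    prompt = (
        "You translate requests into spreadsheet formulas.\n"
        f"Documentation:\n{FUNCTION_DOCS}\n"
        f"Request: {request}\n"
        "Reply with a single formula and nothing else."
    )
    return call_llm(prompt).strip()

# Example (with any chat-completion client plugged in as `call_llm`):
#   suggest_formula("average value across the monthly reports", my_client)
#   -> "=AVERAGE(C5:X5)"
```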
Typing is still tedious and time-consuming for an average user. It might be more convenient combined with voice, but that depends on user preferences and environment (e.g. open-plan office).
So I’d expect basic operations to still use classic buttons. Instructions may be useful for learning (“show me how…”) or tasks big and complicated enough that it’s easier to describe and delegate them than to do them yourself. However, the AI needs to be really good at this, so that checking the results and correcting it isn’t worse than doing it yourself.
Humans are terrible at writing instructions for others to follow, AI included. Usually, they can’t break down a task into atomic units of completion. I suspect this will make getting an AI to do something harder, because at best it’s a delegation problem.
The point of the current LLM tools is that it is possible to iterate. So there can be some back and forth between the user and the AI. The AI can even do the breaking down of a task for the user.
This is most likely not faster than a power user who knows how to click the right buttons or type in the right commands. But it is probably a lot faster, and definitely a much nicer experience, for everyone else.
And we’re just witnessing the early beginnings of this kind of human-machine interface. Imagine that the AI that’s assisting you has a personal profile of you, where it remembers years of context about who you are, what you work on, and what your current task is. Add on top of this voice and even body language through the webcam, and then imagine what kind of interactions are possible.
Agreed.
There’s http://microformats.org/wiki/h-feed but I doubt many feed readers support this.
If not, you can pipe it through something like https://granary.io/ so you don’t have to generate that feed.
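For reference, an h-feed is just ordinary HTML annotated with microformats2 class names; the snippet below is a generic sketch with placeholder titles, dates, and URLs, not markup from any particular site.

```html
<!-- Minimal h-feed sketch: plain HTML annotated with microformats2 classes.
     Titles, dates, and URLs are placeholders. -->
<div class="h-feed">
  <h1 class="p-name">Example blog</h1>
  <article class="h-entry">
    <a class="p-name u-url" href="/posts/hello">Hello world</a>
    <time class="dt-published" datetime="2023-08-15">15 Aug 2023</time>
    <div class="e-content"><p>Post body goes here.</p></div>
  </article>
</div>
```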
Eternal Terminal is usually a better choice than mosh (unless you’re frequently using a slow, high-latency or lossy link). ET offers native scrolling and tmux control mode.
https://eternalterminal.dev/
Last time I checked, ET needed to run as a system daemon on the remote server. In my company, I frequently need to work on remote shared Linux boxes but don’t have sudo on them. Mosh can run as a regular user.
This is a big win for me too
Eternal Terminal is fantastic but another downside is that it doesn’t support as many platforms as mosh does, eg Windows.
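For anyone weighing the two, the client-side invocations look almost identical; the difference discussed above is mostly about what has to run on the remote end. A rough sketch, with placeholder hostnames:

```sh
# Eternal Terminal: the remote end runs the etserver daemon, which is
# typically installed system-wide (hence the sudo issue mentioned above).
et user@dev-box.example.com

# mosh: the client launches mosh-server over a normal ssh login, so it
# runs under your own account with no system daemon required.
mosh user@dev-box.example.com

# tmux control mode, mentioned above, is tmux's -CC flag:
tmux -CC new -A -s work
```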
The one app I install right after installing tmux.
I feel like mosh and tmux do kind of the same thing so I usually skip tmux but then my terminal doesn’t have any scrollback 💔
They don’t do the same thing.
People use tmux to keep track of their sessions (sort of like screen and dtach and emacs daemon) and mosh also keeps track of sessions. That’s the overlap.
tmux is a terminal multiplexer. mosh doesn’t do that. they both happen to have a concept of attaching and detaching, but that’s not a functional overlap.
It’s true that mosh doesn’t multiplex terminals. Both have the functionality to keep track of your SSH sessions.
No, tmux is SSH agnostic.
That said, I see why people get the impression that tmux and mosh provide similar functionality. Both allow you to continue a terminal session after the connection to the machine was interrupted. But this is done with different mechanisms. I’d say the functional overlap is marginal.
Which is all I was saying.
Yep.
One of the reasons people use tmux, not everybody’s main reason but some people’s main reason, is to do what mosh also does.
I go the other way and use autossh, which doesn’t require a custom UDP protocol and so plays nicely with firewall rules, and dtach for the persistence. My terminal’s normal scrollback works well (I hate that tmux wants to be a terminal emulator inside a terminal emulator) and so I have a clean separation of concerns:
My windowing system manages windows and does so in a way that is consistent across different kinds of window.
My terminal emulator emulates a terminal and does so in a way that is consistent across different connected sessions.
dtach manages disconnection and reconnection.
autossh reconnects if a session is disconnected.
On macOS, the terminal provides a UUID in an environment variable for each window that is consistent across restarts of the app, so I have some logic in my profile that reconnects to remote sessions and I can reboot without losing any state.
On iOS, ssh disconnects so frequently that mosh is better than automatically reconnecting, but that has left me in a sorry state of having to use |&less (I’m on zsh) if I wanna read error messages off the screen (and that doesn’t play nicely with inputs such as passwords, and it also requires me to repeat the incantation). Most of the time I’m in Emacs, though, which has its own state-managing daemon. I use dtach for stuff outside of Emacs, like when I’m testing out and debugging daemons (before jamming them into systemd).
Can you share more about your workflow? From your comment I see that there is a TERM_SESSION_ID env var (which I was not aware of prior), which it looks like Terminal.app and iTerm2 set (but not Kitty? haven’t checked others). So how do you map those to autossh reconnects?
I wrap autossh in a shell function that creates a file under ~/cache that contains the connection string and is named with the session UUID. In my profile, there’s a line that checks for a file matching the UUID of the current session and, if it exists, runs the connection command from the shell. If Terminal.app restarts, it recreates all of my windows with the same UUIDs and starts a shell in each. The shell executes the profile, which then restores the connection.
The only minor annoyance is that autossh doesn’t have a good way of reporting whether the connection exited because I exited the remote command or for other reasons, so I have to have a prompt asking me if I want to delete the session state, rather than having it automatically cleaned up.
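Not the parent’s actual dotfiles, but a minimal zsh sketch of the scheme described above. The cache path, the rsess helper name, and the autossh flags are assumptions; Terminal.app’s TERM_SESSION_ID is the only piece taken from the thread.

```sh
# Sketch of the scheme described above (zsh). Terminal.app exports
# TERM_SESSION_ID; the cache path, helper name, and flags are made up.
SESSION_CACHE="$HOME/cache/term-sessions"

# Wrap autossh: remember this window's connection string under its UUID,
# then keep the connection alive / reconnecting.
rsess() {
  mkdir -p "$SESSION_CACHE"
  print -r -- "$*" > "$SESSION_CACHE/$TERM_SESSION_ID"
  autossh -M 0 "$@"
  # autossh can't tell us whether the exit was deliberate, so ask before
  # forgetting the session (the annoyance mentioned above).
  read -q "REPLY?Forget this session? [y/n] " && rm -f "$SESSION_CACHE/$TERM_SESSION_ID"
  print
}

# In ~/.zshrc: if this window had a live connection before a restart,
# re-run the stored command so the session comes back automatically.
if [[ -n "$TERM_SESSION_ID" && -f "$SESSION_CACHE/$TERM_SESSION_ID" ]]; then
  autossh -M 0 ${(z)"$(<"$SESSION_CACHE/$TERM_SESSION_ID")"}
fi
```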
Instead of paying for a registry service, I wonder whether it is a bad pattern to host an image registry within the cluster. I’ve done it a few times, but I’m not sure if it’s inappropriate for any specific reasons.
Only if you don’t wanna manage it.
No they didn’t. It was a 3 paragraph meta-complaint with no vision or even real content.
Still smells like a token scam, only this time they’re selling shovels.
I kept thinking this too but they make it clear that you can “swap it out” for something else.
That’s correct. But just to be clear, you don’t “swap out” blockchain for non-blockchain. Turbosrc is not a blockchain. So someone would have to fork Turbosrc and make VotePower a crypto token, and some other things. It’s perfectly useful as is and without Web3 capabilities. The point of Turbosrc is to allow voting by ‘stakeholders’ on pull requests - how VotePower is recorded (database or blockchain) is a means, not an end.
It reads like the idea is to sell you on web3 votepower, web3 being an ethereum or whatever token. I guess it lets you buy and sell repository access?
I’ve done this. My setup is to use Vite to build/pack the site, then I have a Python script that copies all the files into the Rust source tree and updates a single Rust file with a bunch of include_str! macros. Single binary with web server (I use tide) and all the artifacts. It’s pretty neat :-)
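For anyone curious what the generated file can look like, here is a small Rust sketch of the include_str! side of that setup. The paths, constant names, and lookup helper are made up, and the actual web-server wiring (tide in the parent’s case) is only gestured at in a comment.

```rust
// Sketch of embedding pre-built web assets in the binary with include_str!.
// Paths and names are illustrative; a build script or (as above) a small
// Python step would regenerate this file after each `vite build`.
// include_str! embeds UTF-8 text; use include_bytes! for images/fonts.

pub const INDEX_HTML: &str = include_str!("../dist/index.html");
pub const APP_JS: &str = include_str!("../dist/assets/app.js");
pub const APP_CSS: &str = include_str!("../dist/assets/app.css");

/// Map a request path to an embedded asset and its MIME type.
pub fn asset(path: &str) -> Option<(&'static str, &'static str)> {
    match path {
        "/" | "/index.html" => Some((INDEX_HTML, "text/html")),
        "/assets/app.js" => Some((APP_JS, "application/javascript")),
        "/assets/app.css" => Some((APP_CSS, "text/css")),
        _ => None,
    }
}

// The web server (tide or otherwise) then just serves `asset(path)` from
// memory -- no files need to sit next to the binary at runtime.
```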
How much do we want to bet that some domain swatter is partly to blame here?
Maybe someone scraped it from the source code, yeah. Probably more likely than a troll who’s reading tests for 13-year-old bugs and has a free extra domain in some plan, waiting to be claimed.
Made a lot of progress in my Pong game in Godot! Might show it on Itch by Monday. Aiming to see what next I can explore making.
Now moving my game development learning towards Tiled for a faster way to design maps. I have a sense of the game I’d like to make and already want to design some aspects in a GUI so I can prototype a bit faster.
I actually wish more IRC clients and servers supported this as an authentication method. It’s something people understand, it would introduce a simpler way to do ACLs, and it would make IRC more of a viable option against Matrix and XMPP (both things that I’d run myself if I decide to spin up a community).
Lots more game development learning using Godot. I’ve been following a tutorial on YouTube and I’m almost done with what’s available so far. I’d like to do the RPG tutorial later but I think I’ll focus on making just Pong.
I’m also cleaning up on my self-hosted livestreaming with a daemon that’ll handle almost everything (because why not make your own version of StreamElements).
I wasn’t satisfied with the blog’s markup and JS requirement, so I went to look for the original article, because I hoped it was written in Markdown or something similar. It is not.
ngl this is wild
Oh dear…
Working on adding true cursor-based pagination to https://indieweb.org/Koype. Then gonna see if I can fix my site’s archives to actually show things on a per-hour, -day and -week basis.
I’m a little conflicted on this article. On one hand, it covers a lot of ground in terms of features. On the other, Rust on Nails doesn’t provide the “batteries included” and “convention over configuration” of even the earliest versions of Rails. Even though the author calls it a framework, they start by choosing a web server and a router, and keep doing that for all the components. The amount of boilerplate will shock any RoR developer unfamiliar with Rust, too.
So, A for effort, but it’s kinda false advertising. Don’t get me wrong, I’m excited it’s possible to cover that much and piece together a semblance of a framework, but the RoR comparison is a little premature.
“Kinda” is gracious. This felt like a write-up of “things I used”.
Yes, yes, yes! SQLite is the choice to make for databases. Like especially with Litestream, you get to treat your database as one treats their filesystem (which is what you end up having to rely on anyway). And it becomes “infinitely” storable since you can use object storage with Litestream. So excited to see them embrace this and eager to learn from them.
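For context, pointing Litestream at object storage is only a few lines of configuration. The sketch below uses made-up paths and bucket names, and the keys are an approximation of Litestream’s documented format rather than something copied from a working setup.

```yaml
# Approximate litestream.yml sketch (paths and bucket are placeholders):
# continuously replicate a local SQLite file to an S3-compatible bucket.
dbs:
  - path: /var/lib/myapp/app.db
    replicas:
      - url: s3://my-backup-bucket/myapp
# Then run something like `litestream replicate -config /etc/litestream.yml`
# alongside the app; restores come back via `litestream restore`.
```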