“Rather, it’s the fact that Microsoft has blatantly copied us in Windows 11, and as a result, people are starting to see Plasma as a cheap clone of Windows again”
Is there even one designer at MS who uses Linux? It’s pretty clear that with Win11 they copied macOS. But anyway, as a KDE user, what makes it special is the huge amount of customizability. Nice defaults are good for first impressions, but letting the user customize the crap out of everything is by far the best thing about KDE.
IMHO, IDEs and complex editor configurations have two or three major problems:
And one minor problem:
In other words, “UNIX is my IDE”
Ok, let’s go:
“more failure modes, some of which might not be immediately obvious or understandable”
Then let’s all switch back to DOS from Linux/Windows, because clearly simpler software is better, right?
“if you ever change your environment/work on multiple machines, you’re likely not going to be able to take your environment with you”
Visual Studio, for example, has a way to sync your settings to a new environment/machine.
“if you SSH into remote machines, a graphical IDE might not work very well (though from what I understand it’s getting better lately)”
Agree with this one: just don’t SSH for development; running things locally is so much better/faster.
Indeed! It didn’t make the final edit, but Mitchell spoke about these cases as well. He needed to do a lot of editing while sshed into remote machines, so being productive in minimally configured vim has been an advantage.
My unwillingness to work with high-latency low-fidelity editors over SSH makes me fight for portable software that is runnable locally.
I’ve moved some projects from “to run this, start a k8s cluster” to “to run this, run
I largely agree with you, but VSCode does solve 2 and 3 fairly nicely via settings sync and remote editing, respectively.
How would this be different from shipping an OS update that merely contains Safari fixes, if there were an urgent need to release a Safari fix? Their release timing is not a technical limitation.
Apple does update Safari independently on macOS. I don’t know why they don’t on iOS. My guess is that there are arcane details of the iOS build/release processes that make it more difficult than it seems from the outside.
The core functionality is in the WebKit framework, a system framework used by many Apple and third-party apps, and updating that should require an OS update; but I think on macOS they get around that by bundling Safari updates with their own private copy of WebKit.
Maybe WebKit is more baked into iOS than it is on macOS? It’s exposed to arbitrary apps as part of the system API instead of being a library that apps can bundle in.
It’s a system framework on all Apple platforms, i.e. a dynamic library in /System/Library/Frameworks.
Yeah if it’s a system framework of the OS they probably can’t just push updates whenever they want. They have a release schedule that they need to follow.
Nobody uses package managers on Windows, and they’re a last resort on macOS, so you are gonna have to roll your own auto-update, unfortunately.
On the Linux side, tbh pretty much the only software I’ve interacted with that “just works” and isn’t a nightmare of broken shit has been software that entirely sidesteps the traditional Linux ecosystem. So software that is either written in Go, or distributed as containers. With Go, you go to the project’s GitHub, download a 100% statically linked binary, and it works 100% of the time forever. Containers are less reliable because you have a gigantic runtime, but you can at least find statically linked versions of podman etc.
Beyond that you’ll have to accept that your software will not work a significant % of the time for your users for reasons that are entirely out of your control.
they’re a last resort on macOS
brew install xyz is the first command I run on my Mac when I try to install xyz. Only if that command fails do I google (DDG, in fact) the installation instructions.
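That habit can be sketched as a tiny shell function. This is only a sketch of the workflow described above; `xyz`, the function name, and the fallback message are placeholders, not a real formula or API:

```shell
# try_install: attempt Homebrew first; if brew is missing or the
# install fails, fall back to pointing at the project's own docs.
try_install() {
    pkg="$1"
    if command -v brew >/dev/null 2>&1 && brew install "$pkg"; then
        echo "installed $pkg via brew"
    else
        echo "brew failed or missing; search for $pkg install instructions"
    fi
}

try_install xyz
```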
They’re a last resort on macOS
While this is likely true for the general macOS audience, I kind of think that the overlap between “people who use CLI tools regularly” and “people who have Homebrew installed” is basically 100% minus whoever uses MacPorts :-).
Nobody uses package managers on Windows
That’s false. Both Chocolatey (choco) and winget are decent and see decent usage among advanced users.
Nobody uses package managers on Windows
The existing package managers on Windows haven’t been widely used, but now that there is an official one, winget, I feel it’s going to see much wider use.
I’ve seen a growing use of Scoop and winget in recent years, I think folks are realising that it’s still going to be the most widely used OS and MS are always making it a nicer environment to dev in!
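For anyone who hasn’t tried them, the flow is a one-liner in either tool. A sketch: the ripgrep package ID is an example from the community repository, and the `command -v` guard is only there so the snippet no-ops instead of erroring on non-Windows systems:

```shell
# On Windows: search the community repository, then install by package ID.
if command -v winget >/dev/null 2>&1; then
    winget search ripgrep
    winget install --id BurntSushi.ripgrep.MSVC
else
    # Not on Windows (or winget missing): nothing to do here.
    echo "winget not available on this system"
fi
```

The Scoop equivalent is just `scoop install ripgrep`.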
Brew taps seem to work well for CLI tools, I’m not sure I’m familiar with the sentiment that it’s a “last resort” but that might just be my bubble! Out of curiosity, are there alternatives?
Wrote a blog post on avoiding repeated keypresses in Vim, web browsers, and application switching/launching: https://superjamie.github.io/2023/01/28/repeated-keypresses
Might work on this SDL tile engine I’ve been doing for a game idea.
Want to watch both Blade Runner films again after seeing this great analysis of 2049 yesterday: https://youtu.be/OtLvtMqWNz8
Regarding the Alt+Tab switching :
In Windows and KDE, a faster “generic” way to switch windows without Alt+Tab is WinKey + number. This “activates” the window according to its taskbar index.
The one I’m using is built with AHK. Pressing Ctrl+WinKey+G, for example, checks if Firefox is running. If it is, it activates the window; otherwise it launches it.
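The AHK source itself isn’t shown in the thread, but the activate-or-launch pattern is simple. A rough X11 equivalent, sketched with wmctrl (bind it to a global shortcut in your DE’s settings; “firefox” is just the example window class here):

```shell
#!/bin/sh
# Activate-or-launch: raise an existing Firefox window if there is
# one, otherwise start the browser. Roughly what the AHK hotkey does.
if wmctrl -lx 2>/dev/null | grep -qi 'firefox'; then
    wmctrl -xa firefox              # activate the existing window
else
    firefox >/dev/null 2>&1 &       # not running: launch it
fi
```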
Removing the tabs is kind of pointless since that “file bar” (which contains the split editor icon) cannot be removed. That vertical space is still “lost” even without tabs.
It saves some space if you have "breadcrumbs.enabled": true. Though the primary motivation here is not so much saving pixels as getting rid of an inefficient way to navigate.
Except I want JUST the breadcrumbs and no tabs! So, I can’t turn tabs off without losing the breadcrumbs :(
“We’re trying to run an application on some hardware in a reasonably efficient manner, where we can easily redeploy it when it changes. “
Why not deploy on bare metal then? Kubernetes is… complex; no sane person will ever deny this. The obvious question before using it is: do you really need it? Kubernetes exists to solve a very complex problem that most people simply don’t have.
Kubernetes exists to solve a very complex problem that most people simply don’t have.
That pretty much sums up the blog post: avoid complex “enterprisey” solutions if your needs are simple. Yet it seems that Kubernetes gets treated as the answer to everyone’s problems.
I don’t think this is true. I have been a long time Xfce4 user and I would say the exact same ethos exists there for that DE.
I’m really excited about immutable distros that mainly use Flatpak for their software. The containerizing of programs and modular permissions on Linux is a big step up for general security for desktop use, and something that I always felt was missing compared to, for instance, macOS.
The containerizing of programs
Running all apps in containers means the OS, as it is now, is a failure. And Flatpaks have performance problems, and software today is already slow.
I don’t think it means the OS is a failure, but it does mean that the software distribution model is a failure. Apps in containers are still talking to the OS system call APIs, still running on the OS and, with runc or similar, using the OS for isolation. They do provide some interesting possibilities though: if you switch out runc and use a separate VM for each app then you can suspend and resume apps and live migrate them between computers. Being able to live migrate apps from my home machine to one in my office would be very interesting for hybrid working, for example.
I haven’t heard of performance issues with Flatpak, do you know of any examples of performance impacts?
My worry about Flatpak is dependency management. Last time I took a look, it was hard to query dependencies. Admittedly this was long ago. Hence, for a given CVE it’d be hard to know if all your software is patched. Has this situation changed? I’d not like to end up with obscure application bundles, like macOS.
This is the reason I like NixOS. It brings the best of both worlds. Your dependencies do not need to be in sync like with traditional package managers, but things are still tidy and builds are reproducible.
The nice thing about Flatpak is simple sandboxing, à la Firejail/bubblewrap. Nix does not have this yet, but Guix does. It was added very recently.
I think it’s important to run things with the least amount of privileges. macOS already sort of provides this by default. It’s a good last resort to defend yourself against malicious software. For example, it’d have saved many of the users that got their data stolen by a compromised PyTorch last week.
I have a similar worry with Flatpak. It seems that they have identified the difficult problem in packaging and made it worse.
The root problem with packaging is that dependencies often have unstable interfaces. This is addressed by having multiple versions installed at once. That adds a bigger problem: you now need to do security back-ports to each of the supported versions, even if upstream does not. Flatpak makes it much easier to ship different versions of dependencies, and so it compounds the emergent problem. If I have 20 apps installed and they depend on 10 different versions of libfoo, who is going to do the security back-ports of a fix in libfoo? The hard work doesn’t go away; it just becomes easier to ignore and push problems to end users.
Interesting points. Can’t really answer whether querying dependencies is less annoying now, as I don’t use Flatpak on a daily basis. Guix sounds pretty amazing though, I might try it out someday.
Guix is totally worth trying. But you should probably try the Linux distribution, GuixSD, to fully take advantage of what Guix offers. Guix has a channel with non-GNU-approved software that is seldom advertised: https://gitlab.com/nonguix/nonguix. Without this channel, it might give the impression of being way behind a normal distribution in terms of package availability.
Unless Twitter requires manual intervention to run (imagine some guys turning cranks all day long :)), why exactly would it go down?
Eventually, they will have an incident and no one remaining on staff will know how to remediate it, so it will last for a long time until they figure it out. Hopefully it won’t last as long as Atlassian’s outage!
Or everyone remaining on staff will know how to fix it, but they will simply get behind the pace. 12-hour days are not sustainable, and eventually people will be ill more often and make poorer decisions due to fatigue. This post described the automation as clearing the way to spend most of their time on improvements, cost savings, etc. If you only spent 26% of your time putting out fires and then lost 75% of your staff, well, now you’re 1% underwater indefinitely (which completely ignores the mismatch between when people work best and when incidents occur).
Even worse - things that would raise warnings and get addressed before they’re problems may not get addressed in time if the staffing cuts were too deep.
That’s how all distributed systems work – you need people turning cranks all day long :) It gets automated over time, as the blog post describes, but it’s still there.
That was my experience at Google. I haven’t read this book but I think it describes a lot of that: https://sre.google/sre-book/table-of-contents/
That is, if such work didn’t exist, then Google wouldn’t have invented the job title “SRE” some time around 2003. Obviously people were doing similar work before Google existed, but that’s the term that Twitter and other companies now use (in the title of this blog post).
(Fun fact: while I was there, SREs started to be compensated as much as or more than Software Engineers. That makes sense to me given the expertise/skills involved, but it was a cultural change. Although I think it shifted again once they split SRE into 2 kinds of roles – SRE-SWE and SRE-SysAdmin.)
It would be great if we had strong abstractions that reduce the amount of manual work, but we don’t. We have ad hoc automation (which isn’t all bad).
Actually Twitter/Google are better than most web sites. For example, my bank’s web site seems to go down on Saturday nights now and then. I think they are doing database work then, or maybe hardware upgrades.
If there was nobody to do that maintenance, then eventually the site would go down permanently. User growth, hardware failures (common at scale), newly discovered security issues, and auth for external services (SSL certs) are some reasons for “entropy”. (Code changes are the biggest one, but let’s assume here that they froze the code, which isn’t quite true.)
That’s not to say that Twitter/Google can’t run with a small fraction of the employees they have. There is for sure a lot of bloat in code and processes.
However, I will also note that SREs/operations became the most numerous type of employee at Google. I think there were something like 20K-40K employees under Hölzle/Treynor when I left 6+ years ago; it could easily be double that now. They outnumbered software engineers. I think that points to a big problem with the way we build distributed systems, but that’s a different discussion.
Yeah, ngl, but the blog post rubbed me the wrong way. That tasks are running is step 1 of the operational ladder. Tasks running and spreading is step 2. But after that, there is so much work for SRE to do. Trivial example: there’s a zero-day that your security team says is being actively exploited right now. Who is the person who knows how to get that patched? How many repos does it affect? Who knows how to override all deployment checks for all the production services that are being hit and push immediately? This isn’t hypothetical; there are plenty of state-sponsored actors who would love to do this.
I rather hope the author is a junior SRE.
I thought it was a fine blog post – I don’t recall that he claimed any particular expertise, just saying what he did on the cache team
Obviously there are other facets to keeping Twitter up
For example, my bank’s web site seems to go down on Saturday nights now and then. I think they are doing database work then, or maybe hardware upgrades.
IIUC, banks do periodic batch jobs to synchronize their ledgers with other banks. See https://en.wikipedia.org/wiki/Automated_clearing_house.
I think it’s an engineering decision. Do you have people to throw at the gears? Then you can use a system that needs humans to occasionally jump in, and get better outcomes. Do you lack people? Then you’re going to need simpler systems that rarely need a human, and you won’t always get the best possible outcomes that way.
This is sort of a tangent, but part of my complaint is actually around personal enjoyment … I just want to build things and have them be up reliably. I don’t want to beg people to maintain them for me
As mentioned, SREs were always in demand (and I’m sure still are), and it was political to get those resources
There are A LOT of things that can be simplified by not having production gatekeepers, especially for smaller services
Basically I’d want something like App Engine / Heroku, but more flexible, and that didn’t exist at Google. (It’s a hard problem, beyond the state of the art at the time.)
At Twitter/Google scale you’re always going to need SREs, but I’d claim that you don’t need 20K or 40K of them!
My personal infrastructure and approach around software is exactly this. I want, and have, some nice things. The ones I need to maintain the least are immutable – if they break I reboot or relaunch (and sometimes that’s automated) and we’re back in business.
I need to know basically what my infrastructure looks like. Most companies, if they don’t have engineers available, COULD have infrastructure that doesn’t require you to cast humans upon the gears of progress.
But in e.g. Google’s case, their engineering constraints include “We’ll always have as many bright people to throw on the gears as we want.”
Basically I’d want something like App Engine / Heroku, but more flexible, but that didn’t exist at Google.
I think about this a lot. We run on EC2 at $work, but I often daydream about running on Heroku. Yes, it’s far more constrained, but that has benefits too: if we ran on Heroku we’d get autoscaling (our current project), a great deploy pipeline with fast reversion capabilities (also a recent-ish project), and all sorts of other stuff “for free”. Plus Heroku would help us with application-level stuff, like where we get our Python interpreter from and managing its security updates. On EC2, and really any AWS service, we have to build all this ourselves. Yes, AWS gives us the managed services to do it with, but fundamentally we’re still the ones wiring it up. I suspect there’s an inherent tradeoff between this level of optimization and the flexibility you seek.
Heroku is Ruby on Rails for infrastructure. Highly opinionated; convention over configuration over code.
At Twitter/Google scale you’re always going to need SREs, but I’d claim that you don’t need 20K or 40K of them!
Part of what I’m describing above is basically about economies of scale working better because more stuff is the same. I thought things like Borg and gRPC load balancing were supposed to help with this at Google though?
It can coast for a long time! But eventually it will run into a rock because no one is there to course-correct. Or bills stop getting paid…
I don’t have a citation for this but the vast majority of outages I’ve personally had to deal with fit into two bins as far as root causes go:
Because of the mass firing and exodus, as well as the alleged code freeze, the second category of downtime has likely been mostly eliminated in the short term and the system is probably mostly more stable than usual. Temporarily, of course, because of all of the implicit knowledge that walked out the doors recently. Once new code is being deployed by a small subset of people who know the quirks, I’d expect things to get rough for a while.
You’re assuming that fewer people means fewer mistakes.
In my experience, “bad” deployments are less about someone constantly pumping out code with the same number of bugs per deployment, and more about the deployment breaking how other systems interact with the changed system.
In addition fewer people under more stress, with fewer colleagues to put their heads together with, is likely to lead to more bugs per deployment.
Not at all! More that… bad deployments are generally followed up with a fix shortly afterwards. Once you’ve got the system up and running in a good state, not touching it at all is generally going to be more stable than doing another deployment with new features that have potential for their own bugs. You might have deployed “point bugs” where some feature doesn’t work quite right, but they’re unlikely to be showstoppers (because the showstoppers would have been fixed immediately and redeployed)
Too bad the binary name conflicts with another essential Unix tool
Yeah, the nerve of these FB people. Seriously, it should have been “sg”, because “sl” is also used by PowerShell in Windows.
It also speaks volumes about their culture, completely absent of Unix hacking tradition.
One never knows, I might be wrong here, but I don’t see this sapling ever becoming a full-grown tree. Successful, universally accepted projects have succeeded because they nail a specific solution to a well-understood problem and offer a clear, direct, obvious benefit to the user from day one. I don’t think this is the case here.
Signing binaries and relying on a CA to establish “trust” is already a lost cause.
Did notarization significantly improve macOS security? To me it seems it was only a roadblock for OSS software. On Windows there was malware signed with leaked developer certificates.
These are pretty different ecosystems: with end-user software signing, you’re relying on intermediate issuers (or, in macOS’s case, just Apple) to only issue certificates to “legitimate” entities. Sigstore is two or three levels below that: it’s intended to help engineers sign packages and distributions that larger programs and systems are made out of. Signing and verification are simpler and easier to automate in that context.
So what happens if the for-profit declines this request? Would the community be able to fork Gitea, come up with a new name, establish a governing body, and keep pushing forward on their own path?
Gitea started life as a fork of Gogs, so this seems entirely plausible, or even desirable, for two big reasons:
Large enterprise users have very different needs from indies and small communities, or even larger OSS projects. SSO, compliance, integration with e.g. “mature” devops tools like Jenkins and Artifactory, etc. all tend to drive enterprise usage. Smaller-scale users often care more about ease of installation and use, design and quality of the actual code, and openness to outside contributors.
DAO. ‘Nuf said.
SSO - Single Sign On
DAO - Decentralized Autonomous Organization; basically a bunch of people who get together through the power of CRYPTO to achieve common goals. Right now synonymous with “scam”, but that’s mostly because the default way of making money in crypto is via ponzi schemes.
 and yes, I count “yield farming” as a ponzi scheme.
no, they want to experiment with a DAO, but this is usually enough to get people to write off Gitea permanently
No, it is not! “Decentralized autonomous organization” is basically a synonym of “the FOSS community”; it’s just that some people have subverted the literal meaning of the words with particular implementations that are beyond terrifying. That does not mean it cannot be done better, but apparently some people are unwilling to admit that the emperor wears no clothes and resort to surface-level prejudice and childish dismissals.
If these (gitea) people have integrity and they want to find fair ways to organize this project then power to them! If not, then the reason for dismissing them is not because they used some words but rather because they did or tried to do something dishonest (which is not immediately apparent as soon as you claim you want to try to use cryptography to partially automate your organization).
I refer to my earlier comment on the initial announcement for context to interpret this one.
“decentralized autonomous organization” is basically a synonym of “the FOSS community”
I have never heard this definition, and I’ve been following FOSS since the late 90s and crypto since the white paper was published.
Note that crypto proponents love wrapping themselves in the open source mantle - almost all code is MIT-licensed, for example. But that’s just appropriating a cultural shibboleth. The ethos of crypto - artificial digital scarcity - is antithetical to what most people think of when they think of FLOSS.
Sorry but the ethos of crypto(graphy) is communicating without being misunderstood.
We figured out how to build artificial scarcity.. yay! (blegh). Now let’s build methods to manage the problem of “tragedy of the commons” - which is what we actually care about - this comes down to assessing what improves our collective security and by how much (relative to other such improvements). Such assessments are probabilistic and will be built on a social contract of sharing cryptographic commitments to assessments.
If we decide these assessments have meaning, for example by bridging into the legacy system by calling them “exchange rates” then what we’ll have are currencies that are scarce only in the sense that if you print too much you lose trust and your exchange rates suffer… a scale-free credit system; like the international stage is using to do p2p.
The system can be composed of sovereign individuals joining hands with all sorts of temporary contracts.. what happens if someone doesn’t honor their contract? Their exchange rate suffers. What happens if someone didn’t commit a legitimate improvement? Their exchange rate suffers…. This isn’t the only way to do it. I am just saying: we’ve been played for complete fools and it needs to end.
no dude… here’s the blog post: https://blog.gitea.io/2022/10/open-source-sustainment-and-the-future-of-gitea/
To preserve the community aspect of Gitea we are experimenting with creating a decentralized autonomous organization where contributors would receive benefits based on their participation such as from code, documentation, translations, and perhaps even assisting individual community members with support questions.
this doesn’t make sense if you replace “decentralized autonomous organization” with “the FOSS community.” I’m sorry but it’s definitely cryptocurrency related. any other form of organization that does those things would be less decentralized than the current community of contributors. and I can’t imagine why they would add “autonomous” unless they were referring to DAOs as people currently understand them.
Why can I call the FOSS community a DAO? I’m just saying this has precedent.
… they were referring to DAOs as people currently understand them.
Which I can agree is the concerning part.. which is why I mentioned earlier comment for context.
I am happy to be receiving engagement with this discussion and can admit that I am defending something slightly different than what they are doing but I feel that my defense gives adequate grounds as to /why/ they are doing this.
For me what is perplexing is what makes all these otherwise smart people feel like they have to limit the granularity of their discernment to “cryptobro” as soon as the notion of using cryptography to organize the causal part of societal communication is brought up. There is a lot of need for useful tools that can be trusted as well as a system of assessment that can give us confidence in funding those involved in building all this software. Incentives are hard to get right, but that is no reason not to try.. a large part of society is dedicated to that task (politics) and they are not using the best methods we know about.. why not?
I don’t see the connection between what you say is perplexing and what people have said in this thread. DAOs are a blockchain/cryptocurrency thing, which is not just “using cryptography.”
My premise is the literal intent in the phrase decentralized autonomous organization. Even though it currently refers to broken implementations it does not stop us from implementing something sane.
nobody will look through your post history to find a comment that you mention but don’t link to. and what was the point of arguing “decentralized autonomous organization” could refer to something totally different, if you agree that gitea is using it in the normal way?
I don’t see the connection between what you say is perplexing and what people have said in this thread.
Problem is, the term “DAO” is now burnt. Just like “web3” and “crypto”. No amount of explanation will be able to revert this.
Explanations maybe not, implementations definitely, the word “crypto” is only temporarily burnt.. you’ll see.
In which sense do you ask that question? Are you referring to the viability of doing so due to project size and complexity?
Other than that, if the code is MIT licensed, there’s no issue in forking. I don’t know about a governing body, a person can take initiative individually if they so wish.
Hats off to those people brave enough to daily drive these Linux/FOSS mobile devices but such a buggy experience really sounds like an uphill battle. I can see why the author swapped back.
It’s a shame because I’d love to see these projects succeed.
So would I! I think this is only a temporary hiatus - I’m genuinely hoping the Pro will be a sufficiently good experience that it’ll be an acceptable daily driver.
I’m about to try the same on my Librem 5 I finally received after years. My guess is the experience will be slow and lacking.
Mostly because these devices were conceived almost 5+ years ago, and modern tech has substantially outpaced what was an affordable developer hardware device back then.
My guess is the experience will be slow and lacking. Mostly because these devices were conceived almost 5+ years ago,
I was an early Windows Phone adopter (I shipped one of the first Windows Phone apps while working at Seesmic). I remember people thinking WP would suck because the hardware was clearly inferior to Android’s. Then it was launched, and the overall experience was amazing even on lower-spec devices.
The experience on these Linux phones will suck because they try to shrink a desktop environment (KDE in this scenario) to work on a phone. I love KDE on a high-power desktop but, to me, this is clearly the wrong approach to take when building a phone OS.
I love KDE on a high-power desktop but, to me, this is clearly the wrong approach to take when building a phone OS.
I disagree. There’s a lot to love about having a fully featured desktop OS on your phone :) I think the issue with the PinePhone was that it was underpowered; I’m hoping the PinePhone Pro will fix that.
Does KDE really need so much? I first ran KDE (beta 4) on a 233 MHz Pentium MMX with 32 MiB of RAM. My phone has 8 2.something GHz cores and 8 GiB of RAM. Even if KDE needed 10 times the resources that it needed back then, it would use a fraction of what my device can provide.
SizeUp: On macOS I moved to Rectangle, but used SizeUp for a long time. I bought an AquaSnap license a long time ago and it serves my needs. Windows now has more hotkeys and virtual desktop stuff I don’t really use, since I have my setup the way I want it.
Dash: Zeal and Velocity were the top equivalents when I last checked. Compared to window management, this area had fewer options because most people tend to open a browser rather than use a specialist tool.
As @williballenthin stated, PowerToys and Windows Terminal are also essential. You want to be on Windows 11 for the best experience going forward. Windows 10 users have been frustrated about some changes, but things are gradually reappearing.
You want to be on Windows 11 for the best experience going forward
No, you don’t. They haven’t even managed to fix the taskbar yet. Wait for 12 and stick with 10 for now.
Just concentrate on making a great cross platform desktop client. Chasing mobile will only slow you down and make you lose focus.
Lots of people like (or at least find it to be convenient, or need) to read their email on their phone, however.
I think there’s a tinge of paranoia that runs through the anti-telemetry movement (for lack of a better term; I’m not sure it’s really a movement). Product usage telemetry can be incredibly valuable to teams trying to decide how best to allocate their resources. It isn’t inherently abusive or malignant. VSCode is a fantastic tool that I get to use for free to make myself money. If they say they need telemetry to help make it better, then I am okay with that.
I think the overly generic name does not help the situation. When people are exposed to telemetry like “we’ll monitor everything and sell your data”, I’m disappointed but not surprised when they block everything including (for example) rollbar, newrelic, etc.
But MS shot itself in the foot by making telemetry mysterious and impossible to inspect or disable. They made people allergic to the very idea.
I think the overly generic name does not help the situation. When people are exposed to telemetry like “we’ll monitor everything and sell your data”, I’m disappointed but not surprised when they block everything including (for example) rollbar, newrelic, etc
It’s a bit uncharitable to read “they blocked my crash reporting service” as “they must have some kind of misunderstanding about what telemetry means” (if that’s what you’re implying when you say you’re disappointed but not surprised that people block them).
I know exactly what services like rollbar do and what kinds of info they transmit, and I choose to block them anyways.
One of the big takeaways from the Snowden (I think?) disclosures was that the NSA found crash reporting data to be an invaluable source of information they could then use to help them penetrate a target. Anybody who’s concerned about nation-state (or other privileged-network-position actor) surveillance, or the ability of law enforcement or malicious actors impersonating law enforcement to get these services to divulge this data (now or at any point in the foreseeable future), might well want to consider blocking these services for perfectly informed reasons.
I believe that’s actually correct - people in general don’t understand what different types of telemetry do. A few tech people making informed choices doesn’t contradict this. You can see that for example through adblock blocking rollbar, datadog, newrelic, elastic and others. You can also see it on bug trackers where people start talking about pii in telemetry reports, where the app simply does version/license checks. You can see people thinking that Windows does keylogger level reporting back to MS.
So no, I don’t believe the general public understands how many things are lumped into the telemetry idea and they don’t have tools to make informed decisions.
Side-topic: MS security actually does aggregate analysis of crash reports to spot exploit attempts in the wild. So how that works out for security is a complex case… I lean towards report early, fix early.
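To make the distinction above concrete, here is a hypothetical sketch of what a plain version/license-check ping typically contains. All field names and values are illustrative, not from any real product; the point is that such a payload carries coarse platform facts, not PII.

```python
# Hypothetical illustration: the ENTIRE payload of a typical
# version/license-check ping. No keystrokes, no documents, no PII.
import json
import platform


def build_version_ping(app_version: str) -> str:
    """Assemble the kind of minimal payload a version check sends home."""
    payload = {
        "app_version": app_version,      # e.g. "2.4.1"
        "os": platform.system(),         # coarse platform name only
        "license_tier": "community",     # hypothetical licensing field
    }
    return json.dumps(payload)


print(build_version_ping("2.4.1"))
```

Whether a given app actually limits itself to this is exactly what opaque telemetry makes impossible to verify, which is the problem the parent comments describe.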
You can see that for example through adblock blocking rollbar, datadog, newrelic, elastic and others.
I’m not following this argument. People install adblockers because they care about their privacy, and dislike ads and the related harms associated with the tracking industry – which includes the possibility of data related to their machines being used against them.
Adblocker developers (correctly!) recognize that datadog/rollbar/etc are vectors for some of those harms. That not every person who installs an adblocker could tell you which specific harm rollbar.com corresponds to vs. which adclick.track corresponds to does not imply that, if properly informed about what rollbar.com tracks and how that data could be exploited, they wouldn’t still choose to block it. After all, they’re users who are voluntarily installing software to prevent just such harms. I think a number of these people understand just fine that some of that telemetry data is “my computer is vulnerable and this data could help someone harm it” and not just “Bob has a diaper fetish” stuff.
It’s kind of infantilizing to imagine that most people “would really want to” give you their crash data but they’re just too stupid to know it, given how widely reported stuff like Snowden was.
You can also see it on bug trackers where people start talking about pii in telemetry reports, where the app simply does version/license checks. You can see people thinking that Windows does keylogger level reporting back to MS.
That some incorrect people are vocal does not tell us anything, really.
It’s kind of infantilizing to imagine that most people “would really want to” give you their crash data but they’re just too stupid to know it, given how widely reported stuff like Snowden was.
Counterpoint: Every time my app crashed, people not only gave me all the data I asked for, they just left me with a remote session to their desktop. At some point I switched to rollbar, and they were happy when I emailed them about an update before they got around to reporting the issue to me. So yeah, based on my experience, people are very happy to give crash data in exchange for better support. In a small pool of customers, not a single one even asked about it (and due to the industry they had to sign a separate agreement about it).
That some incorrect people are vocal does not tell us anything, really.
The bad part is not that they’re vocal, but that they cannot learn the truth themselves and even if I wanted to tell them it’s not true - I cannot be 100% sure, because a lot of current telemetry is opaque.
I don’t know how many customers you have or how directly they come in contact with you, but I would hazard a guess that your business is not a faceless megacorp like Microsoft. This makes all the difference; I would much more readily trust a human I can talk to directly than some automated code that sends god-knows-what information off to who-knows-where, with the possibility of it being “monetized” to earn something extra on the side.
People install adblockers because they care about their privacy, and dislike ads and the related harms associated with the tracking industry
ooof that’s reading way too much into it. I just don’t want to watch ads. And as for telemetry, I just don’t want the bloat it introduces.
The onus is not on users to justify disabling telemetry. The ones receiving and using the data must be able to make a case for enabling it.
Obviously, you need to be GDPR-compliant too; that should go without saying, but it’s such a low bar.
Copy-pasting my thoughts on why opt-out telemetry is unethical:
Being enrolled in a study should require prior informed consent. Terms of the data collection, including what data can be collected and how that data will be used, must be presented to all participants in language they can understand. Only then can they provide informed consent.
Harvesting data without permission is just exploitation. Software improvements and user engagement are not more important than basic respect for user agency.
Moreover, not everyone is like you. People who do have reason to care about data collection should not have their critical needs outweighed for the mere convenience of the majority. This type of rhetoric is often used to dismiss accessibility concerns, which is why we have to turn to legislation.
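The consent model described above can be sketched in a few lines: nothing is recorded until the user has seen the terms and explicitly opted in. This is a minimal illustration of the principle, not any real product’s API; all names are made up.

```python
# Sketch of an opt-in telemetry gate: collection is off by default and
# events are dropped until the user explicitly grants consent.
from dataclasses import dataclass, field


@dataclass
class TelemetryClient:
    consented: bool = False                      # default: no collection
    queue: list = field(default_factory=list)    # events awaiting upload

    def grant_consent(self) -> None:
        """Called only after the user has read the terms and opted in."""
        self.consented = True

    def record(self, event: str) -> None:
        if not self.consented:
            return                               # no consent, no data
        self.queue.append(event)


client = TelemetryClient()
client.record("formatter_used")   # dropped: consent not given yet
client.grant_consent()
client.record("formatter_used")   # recorded after explicit opt-in
print(client.queue)
```

The inversion matters: with opt-out, the first event fires before the user has had any chance to understand or refuse the collection.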
If you make all your decisions based on telemetry, your decisions will be biased towards the type of user who forgot to turn it off.
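That sampling bias is easy to demonstrate with a toy simulation. The numbers below are invented purely for illustration: if privacy-conscious power users disable telemetry at a higher rate, the telemetry sample under-represents them.

```python
# Toy simulation (all rates invented): power users opt out of telemetry
# more often, so the telemetry-visible population skews away from them.
import random

random.seed(0)

population = []
for _ in range(10_000):
    power_user = random.random() < 0.30                    # 30% power users
    opt_out = random.random() < (0.60 if power_user else 0.10)
    population.append((power_user, opt_out))

# Telemetry only "sees" users who did not opt out.
observed = [is_power for is_power, opted_out in population if not opted_out]

true_share = sum(is_power for is_power, _ in population) / len(population)
observed_share = sum(observed) / len(observed)

print(f"true power-user share:    {true_share:.2%}")
print(f"share seen via telemetry: {observed_share:.2%}")
```

Decisions driven purely by the second number would systematically shortchange the users who opted out, which is the parent’s point.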
This presumes that both:
a) using data obtained from monitoring my actions to “improve VSCode” (Meaning what? Along what metrics is improvement defined? For whose benefit do these improvements exist? Mine, or the corporation’s KPIs? When these goals conflict, whose improvements will be given preference?) is something I consider a good use in any case
b) that if this data is not being misused right now (along any definition of misuse) it will never in the future cross that line (however you choose to define it)
Along what metrics is improvement defined?
The first step would be to get data about usage. If MS finds out a large number of VSCode users are often using the JSON formatter (just an example), I assume they will try to improve it: make it faster, add more options, etc.
Mine, or the corporation’s KPIs
It’s an OSS project which is not commercialized in any way by the “corporation”. There are no commercial licenses to sell; with VSCode, all they earn is goodwill.
will never in the future cross that line
Honest question, in what way do you think VSCode usage data could be “misused”?
i assume they will try to improve that : make it faster, add more options etc etc.
You assume. I assume that some day, now or in the future, some PM’s KPI will be “how do we increase conversion spend of VSCode customers on azure” or similar. I’ve been in too many meetings with goals just like that to imagine otherwise.
It’s an OSS project which is not commercialized in any way by the “corporation”
I promise you that the multibillion dollar corporation is not doing this out of the goodness of their heart. If it is not monetized now (doubtful – all those nudges towards azure integrations aren’t coincidental), it certainly will be at some point.
Honest question, in what way do you think VSCode usage data could be “misused”?
Well, first and most obviously, advertising. It does not take much of anything to connect me back to an ad network profile and start connecting my tools usage data to that profile – things like “uses AWS-related plugins” would be a decent signal to advertisers that I’m in the loop on an organization’s cloud-spend decisions, and ads targeted at me to influence those decisions would then make sense.
Beyond that, crash telemetry data is rich for exploitation uses, like I mentioned in another comment here. Even if you assume the NSA-or-local-gov-equivalent isn’t interested in you, J Random ransomware group is just successfully pretending to be a law enforcement agency with a subpoena away (which, as we discovered this year, most orgs are doing very little to prevent) from vscode-remote-instance crash data from servers people were SSH’d into. Paths recorded in backtraces tend to have usernames, server names, etc.
“This data collected about me is harmless” speaks more to a lack of imagination than to the safety of data about you or your organization’s equipment.
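The usernames-in-backtraces leak mentioned above is also why careful crash reporters scrub paths client-side before upload. A minimal sketch of that idea (the regexes here are illustrative and deliberately incomplete, not a vetted sanitizer):

```python
# Sketch: redact the username component of common home-directory paths
# in a backtrace before the crash report leaves the machine.
import re


def scrub_paths(trace: str) -> str:
    """Replace usernames in Linux- and macOS-style home paths."""
    trace = re.sub(r"/home/[^/\s]+", "/home/<user>", trace)
    trace = re.sub(r"/Users/[^/\s]+", "/Users/<user>", trace)
    return trace


raw = 'File "/home/alice/proj/app.py", line 12, in main'
print(scrub_paths(raw))
```

Even with scrubbing, server names, module lists, and version strings in crash data remain useful to an attacker, so blocking the reports entirely is still a defensible choice.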
That point is irrelevant, since it’s impossible to prove that microsoft is NOT misusing it now and that they will NOT misuse it in the future.
People, if you’re not happy with VSCode/Electron and you don’t want to shell out for a Sublime Text license, CudaText is THE editor for you https://cudatext.github.io/
Your comment is not very helpful. I would avoid “Use X” without bringing any additional information. Anyone could make a baseless claim about X being THE editor.
But then for the time you want to work away from the desk you need an extra laptop. Not everyone needs that of course, but if you want to work remotely away from home or if you do on-call, then laptop’s a requirement.
Laptops also have a built-in UPS! My iMac runs a few servers on the LAN and they all go down when there’s a blackout.
Can’t speak for the other poster, but I think power distribution in the US would qualify as risky, and not only in rural areas. Consider that even the Chicago burbs don’t have buried power lines, and every summer there are blackouts due to AC surges. I’d naively expect at least 4 or 5 (brief) blackouts per year.
i get that, but it’s also not a very productive framework for discussion. i like my laptop because i work remotely – 16GB is personally enough for me to do anything i want from my living room, local coffee shop, on the road, etc. i do junior full-stack work, so that’s likely why i can get away with it. obviously, DS types and other power hungry development environments are better off with a workhorse workstation. it’s my goal to settle down somewhere and build one eventually, but it’s just not on the cards right now; i’m moving around quite a bit!
my solution? my work laptop is a work laptop – that’s it. my personal laptop is my personal laptop – that’s it. my raspberry pi is for one-off experiments and self-hosted stuff – that’s it. in the past, i’ve used a single laptop for everything, and frequently found it working way too hard. i even tried out mighty for a while to see if that helped ((hint: only a little)). separation of concerns fixed it for me! obviously, this only works if your company supplies a laptop, but i would go as far as to say that even if they don’t it’s a good alternative solution, and might end up cheaper.
my personal laptop is a thinkpad i found whilst trash-hopping in the bins of the mathematics building at my uni. my raspberry pi was a christmas gift, and my work laptop was supplied to me. i spend most of my money on software, not really on the hardware.
edit: it’s also hard, since i have to keep things synced up. tmux and chezmoi are the only reasonable way i’ve been able to manage!
Unfortunately I don’t think this is well known to most programmers. Recently a fairly visible blogger posted his workstation setup and the screen was positioned such that he would have to look downward just like with a laptop. It baffled many that someone who is clearly a skilled programmer could be so uninformed on proper working ergonomics and the disastrous effects it can have on one’s posture and long-term health.
Anyone who regularly sits at a desk for an extended period of time should be using an eye-level monitor. The logical consequence of that is that laptop screens should only be used sparingly or in exceptional circumstances. In that case, it’s not really necessary to have a laptop as your daily driver.
After many years of using computers I don’t see big harm in using a slightly tilted display. If anything, regular breaks and stretches/exercises make a lot more difference, especially in the long term.
If you check out jcs’ setup more carefully you’ll see that the top line is not that much lower from the “default” eye-line so ergonomics there works just fine.
We discuss how to improve laptop ergonomics and more at https://reddit.com/r/ergomobilecomputers .
(I switched to a tablet PC; the screen is also tilted a bit but raised closer to eye level. Perhaps the ‘fairly visible blogger’s setup was arranged for the photo and might be raised higher normally.)
That assumes you’re using the laptop’s built-in keyboard and screen all day long. I have my laptop hooked up to a big external monitor and an ergonomic keyboard. The laptop screen acts as a second monitor and I do all my work on the big monitor which is at a comfortable eye level.
On most days it has the exact same ergonomics as a desktop machine. But then when I occasionally want to carry my work environment somewhere else, I just unplug the laptop and I’m good to go. That ability, plus the fact that the laptop is completely silent unless I’m doing something highly CPU-intensive, is well worth the loss of raw horsepower to me.
I bought a ThinkStation P330 2.5y ago and it is still my best computing purchase. Once my X220 dies, if ever, then I will go for a second ThinkStation.
A few years ago I bought a used ThinkCentre M92. Ultra small form factor. Replaced the hard drive with a cheap SSD and threw in extra RAM and a 4k screen. Great setup. I could work very comfortably and do anything I want to do on a desktop, including development or watching 4k videos. I used that setup for five years and have recently changed to a 2-year-old iMac with an Intel processor so I can smoothly run Linux on it.
There is no way I am suffering through laptop usage. I see laptops as something suited for sales people, car repair, construction workers and that sort of thing. For a person sitting a whole day in front of the screen… No way.
I don’t get the need for people to be able to use their computers in a zillion places. Why? What’s so critical about it? How many people actually carried their own portable office vs. just doing their work at their desks before the advent of the personal computer? We already carry a small computer in our pocket at all times that covers a lot of personal work needs such as email, chat, checking webpages, conference calls, etc. Is it really that critical to have a laptop?
I don’t get the need for people to be able to use their computers in a zillion places. Why? What’s so critical about it?
I work at/in:
The first two are absolutely essential, the third is because if I want to do some hobbyist computing, it’s not nice if I disappear in the home office. Plus my wife and I sometimes both work at home.
Having three different workstations would be annoying. Not everything is on Dropbox, so I’d have to pass files between machines. I like fast machines, so I’d be upgrading three workstations frequently.
Instead, I just use a single MacBook with an M1 Pro. Performance-wise it’s somewhere between a Ryzen 5900X and 5950X. For some things I care about for work (matrix multiplication), it’s even much faster. We have a Thunderbolt Dock, 4k screen, keyboard and trackpad at each of these desks, so I plug in a single Thunderbolt cable and have my full working environment there. When I need to do heavy GPU training, I SSH into a work machine, but at least I don’t have a terribly noisy NVIDIA card next to me on or under the desk.
The first two are absolutely essential, the third is because if I want to do some hobbyist computing, it’s not nice if I disappear in the home office.
I believe this is the crux of it. It boils down to personal preference. There is no way I am suffering through the horrible experience of using a laptop just because it is not nice to disappear into the office. If anything, it raises the barrier to being in front of a screen.
Your last paragraph is exactly my thoughts. Having a workstation is a great way to reduce lazy habits IMNSHO. Mobility that comes with a laptop is ultimately a recipe for neck pain, strain in arms and hands and poor posture and habits.
I have 3 places in which I use my computer (a laptop). In two of them, I connect it to an external monitor, mouse and keyboard, and I do my best to optimize ergonomics.
But the fact that I can take my computer with me and use it almost anywhere, is a huge bonus.
OS update for thousands of VMs applied at the SAME TIME. Geez… what could go wrong, huh?
I was surprised a few years ago when I realized that all Debian-based systems have their cron.daily run at the same time. There is no randomization at all. I wonder if this is visible in mirror logs around the world.
That’s very surprising. The FreeBSD-update tool has a random sleep in its cron mode to avoid all of the machines hitting the mirrors at the same time. I’m surprised the Debian servers don’t complain.
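The jittered-start idea is simple enough to sketch: before touching the mirrors, each machine sleeps a random amount so a fleet of thousands spreads its requests over a window instead of stampeding at once. A minimal illustration in Python (the injectable `sleep` parameter is just for demonstration; a real cron job would sleep for real):

```python
# Sketch of the "random sleep before hitting the mirrors" idea that
# freebsd-update's cron mode uses; a daily update job could do the same
# as its very first step.
import random
import time


def jittered_start(max_delay_s: float = 3600.0, sleep=time.sleep) -> float:
    """Sleep a random amount in [0, max_delay_s] to spread fleet load."""
    delay = random.uniform(0.0, max_delay_s)
    sleep(delay)
    return delay


# For illustration, pass a no-op sleep so this runs instantly:
d = jittered_start(3600.0, sleep=lambda s: None)
print(f"would have waited {d:.0f}s before contacting the mirror")
```

With a one-hour window, ten thousand machines average under three requests per second at the mirror instead of ten thousand simultaneous hits.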