You say that like it’s a bad thing. :) If lots of people feel the same way we’ll get a lot of similar blog posts. Concerns about originality should properly be directed to submitters or moderators imho.
IMO if you want to “vote with your wallet”* and enact change, then making a blog post about it is almost as important as the action itself - GitHub’s #1 feature is its network effect, and the best way to move that feature to e.g. SourceHut is to convince other people to switch from GitHub to SourceHut.
And, the best way to convince other people to switch is to convince them that other people are switching, and that they will gain some network effect (and social credit) if they switch too.
*Not necessarily endorsing this ideology, but it’s a super common belief and worth addressing.
Talking about doing something and showing that you’re actually doing it are two different things though. Go ahead and make a blog post about it, tweet it or share it on Facebook; I don’t care so much about that. I would much rather see new repositories linked to Lobsters being hosted on SourceHut to show off its viability. I believe influencing the public through action, not words, is much more effective.
How so? I think that the author is decent at explaining that they considered several factors on a per-repository basis, only migrating some repositories.
Yes, but given the context of these posts being made quite often it’s starting to feel like an echo chamber. Yes, we are all aware of the issues with GitHub being owned by Microsoft and their latest ventures into Copilot ethics. I’ve seen so many of these posts on both Lobsters and Hacker News. Honestly, I’d expect as much from the latter, but I was under the assumption that Lobsters wouldn’t be as politically charged.
I agree - it at least makes you consider/think about the choices out there. While I think sourcehut has grown a lot in recent years I don’t like the layout a ton personally. It would be much different if the post ended with “now join me and leave Github” or something
I don’t think it’s necessarily pedantic. This blog post explicitly states, “why? because it’s my blog, that’s why.” No validation seeking, just a simple statement, in my opinion.
Yes, but pedantic on Lobsters. Everyone is entitled to their own personal opinions and choices, but when you’re echoing the masses is it really noteworthy enough to make the front page?
but when you’re echoing the masses is it really noteworthy enough to make the front page?
Curious how you determined that this is somehow the opinion of “the masses”.
I enjoyed the article because I would never have known about this repo host otherwise, and his reasoning was interesting. Perhaps there’s a mass of articles about SourceHut I’ve simply missed.
Thank you. So these are links to stuff hosted there, not so much articles discussing SourceHut. I guess when I glance at the article headings I don’t take much note of the linked URL (not that sr.ht would have meant anything to me before today). But this diversity is encouraging.
Yeah well.. the creator/proprietor of SourceHut managed to ruffle enough feathers to get themselves banned.
But there’s still some stuff submitted here that’s about it or mentions it. Interestingly searching for “SourceHut” turns up a number of submissions along the theme of “get your ass off GitHub, stat!”:
I’m still at a loss why anyone would knowingly use the Chrome browser. It was created with exactly one purpose: To help Google track you better. Oh wait, it wasn’t Google wanting to “give something back” or “donate something for free out of the kindness of their hearts”? Nope. It was created as a response to browsers like Firefox and Safari that were slowly improving their privacy settings, and thus reducing Google’s ability to violate your privacy for a fee.
And if you’re curious, Google didn’t create and give away fonts out of the kindness of their hearts, either. If you’re using Google fonts, you aren’t anonymous. Anywhere. Ever. Private mode via a VPN? Google knows who you are. Suckers. Seriously: How TF do you think they do perfect fingerprinting when everyone is using only a few browsers on a relatively small number of hardware devices?
TLDR - Google dropped the “don’t be evil” slogan for a reason.
To be fair, they also wanted their web apps to run better. They went with Google Chrome rather than making Gmail and Docs into desktop apps. If the owner of the platform I make my apps on is a direct competitor (Microsoft Office vs Google Docs), I wouldn’t be happy. Especially when that competitor platform sucks. Now that Chrome holds the majority of market share, Google can rest easy knowing that their stuff runs how they want for most users. Chrome also pushed the envelope for browser features they directly wanted to use in their apps.
The tracking and privacy thing is a LOT more pronounced now than it was in 2008 when Chrome came out. That’s definitely an issue that’s relevant today, but you can’t really pretend it was the sole driving force of the original release of Google Chrome.
I knew that Google was building Chrome for the purpose of tracking back when it was still in development, based on private comments from friends at Google. I don’t know if that was 2008, but it was somewhere in that time period. Yes, they needed a better browser experience to support some of their product goals, but Google’s overwhelmingly-critical product is tracking users, and protecting that one cash cow is important enough to give away gmail and browsers and fonts and phone OSs (and a thousand other things) for free.
Google loses money on pretty much everything, except advertising. And despite whatever the execs say in public, they’re actually quite content with that situation, because the advertising dollars are insane.
“If you’re not paying for the product, then you are the product.”
To be fair, they also wanted their web apps to run better.
They could have done that by funding development in Firefox.
It would have been hard to work within an existing technical framework, especially considering that Firefox in 2008 or whatever was probably saddled with more tech debt than it is today, but it’d certainly be an option.
You can’t strong-arm the web into adopting the features you want by simply funding or contributing to Firefox.
And it’s not clear to me that Google would’ve been able to get Mozilla to take the necessary steps, such as killing XUL (which Mozilla eventually did many many years later, to compete with Chrome). And sandboxing each tab into its own process is probably also the kind of major rework that’s incredibly hard to pull off when you’re just an outsider contributing code with no say in the project management.
I get why Google wanted their own browser. I think they did a lot of good work to push performance and security forwards, plus some more shady work in dictating the web standards, in ways that would’ve been really hard if they didn’t have their own browser.
I still feel a bit bitter about the retirement of XUL. Back in the mid-2000s you could get a native-looking UI running with advanced controls within days. I haven’t seen anything that comes close to that in speed of development so far, except maybe Visual Basic.
Yeah, which I’m sure very conveniently prevents them from attracting too much anti-trust attention, the same way that Intel or NVidia don’t just buy AMD. But I doubt they pay any developers directly to contribute to Firefox, the way that for example AMD contributes to Mesa, Valve contributes to WINE, Apple contributes to LLVM, etc.
There’s a difference between not crushing something because its continued existence is useful to you, and actually contributing to it.
On one hand, you’re totally right. Throwing cash at keeping other browsers alive keeps their ass away from the anti-trust party.
On the other hand, again, between 75% to 95% of [Mozilla’s] entire yearly budget comes from Google. At that volume of financial contributions, I don’t think it matters that they’re not paying Google employees to contribute to Firefox—they’re literally bankrolling the entire organization around Firefox, and by extension basically its paid developers.
They pretty much were back then. At the time Google weren’t happy with the uptake of Firefox vs IE, despite promoting FF on their own platforms, and wanted to pursue the option of their own browser. Mozilla weren’t particularly well known for being accepting of large contributions or changes to their codebase from third parties. There was no real embedding story either which prevented Google from going with Gecko (the Firefox browser engine) as the base instead of WebKit.
I’m still at a loss why anyone would knowingly use the Chrome browser.
Chrome was the first browser to sandbox Flash and put Java behind a ‘click to play’. This was an extreme game changer for security.
Expanding on that, Chrome was the first browser to build sandboxing into the product from day 1. This was an extreme game changer for security.
Between (1) and (2) the threat landscape changed radically. We went from PoisonIvy and BlackHole exploits absolutely running rampant with 0 or 1 click code execution to having next to nothing in a few years - the browser exploit market, in that form, literally died because of Chrome.
Continuing on,
Firefox had tons of annoying bugs that Chrome didn’t. “Firefox is already running” - remember that? Chrome had bugs but ultimately crashes were far less problematic, only impacting a tab. Back then that really mattered.
Chrome integrates with GSuite extremely well. Context Aware Access and browser management are why every company is moving to enforce use of Chrome - the security wins you get from CAA are incredible, it’s like jumping 10 years into the future of security just by choosing a different browser.
To help Google track you better.
Whether that’s true or not, the reality is that for most of Chrome’s lifetime, certainly at least until very recently, there were very very few meaningful privacy issues (read: none, depending on your point of view) with the browser. Almost everything people talked about online was just a red herring - people would talk about how Chrome would send out LLMNR traffic like it was some horrible thing and not just a mitigation against attacks, or they’d complain about the numerous ways Chrome might talk to Google that could just be disabled and were often even part of the prompts during installation.
I don’t see why it’s hard to believe that Google wanted more control over the development of the major browser because they are a ‘web’ company and controlling web standards is a massive competitive edge + they get to save hundreds of millions of dollars by not paying Firefox for google.com to be the homepage.
Chrome has been publishing whitepapers around its features for a long time. I don’t keep up anymore and things may have changed in the last few years but there was really nothing nearly as serious as what people were saying.
If you’re using Google fonts, you aren’t anonymous. Anywhere. Ever.
Just to be clear, what you’re talking about is, I assume, the fact that if a website loads content (such as fonts) from a CDN then your browser makes requests to that CDN. Google discusses this here, although I’m not sure why this behavior is surprising:
Is there some other aspect of Google Fonts that you’re referring to? Because I’ve really lost my ability to give statements like “Google Fonts tracks you” any benefit of the doubt after a decade of people misunderstanding things like “yes, a CDN can see your IP address”.
Seriously: How TF do you think they do perfect fingerprinting when everyone is using only a few browsers on relatively small number of hardware devices?
Who says they do perfect fingerprinting? Also since when are there a relatively small number of hardware devices? An IP, useragent, and basic device fingerprinting (“how big is the window”) is plenty to identify a lot of people.
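For a sense of how little it takes, here’s a minimal sketch (browser-side TypeScript; the function name and the exact choice of signals are mine, purely for illustration) of the kind of cheap signals being described:

```typescript
// Hypothetical sketch: a handful of cheap client-side signals. Combined
// with the IP address the server already sees, these narrow most
// visitors down to a very small bucket.
function basicFingerprint(): string {
  const signals = [
    navigator.userAgent,                          // browser + OS + version
    `${screen.width}x${screen.height}`,           // display size
    `${window.innerWidth}x${window.innerHeight}`, // "how big is the window"
    navigator.language,                           // locale
    String(new Date().getTimezoneOffset()),       // coarse location hint
  ];
  return signals.join("|"); // a real tracker would hash and enrich this
}
```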
Infosec people love Chrome for architectural reasons. Which just goes to show that privacy and security are separate concerns that only marginally overlap.
I agree, Privacy is totally separate from Security. That said, I do not believe Chrome has represented a privacy concern since its inception - at least until recently, which I only say because I no longer care to follow such things.
Slap *$font,third-party into your uBlock Origin config & you won’t be downloading any fonts from third-party sites.
…But do be warned that laggards are still using icon fonts (even on contemporary sites!) despite it not being the best practice for icons for over a decade.
I’m still at a loss why anyone would knowingly use the Chrome browser.
I use it because (among other reasons) I want protection against viruses more than against Google. The last thing I heard about Firefox security (I don’t follow it actively) was Patrick Walton commenting that they have made significant improvements but still have never caught up to Chrome on security. I want Chrome’s security for lunch, there is no free lunch, and I’m okay with paying for that lunch in ad targeting data. With JavaScript disabled by default (for security), I never even see many of the ads that they might spend that data on targeting.
Your attitude is a good one: You’re conscious about what they’re probably trying to do with data about you, and you accept the trade-off for the services provided, and you do so consciously.
If Google were above-board in what they’re doing, I’d be 100% thumbs up for their behavior. But they’re not.
A major financial reason for Chrome is to save the cost of paying browser vendors to make Google the default search engine. Google pays Apple like $10 billion a year for this purpose on Safari. This is why Microsoft aggressively promoting Edge is such a threat to Google - fewer users using Google.
When I looked at this many years ago, I obviously had the same exact question, but I don’t actually have an answer. The same browser version (stock) with the same config (stock) on the same OS and OS version (stock) on the same hardware (stock) with the “same” Google fonts apparently generates a different Google fingerprint, apparently even in private browsing mode through the same VPN IP. Drop the Google fonts, and the fingerprint is apparently identical. It’s been a decade since I looked at any of this, so my memory is pretty fuzzy at this point. And of course Google doesn’t provide any hints as to how they are unveiling users’ identities; this is a super closely guarded trade secret and no one I knew at Google even gave me any hints. (My guess is that they are all completely unaware of how Google does it, since the easiest way to maintain secrets is to not tell all of your employees what the secret is.)
The Google fingerprinting does a render on a hidden canvas (including rendering text using fonts) and then sends Google a hash of that render. Somehow the use of Google fonts (note: specifically when downloaded from Google, which is what most web sites do) appears to give different users their own relatively unique hash. If I had to guess (WAG WARNING!!!), I’d suggest that at least one of the most widely distributed fonts is altered ever-so-imperceptibly per download – but nothing you can see unless you render large and compare every pixel (which is what their fingerprint algo is doing). Fonts get cached for a year, so if (!!!) this is their approach, they basically get a unique ID that lasts for the term of one year, per human being on the planet.
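The canvas-hashing half of this is a well-documented public technique; only the per-download font tweak is the WAG above. A minimal sketch of the mechanism, assuming a browser context (not Google’s actual code, which isn’t public; the probe text and font name are illustrative):

```typescript
// Hedged sketch of canvas fingerprinting: render text with a downloaded
// web font, export the pixels, and hash them. Any per-user difference in
// font bytes or rasterization shows up as a different hash.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 400;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("no 2d context");
  ctx.font = "24px Roboto, sans-serif"; // e.g. a Google-hosted web font
  ctx.fillText("fingerprint probe 😀", 10, 40);
  const blob = await new Promise<Blob>((resolve) =>
    canvas.toBlob((b) => resolve(b as Blob)),
  );
  const digest = await crypto.subtle.digest("SHA-256", await blob.arrayBuffer());
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```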
If you examine their legalese, you’ll see that they carefully carve out this possible exception. For example: “The Google Fonts API is designed to limit the collection, storage, and use of end-user data to what is needed to serve fonts efficiently.” Right. They don’t need to collect or store anything from the Fonts API. Because your browser would be doing the work for them. Similarly, “requests for fonts are separate from and do not contain any credentials you send to google.com while using other Google services that are authenticated, such as Gmail.” So they went out of their way to provide the appearance of privacy, yet somehow their fingerprinting is able to defeat that privacy.
The only thing that I know for certain is that Google hires tons of super smart people explicitly to figure out how to work around privacy-protecting features on other companies’ web browsers, and their answer was to give away fonts for free. 🤷‍♂️
I’m not normally accused of being a conspiracy theorist, but damn, writing this up I sure as hell feel like one now. You’re welcome to call me crazy, because if I read this shit from anyone else, I’d think that they were nuts.
That’s really ingenious, if true. To go along with supporting your theory, there is a bug open since 2016 for enabling Subresource Integrity for Google Fonts that still isn’t enabled.
I’m a bit sceptical about the concept; it seems to come with an enormous set of downsides - fonts are not light objects, and really do benefit from caching. Whereas merely having the Referer header of the font request, in addition to timing information and what is sent with the original request (IP address, user agent, etc), seems perfectly sufficient in granularity to track a user.
This feels too easy to detect for it not to have been noticed by now - someone would have attempted to add the SRI hash themselves and noticed it break for random users, instead of the expected failure case of “everyone, everywhere, all at once”.
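That check is cheap to run yourself. A sketch of the experiment (Node 18+, run as an ES module; the font URL is just an illustrative example of a Google-hosted file, substitute one from a real stylesheet):

```typescript
// Fetch the same font file repeatedly and compare its SRI (sha384) hash.
// If the bytes were personalized per download, hashes would differ across
// networks/sessions; if not, they should be identical.
import { createHash } from "node:crypto";

async function sriHash(url: string): Promise<string> {
  const res = await fetch(url);
  const bytes = Buffer.from(await res.arrayBuffer());
  return "sha384-" + createHash("sha384").update(bytes).digest("base64");
}

// Illustrative Google-hosted font URL; substitute one from a real stylesheet.
const url = "https://fonts.gstatic.com/s/roboto/v30/KFOmCnqEu92Fr1Mu4mxK.woff2";
console.log(await sriHash(url)); // run from several vantage points and diff
```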
The fonts on Google Fonts are constantly updated at the behest of the font owners, so the SRI hash issue being marked as WONTFIX isn’t very exciting. I wouldn’t be surprised if it’s legally easier for Google to host only one version of a font (as Google is often not the holder of the Reserved Font Name), since the Open Font License seems to be very particular about referring to fonts by name. Reading through the OFL FAQ (https://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=OFL-FAQ_web), if I were a font distributor I would be hesitant to host old conflicting versions of a font alongside each other. Plus, it’s easier to cache a single file than multiple, and it’s lower effort on the side of a font foundry, as it means they do not need to have some versioning system set up for their font families (because they’re not just one weight, type, etc).
The fonts not being versioned goes beyond the SRI hash benefits: fonts often have breaking changes[0] in them, e.g. https://github.com/JulietaUla/Montserrat/issues/60, so a designer has no way of knowing that future font changes won’t result in any changes to the page. So in my mind, it really feels like it’s the foundry that wants there to be a single authoritative version of the font.
0: I suppose a semver major version bump in the font world would be character width changes.
Even if cpurdy’s version is true, I’m sure they use every normal and not so normal trick in the book to track as well. If your singular goal is identifying users uniquely, you would be stupid to rely on only 1 method. You would want 100 different methods, and you would want to deploy every single one. So if a competing browser vendor or unique edge case happens to break a single method, you don’t really care.
I agree caching of fonts is useful, but the browser would cache the most common fonts locally anyway. It would behoove Google to set the cache lifetime of the font file as long as practically possible, even if they were not using it to track you.
I agree, fingerprinting is a breadth-of-signals game, but I just can’t believe this vector; it feels way too technically complicated compared to methods available within the same context - the idea was minute changes in the font resulting in different canvas render hashes, but a user already has a lot of signals within JS & canvas (system fonts, available APIs, etc) that are much quicker to test.
Fonts are cached per site by the browsers as a means of avoiding fingerprinting the cross-domain timing effects - Safari & Chrome call it partitioned cache; Firefox, first party isolation. So Google can tell you’ve visited a site as the referer gets sent on first load, unless they set Referrer-Policy: no-referrer, of course
I agree it’s technically complicated, but I imagine they want a variety of hard and not very hard methods. Assuming they do it, perhaps they only run it when they can’t figure out who you are from some other, easier method.
I always have a degoogled Chrome fork installed as a backup browser, in case I have website compatibility problems with Firefox. Some of my problems might be caused by my Firefox extensions, but it’s easier to switch browsers than to start disabling extensions and debugging.
On desktop I use Ungoogled Chromium. On Android I use Bromite and Vanadium. My Android fork is GrapheneOS, which is fully degoogled by default. I am grateful that Google created Android Open Source to counteract iOS, it is an excellent basis for distros like Graphene. I use F-Droid for apps.
Also, I have the Google Noto fonts installed as a debian package (fonts-noto). It’s a great font that eliminates tofu, and I thank Google for creating it. I don’t think Google is spying on my debian installed package list. If I don’t connect to google servers, they can’t see the fonts used by my browser.
I primarily rely on Ublock Origin for blocking spyware and malware, including Google spying. It’s not perfect. I can always use Tor if I feel really paranoid. The internet isn’t anonymous for web browsers if you are connecting with your real IP address (or using the kind of VPN that you warned about). Google isn’t the only surveillance capitalist on the web; I expect the majority of sites spy on you. Even Tor is probably not anonymous if state actors are targeting you. I wouldn’t use the internet at all if I was concerned about that.
Chrome initially came out in late 2008, when Firefox and Safari were actually doing OK, and IE8 was just around the corner, its betas were already out on Chrome’s release date. Chrome wasn’t even a serious player* until about 2010 or 2011, by which time IE9 was out and IE 6 was really quite dead. This article from June 2010 has a chart: https://www.neowin.net/news/ie6-market-share-drops-6-ie8-still-in-top-spot/
You can see IE8 and Firefox 3.5 were the major players, with Safari on the rise (probably mostly thanks to the iphone’s growing popularity).
I remember when Chrome’s market share was big enough that I had to start working around its annoying bugs. I tried to ignore it at first hoping it would die, but once the boss himself started pushing it, the pain kept coming. But there was a period there I didn’t hate - IE8 and Firefox 3.5 actually both worked pretty well.
Firefox was the #2 browser (behind only IE*) when Chrome was introduced, and at the time it was still growing its market share year after year, mostly at the expense of IE.
After its release, Chrome quickly overtook Safari (then #3), and proceeded to eat almost all of IE’s and Firefox’s market share. It is now the #1 browser, by a significant margin.
Interestingly, Safari did not yield market share to Chrome, and continued to grow its market share, albeit at a much lower rate than Chrome did. I assume that this growth is based on the growth of iPhone market share, and relatively few iPhone users install Chrome. Today, Safari is now solidly the #2 browser behind Chrome.
Edge (the new IE) is #3.
Firefox has dropped to the #4 position, in a three-way tie with Opera and Samsung.
Agreed. I’m not sure if IE7 was a thing until after Chrome. Also, when Chrome first came out it was a breath of fresh air, because at the time you either had to use Firefox or Opera, both of which had the issue of sites breaking that were made with IE in mind, or the whole browser locking up because one site was hung. While I won’t speculate that tracking was a primary goal of Chrome development, let’s not pretend that it wasn’t leaps and bounds ahead of what else was available at the time on the IE6-centric web.
Chrome was definitely aimed directly at IE, most likely because they couldn’t bribe MS to default to Google search and because its outdated tech made the push to web apps much harder - consider the fact that early versions didn’t run on anything other than Windows (about 6 months between 1.0 and previews for Mac and Linux), and the lengths they went to get sandboxing to work on WinXP.
I think it’s fair to say that Firefox did have an impact - but it wasn’t that Chrome was created as a response, rather that Firefox defeated the truism that nothing could dethrone IE because it was built into Windows.
I’m still at a loss why anyone would knowingly use the Chrome browser.
I generally don’t like when technology companies use their product to push some ideological agenda, so I would probably choose Chrome over Firefox if I had to choose between only those two. Also, the new Firefox tabs waste a lot of screen space, and they didn’t give any official way to return to the previous look, so that’s another argument (the last time I used FF I had to hack through some CSS, which stopped working a few updates later). The only thing I miss from FF is tab containers, but that’s about it.
But still, I use Vivaldi, which runs on Blink, so I’m not sure if I match your criteria, since your question is about “Chrome browser” not “Chrome engine”.
My work uses Google apps heavily and so I maintain a personal/work distinction in browsing by routing everything for work to Chrome. Privacy is a technical dead letter.
Yeah, I obviously have to use Chrome a bit, too. Because as a developer, ignoring the #1 browser seems stupid, any way you look at it. And a few sites only work on Chrome (not even on Safari or Firefox). I try to avoid Edge except for testing stuff, because the nag level is indescribable. “Are you super double extra sure that you didn’t want me to not be not your not default web browser? Choose one: [ Make Edge the default browser ] [ Use Microsoft suggested defaults for your web browser choice]” followed by “It’s been over three minutes since you signed in to your Microsoft cloud-like-thingy user account that we tied to your Windows installation despite your many protestations. Please sign in again, and this time we’ll use seven factor authentication. Also, you can’t not do this right now.” And so on.
I abhor and abjure the modern web, but we all have to live in it. On my Mac I use an app called ‘Choosy’ which lets me reroute URLs to arbitrary browsers, so I can use Safari without worry, given that I send all the garbage to either Chrome or a SSB.
Exciting news tbh. Freenom is a terrible steward + this change means the possibility that .ga doesn’t just get blanket-blacklisted everywhere.
Somewhat unfortunate about the bulk deletion, but I’d bet the vast majority are “free” Freenom domains, many of which are reasonably questionable anyway.
The only thing I disagree with is naming this “web maximalism”. This isn’t the web, it’s using the (misleadingly titled) “web browser” as an application platform. Which I don’t know if I disagree with. It’s exclusionary to everyone that isn’t Google, Apple and currently Mozilla, but if you’re not worried about the future of open access to these applications it is probably the best place to plonk your code, assuming your users will have the hardware capabilities to render it.
Maybe future OSes should use Linux to boot directly into a SpiderMonkey userland. I’ll go live with the orangutans at that point.
ChromeOS (as of 2012 when I bought the adorable C720) boots into a Gentoo userland with Chrome as the root window, which isn’t going deep enough. PID 1 should be a JavaScript event loop.
Maybe future OSes should use Linux to boot directly into a SpiderMonkey userland.
Why even use Linux? Why not put a WASM / WASI / WebGPU runtime directly on hardware? The Birth and Death of JavaScript is getting closer and closer to reality every year. As modern web tech gains in power, it has to absorb and address concerns that traditional multi-user OSes have had for a very long time, while also solving portability problems that traditional OSes have (often) ignored. WASM is a truly portable and reasonably performant ISA; WebGPU is a truly portable and reasonably performant compute / graphics accelerator API, WASI is a truly portable and reasonably performant syscall ABI… If we lean in really heavily to this, isn’t it exciting to think of deploying totally portable containers to essentially any hardware anywhere? (Let me use the spare compute from my RISC-V NVMe controller for some background management tasks.)
It’s also possibly horrifying (rooting out malware, botnets)… But this future has a lot to recommend it, technically.
I don’t think so, because most normal customers are not affected. They buy a console with some games and play them. Playing games on their own computer is not a feature most users want/need.
Also this case is a bit different, because Nintendo don’t choose to be evil just to be evil. They currently want to prevent pirates from playing a not-yet-released game. I don’t think this will help, but I can clearly understand the reasoning.
Hah. It’s just another day in the “Nintendo being assholes” news cycle.
Even amongst the small group of people who care right now, a good chunk are going to stop caring as soon as the next installment of ${NOSTALGIC_NINTENDO_ONLY_FRANCHISE} gets announced.
Web applications serve an action or purpose. The user wants to complete a task - not look at a pretty poster. I understand this sounds harsh, but many designers design more for their own ego and portfolio rather than the end-user.
Sometimes I feel like I live in a totally different plane of existence as minimal-web enthusiasts. I want my tools to look good! A couple extra HTTP requests to download pretty fonts is definitely worth it. All of the caching done in your browser and in CDNs is conveniently ignored here.
I agree with you on aesthetic grounds. However, the caching situation has changed to protect user privacy. The performance calculus therefore changes too. Every first visit to a website requires CDN-hosted fonts to be downloaded regardless of whether some other site the user’s visited used the same font at the same URL. From 2010 to 2020, everyone experienced only occasional FOUT or FOIT from Google-hosted fonts. Now it’s a constant problem. There are mitigation strategies. Or, as the author rudely commands, one can go back to “web-safe” fonts (i.e. ones you think your users have installed in their OS). I think it’s a strategy worth considering for body copy on text-heavy websites, especially if most of your sessions are first-time visitors.
This entire article, full of hyperbole, overexaggerated (and unsubstantiated) claims, weird overdone emphasis, and eventually landing on an ad, reads so much like a generic marketing funnel it hurts.
There’s some merit here—software eng in general seems to have a problem with cargo-culting new tech, and frontend/JavaScript communities seem extra bad about it these days—but it’s pretty hard to process when it’s being commented on by off-brand Billy Mays screaming about how the sky is falling and nobody using React will have a job in the future, so you should buy his courses ASAP.
First of all, let’s remember that Github is a fully proprietary service. Using it to host the development of a free software makes no sense if you value freedom.
I’ve heard this argument too many times, and I grow increasingly tired of attempting to reason through it. How is this true… at all? GitHub lets me host practically unlimited code for free. If I value my software being free, this should be the only thing I am concerned with. GitHub is run by capitalists, shocker they’re doing a capitalism or two. There is literally no better way to spread your code to the world than uploading it to GitHub, partially because that’s where all the users are.
The bar for entry for GitHub is extremely low. I learned how to use GitHub’s desktop app almost a decade ago, far before I became comfortable with the git CLI. I’ve met too many people that are not technically adept yet still have a GitHub account and are capable of contributing to projects. I can’t say the same about GitLab, Codeberg, or SourceHut, even if I enjoy parts of those products more than GitHub.
By keeping your project on Github, you are encouraging new developers to sign up there, to create their own project there. Most importantly, you support the idea that all geeks/developers are somehow on Github, that it is a badge of pride to be there.
There’s over 100 million users. The evidence that all the geeks/developers are on GitHub has been weighed, and it is overwhelmingly in GitHub’s favor.
Good code is written when people are focused, thinking hard about a problem while having the time to grasp the big picture. Modern Github seems to be purposely built as a tool to avoid people thinking about what they do and discourage them from writing anything but a new JavaScript framework.
I think this beautifully illustrates the author’s perspective of those millions of users on GitHub: they’re writing bad code that doesn’t need to exist. Modern GitHub has almost exclusively devoted itself to streamlining the contribution process and encouraging communities to form around software projects. There are plenty of features I’m not a fan of, and some obvious changes I would make, but overall GitHub is incredibly easy to teach and use. I would love to say the same about any free alternative, but I can’t.
I avoided GitHub for a long time, for many of the arguments in the article. I eventually got an account about 10 years ago because I learned an important lesson: pick your battles. I don’t like the fact that GitHub is a proprietary dependency for a load of F/OSS projects. I don’t like that it distorts the F/OSS ecosystem by making CI free for some platforms and expensive for others. But when I moved projects there, I got more contributors and more useful contributions. It’s trivial for a user to submit a small fix to a project on GitHub without already having a deep engagement with the project. That is not true of anything else, in part because of network effects. I consider that more Free Software existing should be a primary goal of the Free Software movement and I’m willing to compromise and use proprietary tools to make this happen. At the same time, I’d encourage folks who have a problem with this to keep working on alternatives and come up with some way of funding them as a public service.
GitHub takes down code when requested. It is rather difficult to imagine Free Software properly destroying the system of copyrighted code when we are willing to let copyright complainants take down any code which threatens that system. I don’t think that forges ought to solve this problem entirely, but forges should not be complicit in the reaction against Free Software.
[O]verall GitHub is incredibly easy to teach and use. I would love to say the same about any free alternative, but I can’t.
I recommend that you try out GitLab. If nothing else, it will refine your opinion.
Yes, basically every public forge will take down code when (legally) requested. America has the DMCA process for this; I’m not particularly familiar with German law, but Codeberg, a very strong proponent of free software, also calls out in their TOS:
(4) Content that is deemed illegal in Germany (e.g. by violating copyright or privacy laws) will be taken offline and may lead to immediate account suspension.
Fighting the man and destroying copyright is a nice concept, but it’s a concept that needs to be pushed socially and legally, not just by ignoring the whole thing altogether.
Copyleft is only one approach to fighting copyright, and forges are only one approach to sharing code. It’s easy enough to imagine going beyond forges and sharing code in peer-to-peer ecosystems which do not respect takedown requests; I’m relatively confident that folks share exfiltrated corporate code from past “leaks” via Bittorrent, for example.
What are forges to do? Not accept DMCA requests? Free Software will continue to be incapable of taking down the copyright system, because it works within said system. If you want to change that system, political activism is going to do a lot more than GPL ever could.
I’ve tried GitLab, SourceHut, Gitea, and a few others, and while I enjoy different parts of those products, I couldn’t possibly call them “user-friendly” the same way I can GitHub. https://about.gitlab.com/ is solely an advertisement for their “DevSecOps” platform - this is, of course, really cool, but someone without the necessary background will not care about this. Even though a lot of this is just marketing, that marketing is important in painting an image for potential users. “The place for anyone from anywhere to build anything” speaks volumes while “DevSecOps Solution” silences the room.
I recommend that you try out GitLab. If nothing else, it will refine your opinion.
I don’t understand why GitLab gets all the love. Because as a user of their hosted service, it is really Just. The. Same. Git, pull requests, wiki, projects boards, discussions, CI.
If you want to host your own instance then yes, of course: you can’t do that with GitHub. But for the hosted version, I would say it is just as proprietary and locked-in as GitHub is.
That’s not the bar that was set, though. All I’m suggesting is that GitLab is a free alternative to GitHub, and that (because GitLab is designed to clone GitHub’s UX) GitLab is as “incredibly easy to teach and use” as GitHub. I’ve used GitLab instances in the past, and believe this to be a reasonable and met bar. Neighboring comments recommend Gitea, which also seems like a fine free alternative, and it might also meet the bar; I’m not personally experienced enough with Gitea to have an opinion.
The author gives two reasons why they think that their proposal will be opposed:
The first is the significant privacy cost to users of collecting and storing detailed activity traces. The second is the fact that access to this data must be restricted, which would make the project less open than most strive to be.
There are more! The third is the complication of all workflows; tools which previously worked offline now sporadically enter codepaths which can break and require additional capabilities to run. The fourth is the possibility of users sending garbage data simply because they can. The fifth is the massive sampling bias introduced in the second post by the fact that many distros will unconditionally disable this telemetry for all prebuilt Go packages. The sixth is that removing features based on percentage of users enjoying the feature will marginalize users who don’t use tools in ways approved by the authors, removing choice and flexibility from supposedly-open-source toolchains.
Not the most ridiculous proposal from the author, but clearly not fully baked. An enormous amount of words were spent explaining in technical detail how the author plans to violate their users’ privacies without overt irritation. Consider this gem from the third post:
The Go build cache is a critical part of the user experience, but we don’t know how well it works in practice.
How do they know that it’s critical, then? So much ego and so little understanding of people as individuals.
Finally, I hope it is clear that none of this is terribly specific to Go. Any open source project with more than a few users has the problem of understanding how the software is used and how well it’s working.
The author is explicitly telling us that they should not be trusted to publish Free Software.
The author doesn’t understand the problem, and intervention may be required in order to teach them a lesson.
A) Russ clearly understands that users can submit garbage data.
B) It’s pretty anti-social to be so opposed to telemetry that instead of merely opting out or boycotting a product you actively send junk data. It’s fine to take direct action against something that is harming you. For example, removing the speakers that play ads at gas station pumps in the US now is a positive good because those ads violate the social contract and provide no benefit to consumers who just want to pump their gas in peace. But this telemetry is meant to benefit Go as an open source tool, not line Google’s pockets. You can disagree about if it’s too intrusive if you want, but taking active measures against it is an uncalled for level of counter-aggression.
We must have different life experiences. Quoting from the Calvin and Hobbes strip published on August 23, 1995:
Calvin: I’m filling out a reader survey for Chewing magazine.
Calvin: See, they ask me how much money I spend on gum each week, so I wrote, “$500.” For my age, I put “43,” and when they asked what my favorite flavor is, I wrote “garlic/curry.”
Hobbes: This magazine should have some amusing ads soon.
Calvin: I love messing with data.
We don’t need to send data to an advertiser so that the advertiser can improve their particular implementation of a programming language. If this sort of telemetry is truly necessary, then let’s find or establish a reputable data steward who will serve the language’s community without preference to any one particular toolchain.
Twice you’ve used the phrase “anti-social”. We aren’t in the UK and your meme doesn’t work here. If you want to complain about a collective action being taken against a corporation, then find a better word, because the sheer existence of Google is damaging to the fabric of society.
Point (B) deserves to be addressed too. You seem to think that an advertiser only causes harm when they advertise. However, at Google’s size, we generally recognize that advertising causes harm merely by collecting structured data and correlating it.
If the problem is that Google exists at all, then get people in SF to throw eggs at the Google buses or bribe some Senators into letting the million and a half pending anti-trust lawsuits go through. I don’t see how hurting the telemetry data for Go has any connection whatsoever to the goal of a world without Google. All it does is harm people who would benefit from a better Go compiler/tools. You can hate Google all day, and you probably should. I deliberately have never applied to work at Google because I don’t believe in their corporate mission. However, none of that has anything to do with the issue at hand, which is adding telemetry to Go. If you can show that they’re going to secretly use the telemetry to send Big Mac ads to developers, then you can be mad at them, but you haven’t shown that.
I don’t see how hurting the telemetry data for Go has any connection whatsoever to the goal of a world without Google.
I’d argue that collecting data via ET Phone Home telemetry without explicit consent is a grievous breach of the Social Contract. While I myself am a non-belligerent person, you can bet that there are people who share my outlook about consent and say that one bad turn deserves another. Poisoning telemetry data is a tactic in line with the traditions of Luddism and sabotage. It’s throwing a monkey-wrench into the grinding gears of the machine. While I’m not gonna sit here and encourage it (I’m not stupid), I do understand the mindset.
“Rub some telemetry on it” is definitely in line with Google culture. Poisoning the stream won’t stop that.
Yeah, kids, be good and don’t poison the datastream. The ethical approach is to stop using Go if they add opt-out telemetry to their toolchain.
A marketing company that calls your home phone number at dinner time is scummy, but you don’t get to yell at the person on the other end of the phone because they’re just taking the best low paying job they could find. Instead you have to get the https://en.wikipedia.org/wiki/National_Do_Not_Call_Registry enacted and get funding to enforce the law.
A marketing company that calls your home phone number at dinner time is scummy, but you don’t get to yell at the person on the other end of the phone because they’re just taking the best low paying job they could find.
Legally, I sure do. If somebody invades my personal space, I’m well within my right to yell at them for it, even though they’re just some poor schlub doing a job. Morally / ethically, it’s a different story. The compassionate person will refrain from yelling at the aforementioned poor schlub. Even then, it’s worth noting that the most saintly of us has bad days and doesn’t always do the moral / compassionate thing. My point? People who script-read for telemarketing are going to be verbally abused by people whose right to be let alone has been violated, and we cannot pretend otherwise.
At what point does “I’m just doing my job” go from being a reason for compassion to an excuse? I don’t have the answer to that, but I do try to be a compassionate person.
These two scenarios aren’t equivalent anyway. The telemarketing equivalent of poisoning the telemetry stream isn’t yelling at some unfortunate script-reader. The equivalent is lying to the marketing company, wasting its resources, or possibly defrauding it. Think: trolling the Jehovah’s Witnesses and Mormons who invite themselves to your door to share their religion. This can be lots of fun, and it wastes the other person’s cycles while simultaneously preventing them from harassing somebody else.
In 2000, someone called me up from one of those multi-level marketing scams, after a friend referred them to me. I spent three glorious hours on the phone on a Sunday night trolling some scammer. It was better than whatever was on TV at the time, no doubt.
I’m sure there are people who do this sort of thing to telemarketers. It isn’t abuse, but it does cause them to waste cycles while getting paid and possibly entertained, yet preventing them from doing active harm to others.
For lack of a better term, I’ll call this array of tactics “psychological warfare countermeasures”, to contrast them with electronic warfare countermeasures.
Of course, poisoning an opt-out telemetry stream is an electronic warfare countermeasure: the equivalent of deploying chaff to confuse enemy radar.
And I’ll end with the “Kids, don’t poison a telemetry stream” disclaimer I used yesterday.
No. You aren’t correctly reading what I wrote. I’m saying that somebody will send them junk data in the future, and I anticipate that nothing short of that future situation will make them understand their mistake.
The plain reading of “intervention may be required in order to teach them a lesson” is that you’re rallying people to intervene, i.e. spam the system. If that’s not what you mean, then I’m sorry for misinterpreting you, but alyx apparently has the same misinterpretation.
Maybe my final comment in the GitHub discussion will make it clear to you and @alyx that, although I am thinking adversarially, I am still a whitehat in this discussion. I made similar comments during Audacity’s telemetry proposal, and did not develop any software to attack Muse Group.
Perhaps what irritates you is that I find the whole situation amusing. I don’t really respect the author’s understanding of society, and I think that often the correct thing to do in case of disaster is to learn a lesson. The author may not learn a lesson until somebody interferes with their plans, and I expect that the fallout will be hilarious, but I am not necessarily that somebody.
Nor, frankly, do I have thousands of spare USD/mo to waste on cloud instances just for the purpose of distracting Google. What do I look like, the government?
The sixth is that removing features based on percentage of users enjoying the feature will marginalize users who don’t use tools in ways approved by the authors, removing choice and flexibility from supposedly-open-source toolchains.
This is good. The toolchains are still open source; that is a fact. This will:
Make the code base more maintainable, thus over time making it less buggy
Focus the thinking of those involved on use cases that matter the most, making more people happier
Make the system overall simpler, and easier to understand for all users.
The world cannot be built around power users/tech elite. That would be bad.
The fifth is the massive sampling bias introduced in the second post by the fact that many distros will unconditionally disable this telemetry for all prebuilt Go packages
The fourth is the possibility of users sending garbage data simply because they can.
The data can be interpreted with its limitations in mind. That seems better than simply having no data whatsoever about the world.
So much ego and so little understanding of people as individuals.
I don’t see how this follows from Russ’ belief the build cache is critical to Go’s UX.
Focus the thinking of those involved on use cases that matter the most, making more people happier
I don’t think it’s valid to conclude that the use cases that occur the most are the use cases that matter the most. Blind and vision-impaired people make up a small portion of all computer users; do you think it would be valid to ignore their needs in favor of making experiences better for users without vision impairments?
True, I agree with you. But I think it’s a safe default to reject supporting use cases with few users. There are some concerns in specific cases which can reasonably overrule this principle, but I think the bar should be high for spending effort on rare uses.
Every project has limited resources to spend. This by definition means that some work will not be done. If you have two bugs with the same severity, it might be useful to know how many users each is impacting and factor that into decisions about what to work on and in what order.
Vision impaired users are an interesting comparison. There are lots of vision impaired people in the world. A neighbor of mine happens to work at an association for the blind/vision impaired. Many people wear glasses. Many people become blind or near blind in old age. All of us are vision impaired when it is dark, or when we use our eyes to look at something else while trying to use a computer simultaneously. Even if the absolute percentage of blind people is not that high, it ends up being a lot of people, especially factoring in situational blindness.
One can imagine a lot of other conditions that might also be good to accommodate for, but which are less common and more difficult to adapt to. In the end, I think the ADA’s “reasonable accommodation” standard is broadly correct. There has to be a balance to try to include people when the costs are bearable without making things so expensive that it’s not possible for the majority to use it either.
Probably a good thing that there are many alternatives for getting on the Fediverse. Some of those projects will of course wither with time, but hopefully there will still be healthy competition even so.
I want to politely disagree. I mean, yes the CCC is a political organization. (I’d even go so far and argue that every organization which declares itself apolitical is in favor of the status quo and thus, politically speaking conservative. But let’s not get into that here. Let’s talk about which kinds of politics belong here or have belonged here.)
In short, the CCC in Germany is somewhat comparable to the EFF in the USA, and I’ve seen a significant variety of EFF blog posts on lobste.rs. This search here https://lobste.rs/domains/eff.org turns up plenty of political stuff about governmental control of end-to-end encryption, privacy laws, and consumer rights, and most of that was upvoted to more than 2 digits.
Maybe someone will search through the moderation log and find more removed submissions than those I find as not-removed, but I’d be surprised..
There’s no apolitical way to judge the politicalness of a lobsters submission. Every possible submission is hateful to some ideology, the only important question is whether that ideology is one that the lobsters moderators care about (either positively or negatively).
While I see where you are coming from calling them political, I think it would be hard to enforce that in a sane way. They have their ethical guides, which are very vague, along the lines of “make use of and share information, without borders” and “computers can be used for good”. Meanwhile the Go website featured a BLM banner on every page, and every other open source project mentions the war in Ukraine. Also, Codes of Conduct tend to be political in nature, and pretty much every discussion is too. You can go further with censorship resistance, discussing Amazon, Google, etc. On top of that, the majority of discussions on AI and automation are political. Moreover, the tags practices, philosophy, culture, person, and especially law have a huge likelihood of being political in one way or another. The same is true for many open source projects: everything GNU-related, OpenBSD, etc. have a political part to them.
I think when people talk about politics they mostly mean party/partisan politics, and I think that’s not what the CCC is. They neither tell you how to vote nor are they very much tied to certain parties, especially given that even the more political talks already featured a broad spectrum of speakers, and that this has also been criticized by members of the CCC. And then, the Club is not the Congress.
Given that they are often enough invited by politicians of all parties to give their expert opinions on topics, and that at least some see themselves more as a service, putting their political opinions aside in such situations, I think that if you remove CCC-related news for this reason you really also have to remove any Go project links for their BLM banners, etc. But I am happy to not draw the lines there; I just hope they make sense. Usually communities that “ban” political topics are really banning topics that are pretty much set up to cause political discussions, which news about a famous event in computer science circles not happening isn’t.
I’m somewhat unhappy that there is a policy of banning everything from certain sources, disregarding the actual content, while I also understand it, because it’s work and effort, time which might be better spent in another way.
As soon as you talk about individuals, groups of people, or organizations, and how they work or how their rules are decided, there’s politics involved. Like I said, many tags would have to be removed for politics to be gone. And even then, it will creep in one way or another.
What are the ethical principles of hacking - motivation and limits
Access to computers - and anything which might teach you something about the way the world really works - should be unlimited and total. Always yield to the Hands-On Imperative!
All information should be free.
Mistrust authority - promote decentralization.
Hackers should be judged by their acting, not bogus criteria such as degrees, age, race, or position.
You can create art and beauty on a computer.
Computers can change your life for the better.
Don’t litter other people’s data.
Make public data available, protect private data.
The hacker ethics were first written down by Steven Levy in his book “Hackers: Heroes of the Computer Revolution” (ISBN 0-440-13405-6, 1984). He mentions the Tech Model Railroad Club at MIT, whose members constructed a supersystem of relays and switches - some of them became core members of the AI lab. They used the term “hack” for an “elaborate … prank” with “serious respect implied”. The hacker ethics evolved in a time when computers were scarce, and the people sharing a machine had to think about rules of cooperation.
The last two points are additions by the CCC from the 80s. After some more or less crazy individuals from the hacker scene had the idea of offering their “hacker know-how” to the KGB, there were intense discussions; three-letter agencies have a somewhat different opinion about freedom of information. Intrusions into outside systems were also considered more and more counterproductive.
To protect the privacy of the individual and to strengthen the freedom of information which concerns the public, the last point was added.
Hacker ethics are, like the rest of the world, in constant discussion and development. The above rules should be considered guidelines and a basis for discussion.
I realize the odds we catch a flame war are pretty high, but on the balance this seems topical - it’s news about a business’s policies and practices, but it’s not like it’s about them updating their pricing plans or something. We’ve had a lot of discussions about centralized vs. distributed systems, censorship resistant systems, and generally how the systems we build influence the cultures we have. “Like every human culture, this is done by lurching from crisis to crisis trying to decide what’s acceptable and not, what’s individual or systemic.” But uh… that doesn’t mean we have to have a flame war about it. Please try to be kind to each other and to not indulge in hyperbole.
I love this site, but the seemingly random rules around “business news” are annoying. Everything that doesn’t relate to computing gets deleted, yet this post, and every Apple announcement of new things to buy, is okay. Can the rules be made clearer, or abandoned if they don’t work?
I can read about the kiwi farms (fsck them!) elsewhere on the internet. Do we really need that drama here? If we do, where do we draw the line? There are many, many things in the world worse than kiwi farms, but they are equally off topic. Let’s keep it that way.
I should’ve put that better than “not pricing or something”. I think this news is a useful example of what happens in the real world when we’re talking about technical projects like immutable decentralized publishing, scaling, running your own servers, privacy, censorship resistance, and the role of the web in society at large. The discussion is rantier and less technical than usual, but I don’t think it’s wasted. The discussion is hard and often fractious because we developers have incredible power. It’s our work that built the systems and interconnections that led to this situation. So this isn’t technical, but it’s very much the results, and we need to understand them to inform the next systems we build.
You can also see the squishiness of the rules positively. Do I want business news about startup X dropping feature Z? No, I generally don’t. Do I want information about a precedent-setting case in which Cloudflare, a company serving ~30% of network traffic and sometimes called the guys that protect DDoS groups (among other things), exercised its right to refuse service? Yeah, I do, even if I’m more interested in coming updates about what this means for future cases like this. And I’m interested in discussions about the M1 chip from people on Lobsters, something that changes the mobile CPU market dramatically in terms of possible power, heat and performance, even though I don’t intend to buy anything Apple for now. It’s just not that black & white.
You can get all of that on the orange site, where all of what you explained is on topic. I think KF is garbage, but again, this site is about computing. Learning that a publicly traded company in the US has one customer less is not about computing. There is nothing technically interesting here, and technically interesting material is, according to the description, what this site is all about.
Regarding Apple announcements: minute-by-minute live-blogging of whatever iPad or 3D emoji or TV series gets announced is also very much not about computing.
@calvin often does a very nice job covering apple events. Of which some are more about computing than others, in my estimation. But here are a couple of recent-ish examples:
I find them valuable and hope they keep getting posted here.
I don’t find it valuable, but I could choose to ignore it. What annoys me is that any other article that is “business news” gets deleted, yet more Apple TV+ shows or the colors of the new iPhones are apparently not business news and belong on the front page of a site that literally says this about itself:
Lobsters is focused pretty narrowly on computing; tags like art don’t imply every piece of art is on-topic. Some rules of thumb for great stories to submit: Will this improve the reader’s next program? Will it deepen their understanding of their last program? Will it be more interesting in five or ten years?
Some things that are off-topic here but popular on larger, similar sites: entrepreneurship, management, news about companies that employ a lot of programmers, investing, world events, anthropology, self-help, personal productivity systems, last-resort customer service requests via public shaming, “I wanted to see what this site’s amazing users think about this off-topic thing”, and defining the single morally correct economic and political system for the entire world when we can’t even settle tabs vs. spaces.
He’s probably referring to calvin’s threads on Apple’s big announcement days. They’re a tradition, where usually we don’t give room for product announcements like that. It’s a fair criticism that this implies our rules aren’t consistent.
Fair, but I think the rules are in the end just there to reflect the intention of the community, and because of that I think it is fine when the community decides it wants to break a rule and make these announcements an exception. (Calvin’s summary could be seen as its own post and is probably the thing that makes them worthwhile.) Note: I don’t actually buy or like Apple stuff, so it’s not like I’m in favor of that exception.
But, we didn’t discuss that post (because no one submitted it?) which makes this discussion all the more off topic, in my opinion: either we’re interested in the discussion AROUND the ban (which is on topic and the other post is better) or we’re only interested in the specific end result (which is obviously wildly off topic).
“Infrastructure business announces policy of trying to act like a common carrier, even though it isn’t one” is a little interesting, and if it wasn’t discussed here, that’s a loss but a small one. “Policy announcement immediately runs into trouble” is interesting and contextually makes the first one more interesting.
it’s news about a business’s policies and practices, but it’s not like it’s about them updating their pricing plans or something.
Business news is generally off-topic, as I recall for many years. Blog posts announcing a decision to stop service to a customer–especially without accompanying coverage/linkage of the facts of the case, the technical issues at play, or even numbers making an economic argument for why it makes sense–would seem to me to be similarly off-topic.
If the standard of worthwhile posting and justification has fallen to “Fuck $thing”, this site’s future as a forum for discussion isn’t significantly better than 4chan or reddit.
A major company that touches a LOT of the spaces many of the readers here touch has just made a very large decision regarding a cesspool that’s also impacted, directly or indirectly, many of the readers here.
It seems pretty obvious that this is a bit outside the usual scope but, frankly, this is one of those things a lot of this community do care strongly about and it’s valuable for them to be made aware of.
KiwiFarms presents a serious existential risk for Lobsters because of the threats to our community. @alyx did not give a list of community members who have been impacted; hopefully it is obvious that giving such a list would be tantamount to surrender.
If we don’t care strongly about each other, then we don’t have a community.
We manifestly don’t care strongly about each other, as seen in the shitflinging any time politics or meaningful policy differences come up. Post as a Palantir employee here and see how much strong caring for fellow Lobsters there is!
(Would I prefer that we be a more united community? Of course. Do I think we were some number of years ago? Yes. Do I think that we can do that today? No, not really, because reasons.)
I disagree with your claim of KF being some special “existential risk” to Lobsters, especially without proof.
And again, I’m not saying anything in support of KF. As far as I know, they’re unrepentant assholes. I am saying that we should be sticking to on-topic material instead of what amounts to gloating.
An article on how to make a botnet to go after Kiwifarms, or on the legal issues encountered in taking them to court, or on scraping them for automatic doxing notification would all be great submissions here! I would upvote them!
This, though, is just an inactionable announcement that Cloudflare kicked off a customer that some people here may dislike.
Post as a Palantir employee here and see how much strong caring for fellow Lobsters there is!
I can’t speak to that, but I’ve posted here as a Microsoft employee and, although there are some very anti-Microsoft folks here, I’ve never felt personally attacked. That doesn’t mean that everyone agrees with me[1], but it does mean that I can almost always[2] expect that people disagree with me politely and respectfully, with well-reasoned arguments.
[1] Weird, since I’m always right…
[2] Everyone has bad days where they’re unusually cranky; in general this community seems pretty good at pointing out when someone is being uncharacteristically mean and asking them to change, and kicking them out if they don’t.
I realize the odds we catch a flame war are pretty high, but on the balance this seems topical - it’s news about a business’s policies and practices, but it’s not like it’s about them updating their pricing plans or something.
Fuck Kiwifarms.
This is nontopical.
We’ve had a lot of discussions about centralized vs. distributed systems, censorship resistant systems, and generally how the systems we build influence the cultures we have.
Then this would be off-topic, and the articles that use the banning of kiwifarms as a springboard to talk about these things would be on topic, as has been the precedent for a while.
That’s a strong argument that I made the wrong call here. One of the reasons I kept this up was that I figured we’d see a stream of responses over the next week or so about why DDOS protection works this way, alternate systems that would’ve had different failure modes, etc. Keeping the CF post made for a single natural place to merge those stories so users can discuss or hide conveniently. I commented early trying to point the discussion towards the technical aspects, and clearly did not succeed.
Would you say the distinction you’re making is that Lobsters primarily looks at things through a technical lens to discuss implementation? It isn’t well reflected on /about, and hopefully we can improve that writing and public understanding.
Basically, the problem with V now (and the reason why I think we should ban it and other similar projects like Urbit from lobsters) is that V boosters have a long enough history of shameless lies which require increasing levels of effort to uncover that I have no confidence that any actually interesting V announcement is real, or that any actually interesting V demo isn’t faked. Like, you could give me the binary to run myself and I’d believe that there was some kind of trick to it rather than the demo being legit unless I had gone over it with a decompiler and a fine toothed comb first.
Also kind of vexing seeing @volt_dev getting away with increasing the V promotional posts after a post recently got popular that showed that many key V claims were obvious lies.
Sorry, not on point but Urbit is horrible. The community is extremely toxic to non-cishet dudes and it’s infuriating how many angry DMs I got for putting my pronouns in my username.
This is an aside at best, but how are we not yet past the point where we let people write “change the world” when what they mean is “sell something” or just “program the computer”?
Maybe because letting people choose their own words is part of the social norm. You’re welcome to complain about it, but “let” implies we have some say in the words they choose.
use an ssh tunnel to securely set it up, and afterwards, enable ssl certs. I’d hope that the detected hostname and http protocol aren’t too hard to unwind.
caprover allows for setting the initial default password from the command line. Following this is the most promising, I think.
watch the logs as you set it up, filter out your own IP, maybe get a heads up, at least
do it locally, then copy the files and database to the server
The problem here is the audience who’s setting up these instances.
It is uh, almost definitely not going to be people who are familiar with any of those things.
No, the actual problem is that someone misled them into thinking that installing WP yourself when you’re not familiar with any of those things is an acceptable state of affairs.
People see that there’s a “free” way to get it, and rather than signing up for some managed plan that will actually serve them better decide that it’s “easy enough” to just follow the steps in a readme… but over time they will almost certainly tend to a 100% chance of breaking or getting hacked, as there is no actual plan to maintain the site.
Wordpress has an automatic update function AFAIK, so I’m not sure the risk is as high as you think it is. The real problem comes when people install shitty unmaintained add-ons.
I agree with andyc’s summary but part of the impression I get is that there’s some strong correlations being pushed by the author without looking at the deeper “why?” parts of these correlations.
A single-binary site using SQLite is probably going to be easier to deploy/maintain, yes. But why? Because there are most likely fewer moving parts, period. As we decrease the number of moving parts, we’re simplifying maintainability and (probably) deployability, but we’re also sacrificing something.
Whether that’s sacrificing what can be done live versus what needs a deploy, sacrificing flexibility, etc., we’re not just getting easier deploys and less maintenance for free by rebuilding software as single-binary deployables powered by SQLite.
Every time you see an open-source rewrite of a system from a Google paper, you should keep in mind that Google wouldn’t have published the system unless it had already been replaced by something better.
Spanner, Zanzibar and Borg, among others, are proof that this isn’t true. gRPC is even a little different, in that they were still adopting it internally as it was released publicly.
Systems are going to change over time, so in the long term there will be outdated papers, but that is simply because papers are a snapshot of knowledge at the time of authorship.
Even if they have, so what?
If the concept is one that (when implemented) works well, and there isn’t an obviously better alternative publicly available, who cares whether or not it’s not pulled straight from Google’s latest stack?
I have played with Loki and think it’s a pretty good model. For small use-cases you can totally just use the filesystem storage option. You only have to upgrade to an object store when your scale requires it. It does require some configuration on the ingestion side to specify the fields you are going to index on but that doesn’t seem that big of a problem to me.
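For reference, the filesystem mode being discussed is just a storage choice in the config. A minimal single-binary sketch, loosely based on the local-storage example in Grafana’s Loki documentation (paths, port, and schema date are illustrative; check the docs for your Loki version):

auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /var/lib/loki
  storage:
    filesystem:
      chunks_directory: /var/lib/loki/chunks   # chunk files accumulate here
      rules_directory: /var/lib/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem   # swap for s3/gcs when scale demands it
      schema: v11
      index:
        prefix: index_
        period: 24h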
It may have been me configuring it poorly (probably), but my experience w/ Loki in a small scale setting has been that it will do terrible, terrible things to your inodes when running Loki using filesystem storage.
Just something to look out for. Besides the “Oops! All inodes!” issue, Loki+Grafana is a pretty nice setup.
I have not run into that issue in my setup. It may be a result of the amount of logs I’m pulling in, which is actually quite small, or something else to do with my setup.
It also has to do with the file system you are using, so it might partly be about using the right tool for the job. But it would certainly make sense to structure them in a better way, regardless.
Does anyone remember the last time one of these “moving off of GitHub” posts had some new complaints?
I feel like every month or two, there’s someone new leaving GitHub, for the same reasons:
git-send-email is better than PRs (for their workflow)
You say that like it’s a bad thing. :) If lots of people feel the same way we’ll get a lot of similar blog posts. Concerns about originality should properly be directed to submitters or moderators imho.
Yes, it’s become pedantic to make a blog post about it. Almost feels like validation seeking, worried they made the wrong decision.
IMO if you want to “vote with your wallet”* and enact change, then making a blog post about it is almost as important as the action itself - GitHub’s #1 feature is it’s network effect, and the best way to move that feature to e.g. SourceHut is to convince other people to switch from GitHub to SourceHut.
And, the best way to convince other people to switch is to convince them that other people are switching, and that they will gain some network effect (and social credit) if they switch too.
*Not necessarily endorsing this ideology, but it’s a super common belief and worth addressing.
Talking about doing something and showing that you’re actually doing it are two different things though. Go ahead and make a blog post about it, tweet it or share it on Facebook, I don’t care so much about that. I would much rather see new repositories linked to Lobsters being hosted on SourceHut to show off its viability. I believe influencing the public through action, not words is much more effective.
How so? I think that the author is decent at explaining that they considered several factors on a per-repository basis, only migrating some repositories.
Yes, but given the context of these posts being made quite often it’s starting to feel like an echo chamber. Yes, we are all aware of the issues with GitHub being owned by Microsoft and their latest ventures into Copilot ethics. I’ve seen so many of these posts on both Lobsters and Hacker News. Honestly, I’d expect as much from the latter, but I was under the assumption that Lobsters wouldn’t be as politically charged.
I agree - it at least makes you consider/think about the choices out there. While I think sourcehut has grown a lot in recent years I don’t like the layout a ton personally. It would be much different if the post ended with “now join me and leave Github” or something
I don’t think it’s necessarily pedantic. This blog post has it explicitly stating, “why? because it’s my blog, that’s why.”. No validation seeking, just a simple statement, in my opinion.
Yes, but pedantic on Lobsters. Everyone is entitled to their own personal opinions and choices, but when you’re echoing the masses is it really noteworthy enough to make the front page?
Curious how you determined that this is somehow the opinion of “the masses”.
I enjoyed the article because I would never had know about this new repos host otherwise, and his reasoning was interesting. Perhaps there’s a mass of articles about SourceHut I’ve simply missed.
Sure! Here’s a sampling of 3 articles I found when searching Lobsters for “sourcehut”
https://blog.edwardloveall.com/lets-make-sure-github-doesnt-become-the-only-option
https://ersei.net/en/blog/bye-bye-github
https://ploum.net/2023-02-22-leaving-github.html
Thank you for these. I’ll concede that “masses” is subjective.
I’d say that a “paranoid” (read: healthy) skepticism of GitHub is widespread in this community.
Here are 16 submissions from SourceHut (domain: sr.ht):
https://lobste.rs/domains/sr.ht
(github.com has 6480 submissions, so it’s still a ways to go!)
Thank you. So these are links to stuff hosted there, not so much articles discussing SourceHut. I guess when I glance at the article headings I don’t noted much the linked URL (not that sr.ht would have meant anything to me before today. But this diversity is encouraging.
Yeah well.. the creator/proprietor of SourceHut managed to ruffle enough feathers to get themselves banned.
But there’s still some stuff submitted here that’s about it or mentions it. Interestingly searching for “SourceHut” turns up a number of submissions along the theme of “get your ass off GitHub, stat!”:
https://lobste.rs/search?q=sourcehut&what=stories&order=newest
edit lol @metadaemon already posted these: https://lobste.rs/s/gs5wp3/i_m_moving_my_projects_off_github_2022#c_dyvrzb
Thank you for this! I guess I just get used to seeing GH mentioned all the time I don’t realize what else is afoot.
I guess this is part of the problem. GitHub is the code hosting solution. Other forges struggle for mindshare.
“Moving off of GitHub” post bingo board when?
We don’t have enough boxes for a bingo board yet; there’s gotta be more than 5 things first!
The posting will continue until you actually listen and leave GitHub too.
You’ll get better results convincing some billionaire that GH is full of SJWs banning conservatives and getting them to buy it.
I’m still at a loss why anyone would knowingly use the Chrome browser. It was created with exactly one purpose: To help Google track you better. Oh wait, it wasn’t Google wanting to “give something back” or “donate something for free out of the kindness of their hearts”? Nope. It was created as a response to browsers like Firefox and Safari that were slowly improving their privacy settings, and thus reducing Google’s ability to violate your privacy for a fee.
And if you’re curious, Google didn’t create and give away fonts out of the kindness of their hearts, either. If you’re using Google fonts, you aren’t anonymous. Anywhere. Ever. Private mode via a VPN? Google knows who you are. Suckers. Seriously: how TF do you think they do perfect fingerprinting when everyone is using only a few browsers on a relatively small number of hardware devices?
TLDR - Google dropped the “don’t be evil” slogan for a reason.
To be fair, they also wanted their web apps to run better. They went with Google Chrome rather than making Gmail and Docs into desktop apps. If the owner of the platform I made my apps on were a direct competitor (Microsoft Office vs Google Docs), I wouldn’t be happy, especially when that competitor’s platform sucks. Now that Chrome holds the majority of market share, Google can rest easy knowing that their stuff runs how they want for most users. Chrome also pushed the envelope for browser features they directly wanted to use in their apps.
The tracking and privacy thing is a LOT more pronounced now than it was in 2008 when Chrome came out. That’s definitely an issue that’s relevant today, but you can’t really pretend it was the sole driving force of the original release of Google Chrome.
Note: I don’t use Chrome.
I knew that Google was building Chrome for the purpose of tracking back when it was still in development, based on private comments from friends at Google. I don’t know if that was 2008, but it was somewhere in that time period. Yes, they needed a better browser experience to support some of their product goals, but Google’s overwhelmingly-critical product is tracking users, and protecting that one cash cow is important enough to give away gmail and browsers and fonts and phone OSs (and a thousand other things) for free.
Google loses money on pretty much everything, except advertising. And despite whatever the execs say in public, they’re actually quite content with that situation, because the advertising dollars are insane.
“If you’re not paying for the product, then you are the product.”
They could have done that by funding development in Firefox.
It would have been hard to work within an existing technical framework, especially considering that Firefox in 2008 or whatever was probably saddled with more tech debt than it is today, but it’d certainly be an option.
You can’t strong-arm the web into adopting the features you want by simply funding or contributing to Firefox.
And it’s not clear to me that Google would’ve been able to get Mozilla to take the necessary steps, such as killing XUL (which Mozilla eventually did many many years later, to compete with Chrome). And sandboxing each tab into its own process is probably also the kind of major rework that’s incredibly hard to pull off when you’re just an outsider contributing code with no say in the project management.
I get why Google wanted their own browser. I think they did a lot of good work to push performance and security forwards, plus some more shady work in dictating the web standards, in ways that would’ve been really hard if they didn’t have their own browser.
I still feel a bit bitter about the retirement of XUL. Back in the mid-2000s you could get a native-looking UI running with advanced controls within days. I haven’t seen anything that comes close to that in speed of development so far, except maybe VisualBasic?
They essentially do (and did) fund the development of Firefox.
Yeah, which I’m sure very conveniently prevents them from attracting too much anti-trust attention, the same way that Intel or NVidia don’t just buy AMD. But I doubt they pay any developers directly to contribute to Firefox, the way that for example AMD contributes to Mesa, Valve contributes to WINE, Apple contributes to LLVM, etc.
There’s a difference between not crushing something because its continued existence is useful to you, and actually contributing to it.
On one hand, you’re totally right. Throwing cash at keeping other browsers alive keeps their ass away from the anti-trust party.
On the other hand, again, between 75% and 95% of [Mozilla’s] entire yearly budget comes from Google. At that volume of financial contributions, I don’t think it matters that they’re not paying Google employees to contribute to Firefox: they’re literally bankrolling the entire organization around Firefox, and by extension basically its paid developers.
That’s probably fine if they don’t have a say in technical or business decisions.
They pretty much were back then. At the time Google weren’t happy with the uptake of Firefox vs IE, despite promoting FF on their own platforms, and wanted to pursue the option of their own browser. Mozilla weren’t particularly well known for being accepting of large contributions or changes to their codebase from third parties. There was no real embedding story either which prevented Google from going with Gecko (the Firefox browser engine) as the base instead of WebKit.
And yet, Google Chrome was nicknamed “Big Browser” from the start.
1. Chrome was the first browser to sandbox Flash and put Java behind a “click to play”. This was an extreme game changer for security.
2. Expanding on that, Chrome was the first browser to build sandboxing into the product from day 1. This, too, was an extreme game changer for security.
Between (1) and (2) the threat landscape changed radically. We went from PoisonIvy and BlackHole exploits absolutely running rampant with 0- or 1-click code execution to next to nothing in a few years - the browser exploit market, in that form, literally died because of Chrome.
Continuing on:
3. Firefox had tons of annoying bugs that Chrome didn’t. “Firefox is already running” - remember that? Chrome had bugs, but ultimately crashes were far less problematic, only impacting a tab. Back then that really mattered.
4. Chrome integrates with GSuite extremely well. Context Aware Access and browser management are why every company is moving to enforce use of Chrome - the security wins you get from CAA are incredible; it’s like jumping 10 years into the future of security just by choosing a different browser.
Whether that’s true or not, the reality is that for most of Chrome’s lifetime, certainly at least until very recently, there were very very few meaningful privacy issues (read: none, depending on your point of view) with the browser. Almost everything people talked about online was just a red herring - people would talk about how Chrome would send out LLMNR traffic like it was some horrible thing and not just a mitigation against attacks, or they’d complain about the numerous ways Chrome might talk to Google that could just be disabled and were often even part of the prompts during installation.
I don’t see why it’s hard to believe that Google wanted more control over the development of the major browser: they are a “web” company, controlling web standards is a massive competitive edge, and they get to save hundreds of millions of dollars by not paying Firefox for google.com to be the homepage.
https://www.google.com/chrome/privacy/whitepaper.html
Chrome has been publishing whitepapers around its features for a long time. I don’t keep up anymore and things may have changed in the last few years but there was really nothing nearly as serious as what people were saying.
Just to be clear, what you’re talking about is, I assume, the fact that if a website loads content (such as fonts) from a CDN then your browser makes requests to that CDN. Google discusses this here, although I’m not sure why this behavior is surprising:
https://developers.google.com/fonts/faq#what_does_using_the_google_fonts_api_mean_for_the_privacy_of_my_users
https://developers.google.com/fonts/faq/privacy
Is there some other aspect of Google Fonts that you’re referring to? Because I’ve really lost my ability to give statements like “Google Fonts tracks you” any benefit of the doubt after a decade of people misunderstanding things like “yes, a CDN can see your IP address”.
Who says they do perfect fingerprinting? Also since when are there a relatively small number of hardware devices? An IP, useragent, and basic device fingerprinting (“how big is the window”) is plenty to identify a lot of people.
Infosec people love Chrome for architectural reasons. Which just goes to show that privacy and security are separate concerns that only marginally overlap.
I agree, Privacy is totally separate from Security. That said, I do not believe Chrome has represented a privacy concern since its inception - at least until recently, which I only say because I no longer care to follow such things.
*$font,third-party
Slap this bad boy in your uBlock Origin config & you won’t be downloading any fonts from third-party sites.
…But do be warned that laggards are still using icon fonts (even on contemporary sites!) despite it not being the best practice for icons for over a decade.
Out of interest, what is current best practice? I have stopped following most of the frontend stuff over a decade ago.
https://css-tricks.com/svg-symbol-good-choice-icons/ (which has the links; note that it’s from 2014, which shows how vintage icon fonts are compared to using vector graphic symbols via SVG)
I use it because (among other reasons) I want protection against viruses more than against Google. The last thing I heard about Firefox security (I don’t follow it actively) was Patrick Walton commenting that they have made significant improvements but still have never caught up to Chrome on security. I want Chrome’s security for lunch, there is no free lunch, and I’m okay with paying for that lunch in ad-targeting data. With JavaScript disabled by default (for security), I never even see many of the ads that they might spend that data on targeting.
Your attitude is a good one: You’re conscious about what they’re probably trying to do with data about you, and you accept the trade-off for the services provided, and you do so consciously.
If Google were above-board in what they’re doing, I’d be 100% thumbs up for their behavior. But they’re not.
A major financial reason for Chrome is to save the cost of paying browser vendors to make Google the default search engine. Google pays Apple like $10 billion a year for this purpose on Safari. This is why Microsoft aggressively promoting Edge is such a threat to Google - fewer users using Google.
How do they track using their fonts?
When I looked at this many years ago, I obviously had the same exact question, but I don’t actually have an answer. The same browser version (stock) with the same config (stock) on the same OS and OS version (stock) on the same hardware (stock) with the “same” Google fonts apparently generates a different Google fingerprint, apparently even in private browsing mode through the same VPN IP. Drop the Google fonts, and the fingerprint is apparently identical. It’s been a decade since I looked at any of this, so my memory is pretty fuzzy at this point. And of course Google doesn’t provide any hints as to how they are unveiling users’ identities; this is a super closely guarded trade secret and no one I knew at Google even gave me any hints. (My guess is that they are all completely unaware of how Google does it, since the easiest way to maintain secrets is to not tell all of your employees what the secret is.)
The Google fingerprinting does a render on a hidden canvas (including rendering text using fonts) and then sends Google a hash of that render. Somehow the use of Google fonts (note: specifically when downloaded from Google, which is what most web sites do) appears to give different users their own relatively unique hash. If I had to guess (WAG WARNING!!!), I’d suggest that at least one of the most widely distributed fonts is altered ever-so-imperceptibly per download – but nothing you can see unless you render large and compare every pixel (which is what their fingerprint algo is doing). Fonts get cached for a year, so if (!!!) this is their approach, they basically get a unique ID that lasts for the term of one year, per human being on the planet.
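To make the mechanism being described concrete, here is a minimal sketch of generic canvas fingerprinting in TypeScript. To be clear, this is a hypothetical illustration of the technique, not anything known about Google’s actual code; the sample text and dimensions are arbitrary.

// Sketch: hash a hidden canvas render. Any sub-pixel difference in glyph
// outlines, hinting, or antialiasing changes the resulting digest.
async function canvasFontFingerprint(fontFamily: string): Promise<string> {
  const canvas = document.createElement("canvas"); // never attached to the DOM
  canvas.width = 800;
  canvas.height = 60;
  const ctx = canvas.getContext("2d")!;

  ctx.textBaseline = "top";
  ctx.font = `32px "${fontFamily}"`;
  ctx.fillText("How razorback-jumping frogs can level six piqued gymnasts!", 2, 2);

  // Hash the raw pixel buffer; two renders only collide if every pixel matches.
  const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  const digest = await crypto.subtle.digest("SHA-256", pixels);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

If the font bytes differed imperceptibly per download, as speculated above, this digest would differ per user even on identical browser, OS, and hardware.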
If you examine their legalese, you’ll see that they carefully carve out this possible exception. For example: “The Google Fonts API is designed to limit the collection, storage, and use of end-user data to what is needed to serve fonts efficiently.” Right. They don’t need to collect or store anything from the Fonts API. Because your browser would be doing the work for them. Similarly, “requests for fonts are separate from and do not contain any credentials you send to google.com while using other Google services that are authenticated, such as Gmail.” So they went out of their way to provide the appearance of privacy, yet somehow their fingerprinting is able to defeat that privacy.
The only thing that I know for certain is that Google hires tons of super smart people explicitly to figure out how to work around privacy-protecting features on other companies’ web browsers, and their answer was to give away fonts for free. 🤷♂️
I’m not normally accused of being a conspiracy theorist, but damn, writing this up I sure as hell feel like one now. You’re welcome to call me crazy, because if I read this shit from anyone else, I’d think that they were nuts.
That’s really ingenious, if true. In support of your theory, there has been a bug open since 2016 for enabling Subresource Integrity for Google Fonts, and it still isn’t enabled.
I’m a bit sceptical about the concept; it seems like it comes with an enormous cost of downsides - fonts are not light objects, and really do benefit from caching. Whereas merely having the Referer header of the font request, in addition to timing information & what is sent with the original request (IP addr, user agent, etc), seems perfectly sufficient in granularity to track a user.
This feels too easy to detect for it not to have been noticed by now - someone would have attempted to add the SRI hash themselves and noticed it break for random users, instead of the expected failure case of “everyone, everywhere, all at once”.
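That detection is something anyone can try: compute the SRI value for a served font yourself and compare it across machines and sessions. A sketch in Node-flavored TypeScript (the font URL is a placeholder); if per-user font bytes were being served, this hash would vary instead of being stable:

// Compute a subresource-integrity value for a remote file.
import { createHash } from "node:crypto";

async function sriHash(url: string): Promise<string> {
  const res = await fetch(url);
  const bytes = Buffer.from(await res.arrayBuffer());
  return "sha384-" + createHash("sha384").update(bytes).digest("base64");
}

// Usage sketch (URL is illustrative):
// console.log(await sriHash("https://fonts.gstatic.com/s/somefamily/v1/file.woff2"));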
The fonts are constantly updated on Google fonts at the behest of the font owners, so the SRI hash issue being marked as WONTFIX isn’t very exciting, as I wouldn’t be surprised at it being legally easier for Google to host one version of the font (as Google is often not the holder of the Reserved Font Name), as the Open Font License seems to be very particular about referring to fonts by name. Reading through the OFL FAQ (https://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=OFL-FAQ_web), if I were a font distributor I would be hesitant to host old conflicting versions of the font amongst each other. Plus, easier to cache a single file than multiple, and lower effort on the side of a font foundry, as it means they do not need to have some versioning system set up for their font families (because they’re not just one weight, type, etc).
The fonts not being versioned goes beyond the SRI hash benefits: fonts often have breaking changes[0] in them, e.g. https://github.com/JulietaUla/Montserrat/issues/60, so a designer has no way of knowing that future font changes won’t result in any changes to the page. So in my mind, it really feels like it’s the foundry who wants there to be a single authoritative version of the font.
0: I suppose a semver major version bump in the font world would be character width changes.
Even if cpurdy’s version is true, I’m sure they use every normal and not so normal trick in the book to track as well. If your singular goal is identifying users uniquely, you would be stupid to rely on only 1 method. You would want 100 different methods, and you would want to deploy every single one. So if a competing browser vendor or unique edge case happens to break a single method, you don’t really care.
I agree caching of fonts is useful, but the browser would cache the most common fonts locally anyway. It would behoove Google to set the cache lifetime of the font file as long as practically possible, even if they were not using it to track you.
I agree, fingerprinting is a breadth of signals game, but I just can’t believe this vector, it feels way too technically complicated for comparable methods available within the same context - the idea was minute changes in font to result in different canvas render hashes, but a user already has a lot of signals within JS & canvas (system fonts, available APIs, etc) that are much quicker to test.
Fonts are cached per site by the browsers as a means of avoiding fingerprinting via cross-domain timing effects - Safari & Chrome call it partitioned cache; Firefox, first-party isolation. So Google can tell you’ve visited a site, as the Referer gets sent on first load - unless the site sets Referrer-Policy: no-referrer, of course.
I agree it’s technically complicated, but I imagine they want a variety of hard and not very hard methods. Assuming they do it, perhaps they only run it when they can’t figure out who you are from some other, easier method.
Considering the other BS that big companies tend to do, my first thought on this is basically https://www.youtube.com/watch?v=WRWt5kIbWAc
You win the interwebs today. The tick is the greatest superhero ever. ❤️
I always have a degoogled Chrome fork installed as a backup browser, in case I have website compatibility problems with Firefox. Some of my problems might be caused by my Firefox extensions, but it’s easier to switch browsers than to start disabling extensions and debugging.
On desktop I use Ungoogled Chromium. On Android I use Bromite and Vanadium. My Android fork is GrapheneOS, which is fully degoogled by default. I am grateful that Google created Android Open Source to counteract iOS, it is an excellent basis for distros like Graphene. I use F-Droid for apps.
Also, I have the Google Noto fonts installed as a debian package (fonts-noto). It’s a great font that eliminates tofu, and I thank Google for creating it. I don’t think Google is spying on my debian installed package list. If I don’t connect to google servers, they can’t see the fonts used by my browser.
I primarily rely on Ublock Origin for blocking spyware and malware, including Google spying. It’s not perfect. I can always use Tor if I feel really paranoid. The internet isn’t anonymous for web browsers if you are connecting with your real IP address (or using the kind of VPN that you warned about). Google isn’t the only surveillance capitalist on the web; I expect the majority of sites spy on you. Even Tor is probably not anonymous if state actors are targeting you. I wouldn’t use the internet at all if I was concerned about that.
This seems ahistoric to me; when Chrome was created, weren’t the most popular browsers IE6 and IE7?
Chrome initially came out in late 2008, when Firefox and Safari were actually doing OK, and IE8 was just around the corner, its betas were already out on Chrome’s release date. Chrome wasn’t even a serious player* until about 2010 or 2011, by which time IE9 was out and IE 6 was really quite dead. This article from June 2010 has a chart: https://www.neowin.net/news/ie6-market-share-drops-6-ie8-still-in-top-spot/
You can see IE8 and Firefox 3.5 were the major players, with Safari on the rise (probably mostly thanks to the iphone’s growing popularity).
Firefox was the #2 browser (behind only IE*) when Chrome was introduced, and at the time it was still growing its market share year after year, mostly at the expense of IE.
After its release, Chrome quickly overtook Safari (then #3), and proceeded to eat almost all of IE’s and Firefox’s market share. It is now the #1 browser, by a significant margin.
Interestingly, Safari did not yield market share to Chrome, and continued to grow its market share, albeit at a much lower rate than Chrome did. I assume that this growth is based on the growth of iPhone market share, and relatively few iPhone users install Chrome. Today, Safari is now solidly the #2 browser behind Chrome.
Edge (the new IE) is #3.
Firefox has dropped to the #4 position, in a three-way tie with Opera and Samsung.
Agreed. I’m not sure IE7 was even a thing until after Chrome. Also, when Chrome first came out it was a breath of fresh air, because at the time you either had to use Firefox or Opera, both of which had the issue of sites (made with IE in mind) breaking, or of the whole browser locking up because one site was hung. While I won’t speculate that tracking was a primary goal of Chrome development, let’s not pretend that it wasn’t leaps and bounds ahead of what else was available at the time on the IE6-centric web.
Chrome was definitely aimed directly at IE, most likely because they couldn’t bribe MS to default to Google search and because its outdated tech made the push to web apps much harder - consider the fact that early versions didn’t run on anything other than Windows (about 6 months between 1.0 and previews for Mac and Linux), and the lengths they went to get sandboxing to work on WinXP.
I think it’s fair to say that Firefox did have an impact - but it wasn’t that Chrome was created as a response, rather that Firefox defeated the truism that nothing could dethrone IE because it was built into Windows.
I generally don’t like when technology companies use their product to push some ideological agenda, so I would probably choose Chrome over Firefox if I had to choose between only those two. Also, the new Firefox tabs waste a lot of screen space, and they didn’t give any official way to return to the previous look, so that’s another argument (the last time I used FF I had to hack through some CSS, which stopped working a few updates later). The only thing I miss from FF is tab containers, but that’s about it.
But still, I use Vivaldi, which runs on Blink, so I’m not sure if I match your criteria, since your question is about “Chrome browser” not “Chrome engine”.
My work uses Google apps heavily and so I maintain a personal/work distinction in browsing by routing everything for work to Chrome. Privacy is a technical dead letter.
Yeah, I obviously have to use Chrome a bit, too. Because as a developer, ignoring the #1 browser seems stupid, any way you look at it. And a few sites only work on Chrome (not even on Safari or Firefox). I try to avoid Edge except for testing stuff, because the nag level is indescribable. “Are you super double extra sure that you didn’t want me to not be not your not default web browser? Choose one: [ Make Edge the default browser ] [ Use Microsoft suggested defaults for your web browser choice]” followed by “It’s been over three minutes since you signed in to your Microsoft cloud-like-thingy user account that we tied to your Windows installation despite your many protestations. Please sign in again, and this time we’ll use seven factor authentication. Also, you can’t not do this right now.” And so on.
I abhor and abjure the modern web, but we all have to live in it. On my Mac I use an app called ‘Choosy’ which lets me reroute URLs to arbitrary browsers, so I can use Safari without worry, given that I send all the garbage to either Chrome or a SSB.
Exciting news tbh. Freenom is a terrible steward + this change means the possibility that .ga doesn’t just get blanket-blacklisted everywhere.
Somewhat unfortunate about the bulk deletion, but I’d bet the vast majority are “free” Freenom domains, many of which are reasonably questionable anyway.
The only thing I disagree with is naming this “web maximalism”. This isn’t the web, it’s using the (misleadingly titled) “web browser” as an application platform. Which I don’t know if I disagree with. It’s exclusionary to everyone that isn’t Google, Apple and currently Mozilla, but if you’re not worried about the future of open access to these applications it is probably the best place to plonk your code, assuming your users will have the hardware capabilities to render it.
Maybe future OSes should use Linux to boot directly into a SpiderMonkey userland. I’ll go live with the orangutans at that point.
Congrats, you’ve basically just described ChromeOS?
ChromeOS (as of 2012 when I bought the adorable C720) boots into a Gentoo userland with Chrome as the root window, which isn’t going deep enough. PID 1 should be a JavaScript event loop.
Why even use Linux? Why not put a WASM / WASI / WebGPU runtime directly on hardware? The Birth and Death of JavaScript is getting closer and closer to reality every year. As modern web tech gains in power, it has to absorb and address concerns that traditional multi-user OSes have had for a very long time, while also solving portability problems that traditional OSes have (often) ignored. WASM is a truly portable and reasonably performant ISA; WebGPU is a truly portable and reasonably performant compute / graphics accelerator API, WASI is a truly portable and reasonably performant syscall ABI… If we lean in really heavily to this, isn’t it exciting to think of deploying totally portable containers to essentially any hardware anywhere? (Let me use the spare compute from my RISC-V NVMe controller for some background management tasks.)
It’s also possibly horrifying (rooting out malware, botnets)… But this future has a lot to recommend it, technically.
I only say Linux because it already has drivers for so much hardware. We don’t want to have to rewrite drivers for everything in HardWASM.
So, Nintendo is dead now?
I really don’t need a console game system anyway. It’s not like my eyeballs lack for diversions.
I don’t think so, because most normal customers are not affected. They buy a console with some games and play them. Playing games on their own computer is not a feature most users want/need.
Also, this case is a bit different, because Nintendo didn’t choose to be evil just to be evil. They currently want to prevent pirates from playing a not-yet-released game. I don’t think this will help, but I can clearly understand the reasoning.
Hah. It’s just another day in the “Nintendo being assholes” news cycle.
Even amongst the small group of people who care right now, a good chunk are going to stop caring as soon as the next installment of ${NOSTALGIC_NINTENDO_ONLY_FRANCHISE} gets announced.
People were super mad at Sony a few years back, and they’re still around, so I think Nintendo will be fine.
Sometimes I feel like I live in a totally different plane of existence as minimal-web enthusiasts. I want my tools to look good! A couple extra HTTP requests to download pretty fonts is definitely worth it. All of the caching done in your browser and in CDNs is conveniently ignored here.
The author is saying that default fonts do look good
The author is also arguing against going out of your way to make web apps look good.
Like most such articles, this one could use a couple of “it depends” in there.
I agree with you on aesthetic grounds. However, the caching situation has changed to protect user privacy. The performance calculus therefore changes too. Every first visit to a website requires CDN-hosted fonts to be downloaded regardless of whether some other site the user’s visited used the same font at the same URL. From 2010 to 2020, everyone experienced only occasional FOUT or FOIT from Google-hosted fonts. Now it’s a constant problem. There are mitigation strategies. Or, as the author rudely commands, one can go back to “web-safe” fonts (i.e. ones you think your users have installed in their OS). I think it’s a strategy worth considering for body copy on text-heavy websites, especially if most of your sessions are first-time visitors.
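One such mitigation, sketched with the standard CSS Font Loading API (the family name and URL are placeholders): render with a system font immediately and swap the webfont in only after it has actually loaded, so first-time visitors never stare at invisible text.

// Start on a system font; swap in the webfont once it loads.
const face = new FontFace("BodyFont", 'url(/fonts/body.woff2) format("woff2")');

face.load().then((loaded) => {
  document.fonts.add(loaded);
  // Only now reference the webfont, keeping system fallbacks in place.
  document.body.style.fontFamily = '"BodyFont", system-ui, sans-serif';
}).catch(() => {
  // Font failed to load: the system font simply stays.
});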
Edit: Meant to post at the top level rather than as a reply to you
This entire article, full of hyperbole, overexaggerated (and unsubstantiated) claims, weird overdone emphasis, and eventually landing on an ad, reads so much like a generic marketing funnel it hurts.
There’s some merit here—software eng in general seems to have a problem with cargo culting new tech, and frontend/JavaScript communities seem extra bad about it these days—but it’s pretty hard to process when it’s being commented on by off-brand Billy Mays screaming about how “the sky is falling” (read: nobody using React will have a job in the future) and how you should buy his courses ASAP.
I’ve heard this argument too many times, and I grow increasingly tired of attempting to reason through it. How is this true… at all? GitHub lets me host practically unlimited code for free. If I value my software being free, this should be the only thing I am concerned with. GitHub is run by capitalists, shocker they’re doing a capitalism or two. There is literally no better way to spread your code to the world than uploading it to GitHub, partially because that’s where all the users are.
The bar for entry for GitHub is extremely low. I learned how to use GitHub’s desktop app almost a decade ago, far before I became comfortable with the git CLI. I’ve met too many people that are not technically adept yet still have a GitHub account and are capable of contributing to projects. I can’t say the same about GitLab, Codeberg, or SourceHut, even if I enjoy parts of those products more than GitHub.
There’s over 100 million users. The evidence that all the geeks/developers are on GitHub has been weighed, and it is overwhelmingly in GitHub’s favor.
I think this beautifully illustrates the author’s perspective of those millions of users on GitHub: they’re writing bad code that doesn’t need to exist. Modern GitHub has almost exclusively devoted itself to streamlining the contribution process and encouraging communities to form around software projects. There are plenty of features I’m not a fan of, and some obvious changes I would make, but overall GitHub is incredibly easy to teach and use. I would love to say the same about any free alternative, but I can’t.
I avoided GitHub for a long time, for many of the arguments in the article. I eventually got an account about 10 years ago because I learned an important lesson: pick your battles. I don’t like the fact that GitHub is a proprietary dependency for a load of F/OSS projects. I don’t like that it distorts the F/OSS ecosystem by making CI free for some platforms and expensive for others. But when I moved projects there, I got more contributors and more useful contributions. It’s trivial for a user to submit a small fix to a project on GitHub without already having a deep engagement with the project. That is not true of anything else, in part because of network effects. I consider that more Free Software existing should be a primary goal of the Free Software movement and I’m willing to compromise and use proprietary tools to make this happen. At the same time, I’d encourage folks who have a problem with this to keep working on alternatives and come up with some way of funding them as a public service.
GitHub takes down code when requested. It is rather difficult to imagine Free Software properly destroying the system of copyrighted code when we are willing to let copyright complainants take down any code which threatens that system. I don’t think that forges ought to solve this problem entirely, but forges should not be complicit in the reaction against Free Software.
I recommend that you try out GitLab. If nothing else, it will refine your opinion.
Yes, basically every public forge will take down code when (legally) requested. America has the DMCA process for this; I’m not particularly familiar with German law, but Codeberg, a very strong proponent of free software, also calls out in their TOS:
Fighting the man and destroying copyright is a nice concept, but it’s a concept that needs to be pushed socially and legally, not just by ignoring the whole thing altogether.
Copyleft is only one approach to fighting copyright, and forges are only one approach to sharing code. It’s easy enough to imagine going beyond forges and sharing code in peer-to-peer ecosystems which do not respect takedown requests; I’m relatively confident that folks share exfiltrated corporate code from past “leaks” via Bittorrent, for example.
What are forges to do? Not accept DMCA requests? Free Software will continue to be incapable of taking down the copyright system, because it works within said system. If you want to change that system, political activism is going to do a lot more than GPL ever could.
I’ve tried GitLab, SourceHut, Gitea, and a few others, and while I enjoy different parts of those products, I couldn’t possibly call them “user-friendly” the same way I can GitHub. https://about.gitlab.com/ is solely an advertisement for their “DevSecOps” platform - this is, of course, really cool, but someone without the necessary background will not care about it. Even though a lot of this is just marketing, that marketing is important in painting an image for potential users. “The place for anyone from anywhere to build anything” speaks volumes while “DevSecOps Solution” silences the room.
I don’t understand why GitLab gets all the love. Because as a user of their hosted service, it is really Just. The. Same. Git, pull requests, wiki, projects boards, discussions, CI.
If you want to host your own instance then yes of course .. you can’t do that with GitHub. But for the hosted version - I would say it is just as proprietary and locked-in as GitHub is.
I feel like every time I log into GitLab I find a new bug.
Don’t get me wrong; I’d still pick it over GitHub, but it’s far, far behind Gitea.
That’s not the bar that was set, though. All I’m suggesting is that GitLab is a free alternative to GitHub, and that (because GitLab is designed to clone GitHub’s UX) GitLab is as “incredibly easy to teach and use” as GitHub. I’ve used GitLab instances in the past, and believe this to be a reasonable and met bar. Neighboring comments recommend Gitea, which also seems like a fine free alternative, and it might also meet the bar; I’m not personally experienced enough with Gitea to have an opinion.
Economies of scale are hard to beat.
One login for all the projects on github.
Automatic back-links between issues lists across all repositories.
And I can get my data back: my commits are easy to clone. I get a copy of all my comments by email (not the default, but an available option).
Some stuff needs a GraphQL query to download and expatriate (sketch below). I have to admit I don’t do that regularly, so I’m trusting them there.
Cheers to folks that take on this battle against centralization. I’m inclined to put my energy elsewhere.
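On the GraphQL point above: the export is scriptable against GitHub’s public GraphQL API. A rough sketch in TypeScript (only the first page is fetched; the token is a personal access token you supply):

// Pull your own issue comments out via GitHub's GraphQL API (v4).
const query = `
  query ($login: String!) {
    user(login: $login) {
      issueComments(first: 100) {
        nodes { body url createdAt }
        pageInfo { hasNextPage endCursor }
      }
    }
  }`;

async function exportComments(login: string, token: string) {
  const res = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, variables: { login } }),
  });
  const json = await res.json();
  return json.data.user.issueComments.nodes; // follow pageInfo.endCursor for the rest
}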
The author gives two reasons why they think that their proposal will be opposed:
There are more! The third is the complication of all workflows; tools which previously worked offline now sporadically enter codepaths which can break and require additional capabilities to run. The fourth is the possibility of users sending garbage data simply because they can. The fifth is the massive sampling bias introduced in the second post by the fact that many distros will unconditionally disable this telemetry for all prebuilt Go packages. The sixth is that removing features based on percentage of users enjoying the feature will marginalize users who don’t use tools in ways approved by the authors, removing choice and flexibility from supposedly-open-source toolchains.
Not the most ridiculous proposal from the author, but clearly not fully baked. An enormous amount of words were spent explaining in technical detail how the author plans to violate their users’ privacies without overt irritation. Consider this gem from the third post:
How do they know that it’s critical, then? So much ego and so little understanding of people as individuals.
The author is explicitly telling us that they should not be trusted to publish Free Software.
Edit: I explicitly asked the author about the fourth reason. The author doesn’t understand the problem, and intervention may be required in order to teach them a lesson.
A) Russ clearly understands that users can submit garbage data.
B) It’s pretty anti-social to be so opposed to telemetry that instead of merely opting out or boycotting a product you actively send junk data. It’s fine to take direct action against something that is harming you. For example, removing the speakers that play ads at gas station pumps in the US now is a positive good because those ads violate the social contract and provide no benefit to consumers who just want to pump their gas in peace. But this telemetry is meant to benefit Go as an open source tool, not line Google’s pockets. You can disagree about if it’s too intrusive if you want, but taking active measures against it is an uncalled for level of counter-aggression.
We must have different life experiences. Quoting from the Calvin and Hobbes strip published on August 23, 1995:
We don’t need to send data to an advertiser so that the advertiser can improve their particular implementation of a programming language. If this sort of telemetry is truly necessary, then let’s find or establish a reputable data steward who will serve the language’s community without preference to any one particular toolchain.
A) Calvin is meant to be an anti-social twerp, not a role model.
B) Do you think the Go tool is going to start serving DoubleClick ads? There’s no analogy here.
Twice you’ve used the phrase “anti-social”. We aren’t in the UK and your meme doesn’t work here. If you want to complain about a collective action being taken against a corporation, then find a better word, because the sheer existence of Google is damaging to the fabric of society.
Point (B) deserves to be addressed too. You seem to think that an advertiser only causes harm when they advertise. However, at Google’s size, we generally recognize that advertising causes harm merely by collecting structured data and correlating it.
If the problem is that Google exists at all, then get people in SF to throw eggs at the Google buses, or bribe some Senators into letting the million and a half pending anti-trust lawsuits go through. I don’t see how poisoning the telemetry data for Go has any connection whatsoever to the goal of a world without Google. All it does is harm people who would benefit from a better Go compiler and tools. You can hate Google all day, and you probably should. I deliberately have never applied to work at Google because I don’t believe in their corporate mission. However, none of that has anything to do with the issue at hand, which is adding telemetry to Go. If you can show that they’re going to secretly use the telemetry to send Big Mac ads to developers, then you can be mad at them, but you haven’t shown that.
I’d argue that collecting data via ET Phone Home telemetry without explicit consent is a grievous breach of the Social Contract. While I myself am a non-belligerent person, you can bet that there are people who share my outlook about consent and say that one bad turn deserves another. Poisoning telemetry data is a tactic in line with the traditions of Luddism and sabotage. It’s throwing a monkey-wrench into the grinding gears of the machine. While I’m not gonna sit here and encourage it (I’m not stupid), I do understand the mindset.
“Rub some telemetry on it” is definitely in line with Google culture. Poisoning the stream won’t stop that.
Yeah, kids, be good and don’t poison the datastream. The ethical approach is to stop using Go if they add opt-out telemetry to their toolchain.
A marketing company that calls your home phone number at dinner time is scummy, but you don’t get to yell at the person on the other end of the phone because they’re just taking the best low paying job they could find. Instead you have to get the https://en.wikipedia.org/wiki/National_Do_Not_Call_Registry enacted and get funding to enforce the law.
Legally, I sure do. If somebody invades my personal space, I’m well within my rights to yell at them for it, even though they’re just some poor schlub doing a job. Morally and ethically, it’s a different story. The compassionate person will refrain from yelling at the aforementioned poor schlub. Even then, it’s worth noting that the most saintly of us has bad days and doesn’t always do the moral, compassionate thing. My point? People who script-read for telemarketing are going to be verbally abused by people whose right to be let alone has been violated, and we cannot pretend otherwise.
At what point does “I’m just doing my job” go from being a reason for compassion to an excuse? I don’t have the answer to that, but I do try to be a compassionate person.
These two scenarios aren’t equivalent anyway. The telemarketing equivalent of poisoning the telemetry stream isn’t yelling at some unfortunate script-reader. The equivalent is lying to the marketing company, wasting its resources, or possibly defrauding it. Think: trolling the Jehovah’s Witnesses and Mormons who invite themselves to your door to share their religion. This can be lots of fun, and it wastes the other person’s cycles while simultaneously preventing them from harassing somebody else.
In 2000, someone called me up from one of those multi-level marketing scams, after a friend referred them to me. I spent three glorious hours on the phone on a Sunday night trolling some scammer. It was better than whatever was on TV at the time, no doubt.
I’m sure there are people who do this sort of thing to telemarketers. It isn’t abuse: it wastes their cycles while they still get paid (and are possibly entertained), and it keeps them from doing active harm to others in the meantime.
For lack of a better term, I’ll call this array of tactics “psychological warfare countermeasures”, to contrast them with electronic warfare countermeasures.
Of course, poisoning an opt-out telemetry stream is an electronic warfare countermeasure: the equivalent of deploying chaff to confuse enemy radar.
And I’ll end with the “Kids, don’t poison a telemetry stream” disclaimer I used yesterday.
Are you really trying to rally lobsters into sending junk data to the Go telemetry?
No. You aren’t correctly reading what I wrote. I’m saying that somebody will send them junk data in the future, and I anticipate that nothing short of that future situation will make them understand their mistake.
The plain reading of “intervention may be required in order to teach them a lesson” is that you’re rallying people to intervene, i.e. spam the system. If that’s not what you mean, then I’m sorry for misinterpreting you, but alyx apparently has the same misinterpretation.
Maybe my final comment in the GitHub discussion will make it clear to you and @alyx that, although I am thinking adversarially, I am still a whitehat in this discussion. I made similar comments during Audacity’s telemetry proposal, and did not develop any software to attack Muse Group.
Perhaps what irritates you is that I find the whole situation amusing. I don’t really respect the author’s understanding of society, and I think that often the correct thing to do in case of disaster is to learn a lesson. The author may not learn a lesson until somebody interferes with their plans, and I expect that the fallout will be hilarious, but I am not necessarily that somebody.
Nor, frankly, do I have thousands of spare USD/mo to waste on cloud instances just for the purpose of distracting Google. What do I look like, the government?
This is good. The tool chains are still open source, that is a fact. This will:
The world cannot be built around power users/tech elite. That would be bad.
The data can be interpreted with its limitations in mind. That seems better than simply having no data whatsoever about the world.
I don’t see how this follows from Russ’ belief that the build cache is critical to Go’s UX.
I don’t think it’s valid to conclude that the use cases that occur the most are the use cases that matter the most. Blind and vision-impaired people make up a small portion of all computer users; do you think it would be valid to ignore their needs in favor of making experiences better for users without vision impairments?
True, I agree with you. But I think it’s a safe default to reject supporting use cases with few users. There are concerns in specific cases which can reasonably overrule this principle, but I think the bar should be high for spending effort on rare uses.
Mandatory XKCD? https://xkcd.com/1172 :-)
Every project has limited resources to spend, which by definition means that some work will not be done. If you have two bugs with the same severity, it might be useful to know how many users each one impacts and factor that into deciding what to work on and in what order (toy sketch below).
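To make that concrete, a toy sketch with invented bugs and numbers: severity ranks first, and telemetry-reported user impact breaks ties.

```go
// Toy triage: order same-severity bugs by how many users each impacts.
package main

import (
	"fmt"
	"sort"
)

type bug struct {
	id       string
	severity int // higher = worse
	users    int // affected users, e.g. as estimated from telemetry
}

func main() {
	bugs := []bug{
		{"crash-on-save", 2, 40},
		{"crash-on-open", 2, 9000},
		{"typo-in-help", 1, 100000},
	}
	sort.Slice(bugs, func(i, j int) bool {
		if bugs[i].severity != bugs[j].severity {
			return bugs[i].severity > bugs[j].severity // worst severity first
		}
		return bugs[i].users > bugs[j].users // then widest impact first
	})
	for _, b := range bugs {
		fmt.Printf("%-14s severity=%d users=%d\n", b.id, b.severity, b.users)
	}
}
```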
Vision impaired users are an interesting comparison. There are lots of vision impaired people in the world. A neighbor of mine happens to work at an association for the blind/vision impaired. Many people wear glasses. Many people become blind or near blind in old age. All of us are vision impaired when it is dark, or when we use our eyes to look at something else while trying to use a computer simultaneously. Even if the absolute percentage of blind people is not that high, it ends up being a lot of people, especially factoring in situational blindness.
One can imagine a lot of other conditions that might also be good to accommodate for, but which are less common and more difficult to adapt to. In the end, I think the ADA’s “reasonable accommodation” standard is broadly correct. There has to be a balance to try to include people when the costs are bearable without making things so expensive that it’s not possible for the majority to use it either.
Crazy level of complexity to get that set up and running. Just not going to do that.
Honestly not that much worse than regular Mastodon
Regular Mastodon is already waaaaay too hard to set up and operate. Absolute nightmare compared to GotoSocial, Akkoma, Epicyon, etc.
Probably a good thing that there are many alternatives for getting on the Fediverse. Some of those projects will of course wither with time, but hopefully there will still be healthy competition even so.
This is a very interesting situation, sad to see it downvoted because it goes against prevailing politics.
This article has literally nothing to do with computing besides coincidentally being at a tech company; it seems pretty off-topic for lobsters tbh.
I flagged this because Lobsters is not a place to air dirty laundry about issues with company HR or office drama.
It’s not very interesting at all. Someone got fired because they wouldn’t follow office policy.
The CCC is political and should not be discussed here.
I want to politely disagree. I mean, yes, the CCC is a political organization. (I’d even go so far as to argue that every organization which declares itself apolitical is in favor of the status quo and thus, politically speaking, conservative. But let’s not get into that here. Let’s talk about which kinds of politics belong here, or have belonged here.)
In short, the CCC in Germany is somewhat comparable to the EFF in the USA, and I’ve seen a significant variety of EFF blog posts on lobste.rs. This search, https://lobste.rs/domains/eff.org, turns up plenty of political stuff about governmental control of end-to-end encryption, privacy laws, and consumer rights, and most of it was upvoted well into two digits.
Maybe someone will search through the moderation log and find more removed submissions than the still-standing ones I found, but I’d be surprised..
Well then, if EFF posts have snuck in, the moderators have some double standards.
Uhhh that would be your double standard then. I highly doubt that people here would be in favour of moderating posts about the CCC away. I hope?
There’s no apolitical way to judge the politicalness of a lobsters submission. Every possible submission is hateful to some ideology, the only important question is whether that ideology is one that the lobsters moderators care about (either positively or negatively).
While I see where you are coming from in calling them political, I think it would be odd to try to enforce that in a sane way. They have their ethical guidelines, which are very vague, along the lines of “make use of and share information, without borders” and “computers can be used for good”. Meanwhile the Go website featured a BLM banner on every page, and every other open source project mentions the war in Ukraine. Codes of Conduct tend to be political in nature, and pretty much every discussion is too. You can go further with censorship resistance, or discussions of Amazon, Google, etc. On top of that, the majority of discussions on AI and automation are political, and the tags practices, philosophy, culture, person, and especially law have a huge likelihood of being political in one way or another. The same is true for many open source projects: everything GNU-related, OpenBSD, etc. has a political side to it.
I think when people talk about politics they mostly mean party or partisan politics, and that’s not what the CCC is. They neither tell you how to vote nor are they strongly tied to particular parties; even their more political talks have featured a broad spectrum of speakers, and those talks have also drawn criticism from CCC members. And the Club is not the Congress.
Given that they are often enough invited by politicians of all parties to give expert opinions, and that at least some of them see themselves as providing a service and set their political opinions aside in those situations, I think that if you remove CCC-related news for this reason you really also have to remove any Go project links over the BLM banners, etc. But I am happy not to draw the lines there; I just hope they make sense. Communities that “ban” political topics usually mean topics that are pretty much set up to cause political arguments, which a news item about a famous computer-science event not happening isn’t.
I’m somewhat unhappy that there is a policy of banning everything from certain sources regardless of the actual content, though I also understand it, because moderation is work and effort, time which might be better spent in other ways.
As soon as you talk about individuals, groups of people, or organizations, and how they work or how their rules are made or decided, there’s politics involved. Like I said, many tags would have to be removed for politics to be gone, and even then it would creep in one way or another.
It’s fine, and many of its talks have appeared here before.
How is it political?
(Genuinely, I’m not too familiar with the CCC as an organization, though I’ve enjoyed watching their technical talks over the years.)
Source: https://www.ccc.de/en/hackerethics
I realize the odds we catch a flame war are pretty high, but on the balance this seems topical - it’s news about a business’s policies and practices, but it’s not like it’s about them updating their pricing plans or something. We’ve had a lot of discussions about centralized vs. distributed systems, censorship resistant systems, and generally how the systems we build influence the cultures we have. “Like every human culture, this is done by lurching from crisis to crisis trying to decide what’s acceptable and not, what’s individual or systemic.” But uh… that doesn’t mean we have to have a flame war about it. Please try to be kind to each other and to not indulge in hyperbole.
I love this site, but the seemingly random rules around “business news” are annoying. Everything that doesn’t relate to computing gets deleted, yet this post, and every post about Apple announcing new things to buy, is okay. Can the rules be made clearer, or abandoned if they don’t work?
I can read about the kiwi farms (fsck them!) elsewhere on the internet. Do we really need that drama here? If we do, where do we draw the line? There are many, many things in the world worse than kiwi farms, but they are equally off topic. Let’s keep it that way.
I should’ve put that better than “not pricing or something”. I think this news is a useful example of what happens in the real world when we’re talking about technical projects like immutable decentralized publishing, scaling, running your own servers, privacy, censorship resistance, and the role of the web in society at large. The discussion is rantier and less technical than usual, but I don’t think it’s wasted. The discussion is hard and often fractious because we developers have incredible power. It’s our work that built the systems and interconnections that led to this situation. So this isn’t technical, but it’s very much the results, and we need to understand them to inform the next systems we build.
You can also see the squishiness of the rules positively. Do I want business news about startup X dropping feature Z? No, I generally don’t. Do I want information about a precedent-setting case in which Cloudflare, a company serving ~30% of network traffic and sometimes called the guys who protect DDoS groups (among other things), exercised its right to refuse service? Yeah, I do, even if I’m more interested in later updates about what this means for future cases like it. And I’m interested in discussion of the M1 chip from people on Lobsters, since it changes the mobile CPU market dramatically in terms of possible power, heat, and performance, even though I don’t intend to buy anything Apple for now. It’s just not that black and white.
You can get all of that on the orange site, where all of what you describe is on topic. I think KF is garbage, but again, this site is about computing. Learning that a publicly traded company in the US has one customer fewer is not about computing. There is nothing technically interesting here, and technical interest, according to the description, is what this site is all about.
Regarding Apple announcements: minute-by-minute live-blogging of whatever iPad or 3D emoji or TV series gets announced is also very much not about computing.
I haven’t seen that one here
@calvin often does a very nice job covering apple events. Of which some are more about computing than others, in my estimation. But here are a couple of recent-ish examples:
I find them valuable and hope they keep getting posted here.
I don’t find them valuable, but I could choose to ignore them. What annoys me is how any other article that is “business news” gets deleted, yet more Apple TV+ shows or the colors of the new iPhones apparently aren’t business news and deserve the front page of a site that literally says this about itself:
He’s probably referring to calvin’s threads on Apple’s big announcement days. They’re a tradition, where usually we don’t give room for product announcements like that. It’s a fair criticism that this implies our rules aren’t consistent.
Fair, but I think the rules in the end just reflect the intention of the community, and because of that I think it’s fine when the community decides it wants to break a rule and make these announcements an exception. (Calvin’s summary could be seen as its own post, and is probably the thing that makes it worthwhile.) Note: I don’t actually buy or like Apple stuff, so it’s not like I’m in favor of that exception for my own benefit.
Well, you must be new around here then
well I actually did ignore them, apparently..
Interestingly, their previous blog post (https://blog.cloudflare.com/cloudflares-abuse-policies-and-approach/) laid out their policy framework and was both an excellent post and very computing and internet infrastructure philosophy related, and therefore on topic.
But, we didn’t discuss that post (because no one submitted it?) which makes this discussion all the more off topic, in my opinion: either we’re interested in the discussion AROUND the ban (which is on topic and the other post is better) or we’re only interested in the specific end result (which is obviously wildly off topic).
“Infrastructure business announces policy of trying to act like a common carrier, even though it isn’t one” is a little interesting, and if it wasn’t discussed here, that’s a loss but a small one. “Policy announcement immediately runs into trouble” is interesting and contextually makes the first one more interesting.
Business news is generally off-topic, as I recall for many years. Blog posts announcing a decision to stop service to a customer–especially without accompanying coverage/linkage of the facts of the case, the technical issues at play, or even numbers making an economic argument for why it makes sense–would seem to me to be similarly off-topic.
I generally agree with this stance, as there’s nothing strictly technical here other than “we’re blocking somebody”.
On the other hand…
Fuck KiwiFarms.
If the standard of worthwhile posting and justification has fallen to “Fuck $thing”, this site’s future as a forum for discussion isn’t significantly better than 4chan or reddit.
Let’s put it a different way then.
A major company that touches a LOT of the spaces many of the readers here touch has just made a very large decision regarding a cesspool that’s also impacted, directly or indirectly, many of the readers here.
It seems pretty obvious that this is a bit outside the usual scope but, frankly, this is one of those things a lot of this community do care strongly about and it’s valuable for them to be made aware of.
Also, yeah, fuck KiwiFarms.
Caring strongly is insufficient justification.
Many readers care strongly about the war in Ukraine or Brexit or US politics or Taiwan–all of those are just as off-topic.
This bit of news is being well-covered in the orange site, twitter, and elsewhere; I see no harm in leaving it off here.
KiwiFarms presents a serious existential risk for Lobsters because of the threats to our community. @alyx did not give a list of community members who have been impacted; hopefully it is obvious that giving such a list would be tantamount to surrender.
If we don’t care strongly about each other, then we don’t have a community.
We manifestly don’t care strongly about each other, as seen in the shitflinging any time politics or meaningful policy differences come up. Post as a Palantir employee here and see how much strong caring for fellow Lobsters there is!
(Would I prefer that we be a more united community? Of course. Do I think we were some number of years ago? Yes. Do I think that we can do that today? No, not really, because reasons.)
I disagree with your claim of KF being some special “existential risk” to Lobsters, especially without proof.
And again, I’m not saying anything in support of KF. As far as I know, they’re unrepentant assholes. I am saying that we should be sticking to on-topic material instead of what amounts to gloating.
An article on how to make a botnet to go after Kiwifarms, or on the legal issues encountered in taking them to court, or on scraping them for automatic doxing notification would all be great submissions here! I would upvote them!
This, though, is just an inactionable announcement that Cloudflare kicked off a customer that some people here may dislike.
I can’t speak to that, but I’ve posted here as a Microsoft employee and, although there are some very anti-Microsoft folks here, I’ve never felt personally attacked. That doesn’t mean that everyone agrees with me[1], but it does mean that I can almost always[2] expect that people disagree with me politely and respectfully, with well-reasoned arguments.
[1] Weird, since I’m always right…
[2] Everyone has bad days where they’re unusually cranky; in general, this community seems pretty good at pointing out when someone is being uncharacteristically mean and asking them to change, and at kicking them out if they don’t.
Then that’s the discussion, not a Cloudflare blog post.
Then this would be off-topic, and the articles that use the banning of kiwifarms as a springboard to talk about these things would be on topic, as has been the precedent for a while.
That’s a strong argument that I made the wrong call here. One of the reasons I kept this up was that I figured we’d see a stream of responses over the next week or so about why DDOS protection works this way, alternate systems that would’ve had different failure modes, etc. Keeping the CF post made for a single natural place to merge those stories so users can discuss or hide conveniently. I commented early trying to point the discussion towards the technical aspects, and clearly did not succeed.
Would you say the distinction you’re making is that Lobsters primarily looks at things through a technical lens to discuss implementation? It isn’t well reflected on /about, and hopefully we can improve that writing and the public understanding.
Are any of V’s audacious claims real yet?
Basically, the problem with V now (and the reason why I think we should ban it and other similar projects like Urbit from lobsters) is that V boosters have a long enough history of shameless lies which require increasing levels of effort to uncover that I have no confidence that any actually interesting V announcement is real, or that any actually interesting V demo isn’t faked. Like, you could give me the binary to run myself and I’d believe that there was some kind of trick to it rather than the demo being legit unless I had gone over it with a decompiler and a fine toothed comb first.
It’s also kind of vexing to see @volt_dev getting away with ramping up the V promotional posts right after a popular post showed that many key V claims were obvious lies.
Sorry, not on point but Urbit is horrible. The community is extremely toxic to non-cishet dudes and it’s infuriating how many angry DMs I got for putting my pronouns in my username.
This is an aside at best, but how are we not yet past the point where we let people write “change the world” when what they mean is “sell something” or just “program the computer”?
Maybe because letting people choose their own words is part of the social norm. You’re welcome to complain about it, but “let” implies we have some say in the words they choose.
Because who really cares that much about someone else’s hyperbole?
Ugh. Some responses come to mind:
(Apparently, all of my ideas involve ssh.)
The problem here is the audience who’s setting up these instances. It is uh, almost definitely not going to be people who are familiar with any of those things.
No, the actual problem is that someone misled them into thinking that installing WP yourself when you’re not familiar with any of those things is an acceptable state of affairs.
People see that there’s a “free” way to get it, and rather than signing up for some managed plan that would actually serve them better, they decide it’s “easy enough” to just follow the steps in a README… but over time they will almost certainly tend toward a 100% chance of breaking or getting hacked, as there is no actual plan to maintain the site.
WordPress has had automatic background updates for core for a long time AFAIK (tunable via the WP_AUTO_UPDATE_CORE constant in wp-config.php), so I’m not sure the risk is as high as you think it is. The real problem comes when people install shitty unmaintained add-ons.
I didn’t consider these “solutions” as much as “possible workarounds” - they’re not trivial!
Seems like exactly what got owned is being disputed by the VT founder — https://twitter.com/bquintero/status/1518738072820670464
I agree with andyc’s summary, but part of the impression I get is that the author is pushing some strong correlations without looking at the deeper “why?” behind them.
A single-binary site using SQLite is probably going to be easier to deploy and maintain, yes. But why? Most likely because there are fewer moving parts, period. As we decrease the number of moving parts, we simplify maintainability and (probably) deployability, but we’re also sacrificing something. Whether that’s what can be done live versus what needs a “deploy”, or flexibility, or something else, we’re not getting easier deploys and less maintenance for free by rebuilding software as single-binary deployables powered by SQLite.
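For a sense of what that shape looks like, here’s a minimal sketch, assuming the mattn/go-sqlite3 driver (which needs cgo) and a made-up hit counter: one file on disk and no database server to operate, but every change now rides along with a binary deploy.

```go
// Sketch: a whole "site" as one binary plus one SQLite file.
package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	db, err := sql.Open("sqlite3", "app.db") // the entire persistence layer
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS hits (at TEXT)`); err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if _, err := db.Exec(`INSERT INTO hits (at) VALUES (datetime('now'))`); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		var total int
		db.QueryRow(`SELECT COUNT(*) FROM hits`).Scan(&total)
		fmt.Fprintf(w, "hits so far: %d\n", total)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```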
Every time you see an open-source rewrite of a system from a Google paper, you should keep in mind that Google wouldn’t have published the system unless it had already been replaced by something better.
This is not true at all.
Spanner, Zanzibar, and Borg, among others, are proof that this isn’t true. gRPC is even a little different, in that they were still adopting it internally as it was released publicly.
Systems change over time, so in the long term there will be outdated papers, but that’s just because papers are a snapshot of knowledge at the time of authorship.
Even if they have, so what? If the concept is one that (when implemented) works well, and there isn’t an obviously better alternative publicly available, who cares whether or not it’s not pulled straight from Google’s latest stack?
Though, even if Google’s moved on internally, these implementations can have a big impact - Hadoop comes to mind.
I have played with Loki and think it’s a pretty good model. For small use cases you can just use the filesystem storage option, and only upgrade to an object store when your scale requires it. It does require some configuration on the ingestion side to specify the fields you are going to index on, but that doesn’t seem like that big of a problem to me.
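For reference, the small-scale setup is roughly the documented single-binary local config; a sketch along these lines (paths and schema dates are illustrative, and details shift between Loki versions):

```yaml
# Sketch of a single-binary Loki with filesystem storage, no object store.
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /var/lib/loki
  storage:
    filesystem:
      chunks_directory: /var/lib/loki/chunks
      rules_directory: /var/lib/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper     # local index
      object_store: filesystem  # chunks live on local disk
      schema: v11
      index:
        prefix: index_
        period: 24h
```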
It may have been me configuring it poorly (probably), but my experience with Loki in a small-scale setting has been that filesystem storage will do terrible, terrible things to your inodes.
Just something to keep an eye on. Besides the “Oops! All inodes!” issue, Loki+Grafana is a pretty nice setup.
Related: https://github.com/grafana/loki/issues/364
I have not run into that issue in my setup. It may be because the amount of logs I’m pulling in is actually quite small, or it may be something else about my setup.
It also has to do with the filesystem you are using, so it might partly be about picking the right tool for the job. But it would certainly make sense for Loki to structure the chunk files in a better way, regardless.