You should add Paperkast to the list of sister sites.
How is https://github.com/lobsters/lobsters/wiki not a standardized directory?
I also wish I’d discovered it sooner. However, other than nesting it under the “Wiki” link at the bottom of the page, I don’t see a solution that wouldn’t start cluttering up the site with information most people won’t need.
It’s linked from the about page.
Thanks! A Sudoku solver has been my go-to problem ever since I started learning Haskell. I’ve used it to learn and explore various facets of Haskell.
Yes, that’s usually true, because you have more places to embed the bits, and you can use more LSBs, usually producing a less noisy image.
That last point is true if the host image is larger, but in the post, the host image always remains the same size.
In the results, I’m increasing the size of the secret images, thus increasing the amount of data to encode, while the quality of the resulting stego image decreases.
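To make the capacity/quality tradeoff concrete, here is a minimal sketch of multi-LSB embedding (the `embed_lsb` helper and its bit layout are my own assumptions for illustration, not the post’s code): with the host size fixed, a bigger secret can only be paid for with more LSBs per host byte, and every extra LSB doubles the worst-case per-byte perturbation.

```python
import numpy as np

def embed_lsb(host: np.ndarray, bits: np.ndarray, n_lsb: int = 1) -> np.ndarray:
    """Embed a flat stream of 0/1 bits into the n_lsb lowest bits of each host byte."""
    flat = host.flatten().astype(np.uint8)
    capacity = flat.size * n_lsb
    if bits.size > capacity:
        raise ValueError("secret too large for this host at this LSB depth")
    # Pad the bit stream out to full capacity, then group n_lsb bits per host byte.
    padded = np.zeros(capacity, dtype=np.uint8)
    padded[:bits.size] = bits
    groups = padded.reshape(flat.size, n_lsb)
    # Pack each group of n_lsb bits into one small integer per host byte.
    values = np.zeros(flat.size, dtype=np.uint8)
    for i in range(n_lsb):
        values = (values << 1) | groups[:, i]
    # Clear the host's low bits and write the secret bits into them.
    mask = np.uint8((0xFF << n_lsb) & 0xFF)
    return ((flat & mask) | values).reshape(host.shape)
```

With `n_lsb=1` each byte changes by at most 1; with `n_lsb=4` it can change by up to 15, which is exactly the quality drop visible as the secret grows.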
This blogpost is a good example of fragmented, hobbyist security maximalism (sprinkled with some personal grudges based on the tone).
Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.
Speaking of threat models, it’s important to start from them, and doing so explains most of the misconceptions in the post.
Were tradeoffs made? Yes. Have they been carefully considered? Yes. Signal isn’t perfect, but it’s usable, high-level security for a lot of people. I don’t say I fully trust Signal, but I trust everything else less. Turns out things are complicated when it’s about real systems and not fantasy escapism and wishes.
Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.
In this article, resistance to governments constantly comes up as a theme of his work. He also pushed for his tech to be used to help resist police states, as with the Arab Spring example. Although he mainly raised the baseline, the tool has been promoted for resisting governments, and articles like that could strengthen the perception that it was secure against governments.
This nation-state angle didn’t come out of thin air from paranoid security people: it’s the kind of thing Moxie talks about. In one talk, he even started with a picture of two activist friends jailed in Iran, in part to show the evils that motivate him. Stuff like that only made the things Drew complains about on centralization, control, and dependence on cooperating with surveillance organizations stand out even more due to the inconsistency. I’d have thought he’d make signed packages for things like F-Droid sooner if he’s so worried about that stuff.
A problem with the “nation-state” rhetoric that might be useful to dispel is the idea that it is somehow a God-tier where suddenly all other rules become defunct. The Five Eyes are indeed “nation states” and have capabilities that are profound; like the DJB talk speculating about how many RSA-1024 keys they’d likely be able to factor in a year given such-and-such developments, and what you can do with that capability. That’s scary stuff. On the other hand, this is not the “nation state” that is Iceland or Syria. Just looking at the leaks from the “Hacking Team” affair, there are a lot of “nation states” forced to rely on some really low-quality stuff.
I think Greg Conti depicts it rather well in his “On Cyber” (sorry, I don’t have a copy of the section in question): a more reasonable threat model of capable actors you do need to care about is that of organized crime syndicates, which seems more approachable. Nation-state is something you are afraid of if you are a political actor or in conflict with your government, where “we can also waterboard you to compliance” factors into your threat model; organized crime hits much more broadly. That’s Ivan with his botnet of internet-facing XBMC^H Kodi installations.
I’d say the “Hobbyist, Fragmented Maximalist” line is pretty spot on, with a dash of “Confused.” The ‘threats’ of the Google Play Store are limited (test it: write some malware and see how long it survives; they are doing things there), while the odds of any other app store (F-Droid, or the ones from Samsung, HTC, Sony et al.) being completely owned by much less capable actors are way, way higher. Signal (perhaps a signal-to-threat ratio?) performs a good enough job of making reasonable threat actors much less potent. Perhaps not worthy of “trust,” but worthy of day-to-day business.
Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.
And yet, Signal is advertising with the face of Snowden and Laura Poitras, and quotes from them recommending it.
What kind of impression of the threat models involved do you think does this create?
Whichever ones are normally in the media for information security saying the least amount of bullshit. We can start with Schneier, given he already does a lot of interviews and writes books laypeople buy.
He encourages use of stuff like that to raise the baseline, but not for stopping nation states. He also constantly blogged about the attacks and legal methods they used to bypass technical measures. So his reporting was mostly accurate.
We counterpoint him here or there, but his incentives and reputation are tied to delivering accurate info. Moxie’s incentives would, if he’s selfish, lead to lock-in on questionable platforms.
We’ve had IRC from the 1990s, ever wonder why Slack ever became a thing? Ossification of a decentralized protocol.
I’m sorry, but this is plainly incorrect. There have been many expansions of IRC, including the most recent effort, IRCv3: a collection of extensions to IRC that add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”
If you actually look at the protocols, Slack is a clear case of Not Invented Here syndrome. Slack’s interface is not only slower, but does some downright crazy things (such as transliterating a subset of emojis to plain text, which results in batshit-crazy edge cases).
If you have a free month, try writing a slack client. Enlightenment will follow :P
I’m sorry, but this is plainly incorrect. There have been many expansions of IRC, including the most recent effort, IRCv3: a collection of extensions to IRC that add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”
Per IRCv3 people I’ve talked to, IRCv3 blew up massively on the runway, and will never take off due to infighting.
There are swathes of people still using Windows XP.
The primary complaint of people who use Electron-based programs is that they take up half a gigabyte of RAM to idle, and yet they are in common usage.
The fact that people are using something tells you nothing about how Good that thing is.
At the end of the day, if you slap a pretty interface on something, of course it’s going to sell. Then you add in that sweet, sweet Enterprise Support, and the Hip and Cool factors of using Something New, and most people will be fooled into using it.
At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on: https://ircv3.net/specs/extensions/batch/chathistory-3.3.html)
At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on […])
The time for the IRC group to be working on a solution to persistent history was a decade ago. It strikes me as willful ignorance to disregard the success of Slack et al over open alternatives as mere fashion in the face of many meaningful functionality differences. For business use-cases, Slack is a better product than IRC full-stop. That’s not to say it’s perfect or that I think it’s better than IRC on all axes.
To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool? But imagine being a UX designer and wanting to help make some native open-source IRC client fun and easy to use for a novice. “Sisyphean” is the word that comes to mind.
If we want open solutions to succeed we have to start thinking of them as products for non-savvy end users and start being honest about the cases where closed products have superior usability.
IRC isn’t hip and cool because people can’t make money off of it. Technologies don’t get investment because they are good, they get good because of investment. The reason that Slack is hip/cool and popular and not IRC is because the investment class decided that.
It also shows that our industry is just a pop culture and couldn’t give a shit about good tech.
There were companies making money off chat and IRC; they just didn’t create something like Slack. We can’t just blame the investors when they were backing companies making chat solutions whose management stuck with what didn’t work long-term or for a huge audience.
IRC happened before the privatization of the internet, so the standard didn’t lend itself well to companies making good money off of it. Things like Slack are designed for investor optimization, whereas things like IRC were designed for use and openness.
My point was that there were companies selling chat software, including IRC clients. None pulled off what Slack did. Even those doing IRC with money, or making money off it, didn’t accomplish what Slack did, for some reason. It would help to understand why that happened. Then the IRC-based alternative can try to address that, from features to business model. I don’t see anything like that when most people who like FOSS talk about Slack alternatives. And they’re not Slack alternatives if they lack what Slack customers demand.
Thanks for clarifying. My point can be restated as: there is no business model for federated and decentralized software (until recently; see cryptocurrencies). Note that most open and decentralized tech of the past was government funded and therefore didn’t face business pressures. This freed designers to optimise for other concerns instead of the business ones Slack optimises for.
To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool?
The argument being made is that the vast majority of Slack’s appeal is the “hip-and-cool” factor, not any meaningful additions to functionality.
Right, as I said I think it’s important for proponents of open tech to look at successful products like Slack and try to understand why they succeeded. If you really think there is no meaningful difference then I think you’re totally disconnected from the needs/context of the average organization or computer user.
That’s all well and good, I just don’t see why we can’t build those systems on top of existing open protocols like IRC. I mean: of course I understand, it’s about the money. My opinion is that it doesn’t make much sense to insist that opaque, closed ecosystems are the way to go. We can have the “hip-and-cool” factor, and all the amenities provided by services like Slack, without abandoning the important precedent we’ve set for ourselves with protocols like IRC and XMPP. I’m just disappointed that everyone’s seeing this as an “either-or” situation.
I definitely don’t see it as an either-or situation, I just think that the open source community typically has the wrong mindset for competing with closed products and that most projects are unapproachable by UX or design-minded people.
Open, standard chat tech has had persistent history and much more for decades in the form of XMPP. Comparing to the older IRC on features isn’t really fair.
The fact that people are using something tells you nothing about how Good that thing is.
I have to disagree here. It shows that it is good enough to solve a problem for them.
I don’t see how Good and “good enough to solve a problem” are related here. The first is a metric of quality, the second is the literal bare minimum of that metric.
Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases, providing an APK is indeed correctly viewed as harm reduction.
I’d dispute that. People who become interested in Signal seem much more likely to be using F-Droid than, say, WhatsApp users are. Signal tries to be an app accessible to the common person, but few people really use it or see the need… and often those who do are free software enthusiasts or people who are fed up with Google and surveillance.
More likely, sure, but that doesn’t mean many of them actually reach that threshold of effort.
I think I understand where the author’s coming from, but I think some of his concerns are probably a bit misplaced. For example, unless you’ve stripped all the Google off your Android phone (which some people can do), Google can muck with whatever on your phone regardless of how you install Signal. In all other cases, I completely get why Moxie would rather insist you install Signal via a mechanism that ensures updates are efficiently and quickly delivered. While he’s got a point on centralized trust (though a note on that in a second), swapping out Google Play for F-Droid doesn’t help there; you’ve simply switched who you trust. And in all cases of installation, you’re trusting Signal at some point. (Or whatever other encryption software you opt to use, for that matter—even if it’s something built pretty directly on top of libsodium at the end of the day.)
That all gets back to centralized trust. Unless the author is reading through all the code they’re compiling, they’re trusting some centralized sources—likely whoever built their Android variant and the people who run the F-Droid repositories, at a bare minimum. In that context, I think that trusting Google not to want to muck with Signal is probably honestly a safe bet for most users. Yes, Google could replace your copy of Signal with a nefarious version for their own purposes, but that’d be amazingly dumb: it’d be quickly detected and cause irreparable harm to trust in Google from both users and developers. Chances are honestly higher that you’ll be hacked by some random other app you put on your phone than that Google will opt to go after Signal on their end. Moxie’s point is that you’re better off trusting Signal and Google than some random APK you find on the Internet. And for the overwhelming majority of users, I think he’s entirely correct.
When I think about something like Signal, I usually focus on, who am I attempting to protect myself from? Maybe a skilled user with GPG is more secure than Signal (although that’s arguable; we’ve had quite a few CVEs this year, such as this one), but normal users struggle to get such a setup meaningfully secure. And if you’re just trying to defend against casual snooping and overexcited law enforcement, you’re honestly really well protected out-of-the-box by what Signal does today—and, as Mickens has noted, you’re not going to successfully protect yourself from a motivated nation-state otherwise.
and cause irreparable harm to trust in Google from both users and developers
You have good points, except this common refrain we should all stop saying. These big companies were caught pulling all kinds of stuff on their users; they usually keep their market share and riches. Google was no different. If this was detected, they’d issue an apologetic press release saying either that it was a mistake in their complex distribution system, or that the feature was for police with a warrant and was used accordingly or mistakenly. The situation shifts from “everyone ditch evil Google” to a more complicated one most users won’t take decisive action on. Many wouldn’t even want to think too hard about it, or would otherwise assume mass spying at the government or Google level is already going on. It’s something they tolerate.
I think that trusting Google not to want to muck with Signal is probably honestly a safe bet for most users.
The problem is that Moxie could put things in the app if enough rubber-hose (or money, or whatever) is applied. I don’t know why this point is frequently overlooked. These things are so complex that nobody could verify that the app in the store isn’t doing anything fishy; there are enough side channels. Please stop trusting Moxie, not because he has done something wrong, but because it is the right thing to do in this case.
Another problem: Signal’s servers could be compromised, leaking the communication metadata of everyone. That could be fixed with federation, but many people seem to be against federation here, for spurious reasons. That federation and encryption can work together is shown by Matrix, for example. I grant that it is rough around the edges, but at least they try, and for now it looks promising.
Finally (imho): good crypto is hard, as the math behind it has hard constraints. Sure, the user interfaces could be better in most cases, but some things can’t be changed without weakening the crypto.
many people seem to be against federation here, for spurious reasons
Federation seems like a fast path to ossification. It is much harder to change things without disrupting people if there are tons of random servers and clients out there.
Also, remember how great federation worked out for xmpp/jabber when google embraced and then extinguished it? I sure do.
Federation seems like a fast path to ossification.
I have been thinking about this. There are certainly many protocols that are unchangeable at this point but I don’t think it has to be this way.
Web standards like HTML/CSS/JS and HTTP are still constantly improving despite having thousands of implementations and different programs using them.
From what I can see, the key to stopping ossification of a protocol is to have a single authority and source of truth for the protocol. They have to be dedicated to making changes to the protocol and they have to change often.
I think your HTTP example is a good one. I would also add SSL/TLS to that, as another potential useful example to analyze. Both (at some point) had concepts of versioning built into them, which has allowed the implementation to change over time, and cut off the “long tail” non-adopters. You may be on to something with your “single authority” concept too, as both also had (for the most part) relatively centralized committees responsible for their specification.
I think html/css/js are /perhaps/ a bit of a different case, because they are more documentation formats, and less “living” communication protocols. The fat clients for these have tended to grow in complexity over time, accreting support for nearly all versions. There are also lots of “frozen” documents that people still may want to view, but which are not going to be updated (archival pages, etc). These have also had a bit more of a “de facto” specification, as companies with dominant browser positions have added their own features (iframe, XMLHttpRequest, etc) which were later taken up by others.
Federation seems like a fast path to ossification. It is much harder to change things without disrupting people if there are tons of random servers and clients out there. Also, remember how great federation worked out for xmpp/jabber when google embraced and then extinguished it? I sure do.
It may seem so, but that doesn’t mean it will happen. It has happened with xmpp, but xmpp had other problems, too:
Matrix does some things better:
The Google problem you described isn’t inherent to federation. It’s more of a people problem: too many people being too lazy to set up their own instances, just using Google’s, essentially forming a centralized network again.
Maybe a skilled user with GPG is more secure than Signal
Only if that skilled user contacts solely with other skilled users. It’s common for people to plaintext reply quoting the whole encrypted message…
And in all cases of installation, you’re trusting Signal at some point.
Read: F-Droid is for open-source software. No trust necessary. Though to be fair, even then the point on centralization still stands.
Yes, Google could replace your copy of Signal with a nefarious version for their own purposes, but that’d be amazingly dumb: it’d be quickly detected and cause irreparable harm to trust in Google from both users and developers.
What makes you certain it would be detected so quickly?
“Read: F-Droid is for open-source software. No trust necessary”
That’s nonsense. FOSS can conceal backdoors if nobody is reviewing it, which is often the case. Bug hunters also find piles of vulnerabilities in FOSS, just like in proprietary software. People who vet the stuff they use have limits on skill, tools, and time that might make them miss vulnerabilities. Therefore, you absolutely have to trust the people and/or their software, even if it’s FOSS.
The field of high-assurance security was created partly to address being able to certify (trust) systems written by your worst enemy. They achieved many pieces of that goal, but new problems still show up. Almost no FOSS is built that way. So it sure as hell can’t be trusted if you don’t trust those making it. Same with proprietary.
It’s not nonsense, it’s just not an assurance. Nothing is. Open source, decentralization, and federation are the best we can get. However, I sense you think we can do better, and I’m curious as to what ideas you might have.
There’s definitely a better method. I wrote it up, with roryokane being nice enough to make a better-formatted copy here. Spoiler: none of that shit matters unless the stuff is thoroughly reviewed, with proof sent to you, by skilled people you can trust. Even if you do all that, the core of its security and trustworthiness will still fall on who reviewed it, how, how much, and whether they can prove it to you. It comes down to trusting a review process by people you have to trust.
In a separate document, I described some specifics that were in high-assurance security certifications. They’d be in a future review process since all of them caught or prevented errors, often different ones. Far as assurance techniques, I summarized decades worth of them here. They were empirically proven to work addressing all kinds of problems.
even then the point on centralization still stands.
fdroid actually lets you add custom repo sources.
The argument in favour of F-Droid was twofold, and covered the point about “centralisation.” The author suggested Signal run an F-Droid repo themselves.
By May 25, most corporates had just amended their Privacy Policy volumes, and annoyed consumers were forced to click through to accept them without reading.
I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.
In my opinion, preoccupation with nominal personal data actually displaces real privacy. Who cares about the privacy of their name and family name, or office held? Except to hide shady politicking and worse, the majority of us are happy to consciously publicize them as much as possible. It’s wrong, impractical, and disrespectful to assume the contrary.
There are dozens of situations when it’s actually socially undesirable to keep it private, yet it is zealously protected under the GDPR in exactly the same way as your shopping history or your family photos.
I do care about the privacy of my name and family name. Is my name public on the internet? Yes. If I wanted to make it not public, would I want to be able to do so? Yes. Simple as that, really.
Equally questionable are formal and bureaucratic prescriptions for better data protection — more documentation, privacy impact audits, formal training, etc.
Does anyone honestly believe that more paperwork will lead to more privacy? More security risks in handling of our data (say thousands of hand signed consents) are somewhat more likely, I’m afraid.
Why would formal training around data protection, auditing of privacy protection, and documentation of efforts to comply with the GDPR lead to anything other than more privacy?
Apart from the right to complain under the new rules and few marginal rights — which are primarily of interest to the corrupt and the criminal, like the right to be forgotten — the average data subject barely gained any new privacy through the GDPR.
Yeah okay, nothing interesting to read here. The right to be forgotten is certainly not ‘primarily of interest to the corrupt and the criminal’. What a great load of ‘if you have nothing to fear you have nothing to hide’ twaddle.
By May 25, most corporates had just amended their Privacy Policy volumes, and annoyed consumers were forced to click through to accept them without reading.
I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.
Excuse me if I misunderstand, but isn’t it still the case that they can add terms to their privacy policy, then tell users to either check all the boxes or leave?
That’s exactly what you can’t do — you can’t refuse service if a user says “no” to tracking (unless you can prove in court that the tracking is strictly required for the functioning of the service).
An example of a site that doesn’t follow the rules you state at all:
If you do not agree with our new privacy policy (that haven’t really changed much) we absolutely respect that. Feel free to go to your user settings page and delete your account. Optionally, you can change your settings and/or user profile if that helps. If you miss any settings feel free to let us know. If you just miss-clicked you can always go back and agree to the policy. If you have more questions feel free to send an e-mail to support@{{domainName}} and we will do our very best help you out.
They’re relatively small though, so I hope they’re not representative of too many other companies.
Then their privacy policy is invalid, and they’re committing a crime with every bit of data they collect.
To be allowed to collect user data, you need consent, and under the GDPR consent is only valid if it has been given freely, without any advantage or disadvantage resulting from giving or not giving consent (except for functionality that directly requires the consent).
Oh. I guess I’ve been doing privacy policy change dialogs wrong then 😅 I could’ve sworn lots of them wouldn’t let you continue until you accepted though.
I don’t know why people find this so hard to understand, but the entire point of the GDPR is that you cannot comply with it simply by adding more terms to your Terms of Service for people to sign away their rights without reading. That’s not how it works.
Have the various aspects of GDPR been applied/tested in court yet?
European civil law originates from Roman civil law, and is quite different from common-law systems that originate from British law. Generally the law is quite specific, and the intent is that the law will be applied as written rather than interpreted in the social and political context of the day in light of precedent, as is done in common-law systems.
I don’t know if that’s the case with the GDPR to the extent that it’s true of say, German law or French law, but if it is, it doesn’t need to be ‘tested’ in court, it is what it is.
There are a few things which GDPR leaves open to interpretation, such as:
This reads like an opinion-as-proof fluff piece.
The thesis of the article seems to be “policy can’t fix this, technology can.” Coincidentally (heh), the author is a cofounder of a cryptocurrency called “Consent Token” that allows you to use the blockchain to sell private information.
https://thenextweb.com/author/mindaugas-kiskis/ http://www.consentok.com/
I hate it when people say “look at what this person built!” or “look at what this person supports!” as though it’s proof the person’s opinion is compromised. It’s totally expected that someone who holds this opinion would follow through on it by building something around it. If you’d like to suggest they’re just saying what’s convenient for their position, you can use as much popular psychology as you want; as is common with pop-psych, you’ll find false positives everywhere.
the guy is free to build whatever he wants on the grounds of whatever he chooses to believe. the issue with this piece is that it’s presented as some form of journalism, without any disclosure, when it’s not. that’s the same as having a pro-fracking article in a national newspaper, criticising anti-fracking laws, written by a guy who owns a drilling company. can you not see the conflict of interest? his opinion is compromised, it’s biased! worse than that, this dude throws around statements w/o much backup:
I would be happy if the GDPR would at least slow down data processing without my knowledge and by parties with whom I have no relationship, but I see no sign of this happening.
i’ve seen loads of web pages now asking you to consent to things such as trackers. doesn’t that slow down data processing? does it not actually stop it?
the GDPR has not meaningfully changed the privacy status quo
how come? what would a meaningful change be in this author’s opinion, then?
There are dozens of situations when it’s actually socially undesirable to keep it private, (…)
what’s the issue here? this is just rambling at this point. does gdpr keep you from sharing your data in any way? no.
Equally questionable are formal and bureaucratic prescriptions for better data protection — more documentation, privacy impact audits, formal training, etc. Does anyone honestly believe that more paperwork will lead to more privacy? More security risks in handling of our data (say thousands of hand signed consents) are somewhat more likely, I’m afraid.
“hurr durr red tape”… this is just making stuff up… why is it questionable? why are there now more security risks? this article is total garbage.
is gdpr perfect? of course not. is gdpr solving every privacy issue? it certainly isn’t. that doesn’t invalidate it, still.
…the issue with this piece is that it’s presented as some form of journalism, without any disclosure, when it’s not.
Would you rather the article be written by someone with no practical experience in the field? This isn’t rhetorical, it’s a genuine question: do we want experienced and potentially biased people, or inexperienced people with fresh perspectives?
In any case, I don’t think experts are responsible for disclosing everything that has shaped their opinion. I don’t think any of us is. I think dealing with that reality—with the fact that every opinion belies an entire life experience—is just par for the course.
It seems like you disagree with the article’s points, which I can respect. (There are some points that I disagree with as well, and I hope you don’t imagine my argument as just an extension of the author’s.) Going after that person’s prior experience, as though it invalidates their opinion, just doesn’t make sense.
positively beautiful use of Formal Methods, but the story about XP doesn’t ring true: if someone comes to you with Java code that uses notify(), you fire them on the spot and rewrite everything they ever touched. :P
But how fast is the turnover? Was that common knowledge at the time? What else have we yet to realize?
the recommendation was intended as more of a sarcastic commentary on the pitfalls of some Java language features. whilst I appreciate the beauty of a tool to find bugs in such a rough problem domain, I think the wise architect advises to avoid such risks at a whole other level of design.
The knowledge not to use notify() seems kind of arbitrary to me—I’ve certainly never heard of it—and I’m not sure what level of design would be required to have that realization. But I think that’s an important principle in general.
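For context on why notify() is considered a footgun: it wakes only one waiter, so with multiple threads waiting on one monitor, the wakeup can land on a thread whose condition is still false and the signal is effectively lost. The same hazard exists in Python’s threading.Condition, which makes for a compact sketch (the queue and names here are invented for illustration, not from the thread above):

```python
import threading
from collections import deque

buf = deque()
cond = threading.Condition()

def put(item):
    with cond:
        buf.append(item)
        # notify() would wake only ONE waiter; if that waiter's predicate
        # is still false (or another thread drains the buffer first), the
        # signal is lost. notify_all() avoids the single-waiter hazard.
        cond.notify_all()

def take():
    with cond:
        # Always re-check the predicate in a loop: spurious wakeups and
        # stolen items mean a single `if` check is not enough.
        while not buf:
            cond.wait()
        return buf.popleft()
```

The Java advice maps directly: prefer notifyAll() with a while-loop around wait(), or better, use a higher-level structure like a blocking queue so the monitor logic is someone else’s problem.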
This shows that if you randomly got, say, 0.2, it would get pushed to near 0.6. But if you got something close to 1, it won’t go past 1. In other words, this changes the distribution from uniform to… exponential? This is something I’m still unclear on.
No, it follows a power law, most likely approximating a Pareto distribution. What about it makes the author think it’s exponential?
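For illustration (I don’t know the post’s actual transform, so the cube root here is just a stand-in): pushing uniform samples through a cube root matches the description upthread, in that 0.2 lands near 0.6 and nothing passes 1, and it yields a power-law density of 3x² on [0, 1], not an exponential one.

```python
import random

def push(u):
    # Hypothetical transform for illustration; the post's actual
    # function may differ. Cube root maps [0, 1] onto [0, 1].
    return u ** (1 / 3)

print(round(push(0.2), 3))  # 0.585 -- "near 0.6"
print(push(0.99) < 1.0)     # True -- never pushed past 1

# The density of U**(1/3) for uniform U is f(x) = 3*x**2 on [0, 1],
# so the sample mean should sit near E[U**(1/3)] = 3/4.
random.seed(0)
mean = sum(push(random.random()) for _ in range(100_000)) / 100_000
print(abs(mean - 0.75) < 0.01)  # True
```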
Author here: brain worms? Lack of training in probability and statistics? I went through a LOT of reading new-to-me things in the course of trying to figure all this out.
Oh no, I totally get that. My comment seems rather judge-y; I’m sorry if I came off that way. I admittedly learned an awful lot from reading your post, and I actually read up on the distribution you presented only after reading your article.
It appears similar to an exponential distribution at first sight, from what I recall. It’s a lot harder to tell the difference when we’re restricted to such a limited section of the distribution.
No worries! I had tried chasing down a Pareto-style function to make the correct distribution, but the relations among the parameters didn’t fit the constraints. But I was still unsure what to call the final, correct distribution, hence the question mark and caveat about my clarity :)
I only recently started using noscript. A lot of people balk at the fact that the majority of websites don’t work anymore after you install it, and the fact that you have to manually unlock specific scripts, and even think about which scripts you want to allow. It is certainly not something the everyday user wants to deal with. But the speed with which pages load, and the complete absence of all the spying and autoplay videos and the majority of images makes it really worth it.
Obviously the better solution for everyone is for web designers to get their shit together on this issue. But I am not holding my breath. For now noscript is as necessary as an ad blocker like uBlock Origin for having a positive experience of the internet.
It also teaches you who your friends are - the websites that just work as though the plugin were not there are the good ones. The ones that tell you you need to enable javascript and load all scripts directly from their own domain are also resolved with a single click. The ones that are a major hassle to use with noscript running are the ones you should probably be staying away from anyway.
Not just technical people either: an old friend used to train laypeople to use it on NoScript forums. He said there was a small, but steady, stream of them concerned about privacy and/or speeding up machines.
Does it warn you if a script’s contents have changed?
If so, it might mitigate this huge security hole hidden in plain sight a little… but I’m not so sure…
No, you block on a domain basis, so that security hole is not even needed to get around it. It’s not going to save you from the government, just bloated websites and advertising.
So… do you enable whole CDNs?
Anyway, if I understand what you mean, once JS execution is enabled for a host, the server could serve a malicious script without it being noticed, so that bug could be exploited not only by the government but also by several private companies…
most people don’t audit the javascript code before they enable it anyway, so detecting changes wouldn’t solve the core issue
True. Indeed I said that it could mitigate that vulnerability.
As @enkiv2 said in the lobsters’ thread about it, the only reliable solution is to remove scripting languages from browsers. A pretty expensive security fix, I know, but the bug is very dangerous.
expensive in that it would save huge amounts of energy in the form of compute cycles that aren’t spent attacking the user
Apple chooses to not support WebM (eight years and counting!), and you choose to use Safari. Better alternatives exist.
It is and always has been the responsibility of the developer to test cross-browser compatibility. Users have reasons for their choices, and if a webpage is broken they will simply leave. Fortunately, this is one of the easy cases: the HTML standard has a built-in way to ensure the presence of a compatible video format.
I might be in the minority here, but I don’t mind whiteboard puzzles. I’m not saying they’re effective as a hiring tool (I’m also not not saying that), but I’m always surprised when people say they stress specifically about them over other interview methods. I’m genuinely curious what exactly people dislike (other than ‘it’s not representative of the job’, which I agree with). Is it the stress and time pressure? Or the reliance on past knowledge? Would it be possible to construct a good whiteboard interview for you, or is the format itself distasteful?
I think for the majority it’s because the internet has become an extension of their working memory: much of their knowledge lives online, and without it they flounder, simply because they’ve never needed to commit the details to memory, only where to find them when needed.
The best analogy I can come up with is mental arithmetic: it used to be that you could take someone to the whiteboard, ask them to work out a long division or a complex multiplication, and most would be able to do so with little to no stress. With the prevalence of calculators, nobody bothers to remember how to do long division or break down a difficult multiplication, because they know how to use a calculator that does it quicker (we are animals of the path of least resistance, after all).
A white board interview where you’re describing abstract concepts would probably solve a lot of the worries, because that tends to be what most people remember, with the details filled in by a few searches of the documentation.
A white board interview where you’re describing abstract concepts would probably solve a lot of the worries, because that tends to be what most people remember, with the details filled in by a few searches of the documentation.
This makes sense to me – I feel like if you handed someone a marker and said “explain {something from their resume} to me,” a whiteboard interview would be a lot less intimidating.
I’d certainly expect people to be able to do a long multiplication on a whiteboard, though. It’s pretty standard stuff. If they couldn’t, I’d hope it was due to a mind blank under stress and not because they literally don’t understand how multiplying numbers works.
And I think that if you can’t do basic programming without the internet you’ll struggle to be productive. That’s not to say that you should just know everything, but far too many people I’ve seen can’t do any little basic bit of programming without googling the most basic things. I like that programming competitions, if nothing else, at least force you to learn to write the basic ‘glue’ code quickly without having to look up e.g. how to print something to two decimal places or how to read a float from standard input. Basic stuff you should just know. They also are an okay litmus test. I’ve never met anyone that did well in programming contests that was a bad programmer. But I’ve definitely met good programmers that didn’t do programming contests. It has a high false negative rate and a very low false positive rate, I expect.
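To make the “glue” concrete, this is the sort of trivial contest boilerplate I mean, which stalls people who’ve never had to write it without a search engine:

```python
def format_two_dp(x: float) -> str:
    # Print a float to exactly two decimal places.
    return f"{x:.2f}"

def read_float(line: str) -> float:
    # Parse a float from a line of input (e.g. from input()).
    return float(line.strip())

print(format_two_dp(read_float(" 3.14159 ")))  # 3.14
```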
I personally haven’t had to do long multiplication on paper in well over a decade, and while I can describe three different methods in the abstract, I wouldn’t be able to do them without looking them up, simply because I have forgotten the details over years of reaching for a calculator instead.
The same can be said for some with programming: maybe they use a framework that provides verbose abstractions but no longer remember how to do such things (e.g. sessions, HTTP, file handling) on “bare metal” without first looking it up.
Having a decent memory isn’t a bad thing but as Einstein supposedly once said “Never memorize something that you can look up.”
Einstein probably never said that. However, he did say (in response to not knowing the speed of sound when asked on the Edison Test; New York Times, 18 May 1921):
“[I do not] carry such information in my mind since it is readily available in books. … The value of a college education is not the learning of many facts but the training of the mind to think.”
Basically, don’t labor over learning dumb facts. I also think, though, that it’s wise, if you find a free moment, to understand how and why things work.
It’s also likely that Einstein never said “Never memorize something you can look up”, so it’s not really fair to call it a quote. If you had to create a pithy new phrase from his quote it might be something like “Facts are no substitute for reason and understanding.”
That is why I wrote “Einstein supposedly once said”; I wasn’t trying to pass off something that may not be true as fact. However, a small amount of searching shows it to be a popular phrase attributed to Einstein, and that is why I quoted it.
The New York Times reference you provided was much better not only in being easily traced back to the man himself but also because it better conveyed the point I was trying to make. Thank you for sharing it :)
Yes, sorry for being pedantic. I just wanted to make sure I wasn’t being misunderstood there. I had assumed that you said it in good faith. Thank you for being patient.
The ‘details’ are essentially that 2134 * 34 = 2134 * 30 + 2134 * 4. I really don’t think you’d have any trouble if you thought about it for a few seconds.
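Spelled out, it really is just place value:

```python
# Long multiplication is splitting one factor by place value.
a, b = 2134, 34
tens = a * 30  # 64020
ones = a * 4   # 8536
assert a * b == tens + ones == 72556
print(tens + ones)  # 72556
```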
The problem I’ve seen is that people don’t even think about something. They either know it and do it or they convince themselves they don’t know it and don’t try to work it out. That, the predilection to giving up, that is the danger sign, not that they don’t know it.
The same can be said for some with programming: maybe they use a framework that provides verbose abstractions but no longer remember how to do such things (e.g. sessions, HTTP, file handling) on “bare metal” without first looking it up.
I mean if you’re doing high level stuff you shouldn’t expect to know the details of writing low level code. If people are testing your algorithm knowledge at a Javascript webapp gig, then it’s just bad interviewing. But people testing your algorithm knowledge at a routing algorithm gig seems pretty fair.
Having a decent memory isn’t a bad thing but as Einstein supposedly once said “Never memorize something that you can look up.”
But if you asked Einstein some basic physics he wouldn’t look it up, he’d know it. Because having fully internalised the basic principles of physics is just part and parcel of understanding physics at the level he understood physics. Like, if you asked a mathematician the epsilon-delta definition of a limit, they’d be able to explain what it is, and what it meant, even if perhaps they couldn’t write it down formally left to right in one go if they hadn’t recently taught a course on analysis. Not because they’re geniuses that remember everything but because it’s just the most basic fundamental knowledge that everything else is based on.
I think we are both on the same page, possibly I am bad at explaining what I am trying to say.
people testing your algorithm knowledge at a routing algorithm gig seems pretty fair
Agreed. The problem I think many see with whiteboard interviews is that they are largely used to test knowledge that isn’t pertinent to the job at hand: for example, testing your algorithm knowledge at a job where you’re largely expected to write high-level web apps in which everything is wrapped in a closed-source abstraction you’re going to need to learn on the job anyway.
Exactly. They used to test how an individual candidate goes about problem solving, but nowadays they’re just a litmus test to see if you’ve memorized all the graph algorithms you might be asked to regurgitate.
But if you asked Einstein some basic physics he wouldn’t look it up, he’d know it.
All of my physics professors and PIs looked basic stuff up all the time. There’s a reason we were allowed to take two pages of equations into exams.
I’m trying to convince my workplace to get rid of whiteboarding interviews, does anyone know if there are resources for ideas of alternatives? Anyone have a creative non-whiteboarding interview they’d like to share?
The best that I’ve found is to just ask them to explain some tech that’s listed on their resume. You’ll really quickly be able to tell if its something they understand or not.
My team does basic networking related stuff and my first question for anyone that lists experience with network protocols is to ask them to explain the difference between TCP and UDP. A surprising number of people really flounder on that despite listing 5+ years of implementing network protocols.
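The answer I’m fishing for boils down to connection-oriented, reliable byte streams versus connectionless datagrams, a distinction that shows up directly in the socket API (a Python sketch for illustration):

```python
import socket

# TCP: connection-oriented, reliable, ordered byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP: connectionless datagrams -- no delivery or ordering guarantees,
# but no handshake and lower per-packet overhead.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp.type == socket.SOCK_STREAM)  # True
print(udp.type == socket.SOCK_DGRAM)   # True

tcp.close()
udp.close()
```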
This is what I’ve done too. With every developer I’ve ever interviewed, we kept the conversation to 30 minutes to an hour and very conversational. A few questions about, say, Angular if it was listed on their resume, but not questions without any context. It would usually be like: “so what projects are you working on right now? Oh, interesting, how are you solving state management?” etc. Then I could relate that to a project we currently had at work, so they could get a sense of what the work would be like. The rapid-fire technical questions, I’ve found, are quite off-putting to candidates (and off-putting to me when I’ve been asked them like that).
As a side note, any company that interviews me in this conversational style (a conversation like a real human being) automatically gets pushed to the top of my list.
Seconded. Soft interviewing can go a long way. “You put Ada and Assembler on your CV? Oh, you just read about Ada once and you can’t remember which architecture you wrote your assembly for?”
I often flunk questions like that on things I know. This is because a question like that comes without context. If such a problem comes up when I’m building something, I have the context and then I remember.
I don’t think any networking specialist would not know the difference between TCP and UDP, though. That sounds like a pretty clear case of someone embellishing their CV.
So if you can’t whiteboard and you can’t talk about your experience, what options are left? Crystal ball?
I like work examples, open ended coding challenges: Here’s a problem, work on it when you like, how you like, come back in a week and lets discuss the solution. We’ve crafted the problem to match our domain of work.
In an interview I also look out for signs of hostility on the part of the interviewer, suggesting that may not be a good place for me to work.
A sample of actual work expected of the prospective employee is fair. There are pros and cons to whether it should be given ahead of time or only shown there, but I lean towards giving it out in advance of the interview and having the candidate talk it through.
Note that this can be a hard sell, as it requires humility on the part of the individual and the institution. If your organization supports an e-commerce platform, you probably don’t get to quiz people on quicksort’s worst-case algorithmic complexity.
I certainly don’t have code just sitting around I could call a sample of actual work. The software I write for myself isn’t written in the way I’d write software for someone else. I write software for myself in Haskell using twenty type system extensions or in Python using a single generator comprehension. It’s for fun. The code I’ve written for work is the intellectual property of my previous employers, and I couldn’t present a sample even if I had access to it, which I don’t.
Yup, the code I write for myself is either 1) something quick and ugly just to solve a problem 2) me learning a new language or API. The latter is usually a bunch of basic exercises. Neither really show my skills in a meaningful way. Maybe I shouldn’t just throw things on GitHub for the hell of it.
Oh, I think you misinterpreted me. I want the employer to give the employee some sample work to do ahead of time, and then talk to it in person.
As you said, unfortunately, the portfolio approach is more difficult for many people.
I write software for myself in Haskell using twenty type system extensions or in Python using a single generator comprehension. It’s for fun.
Perhaps in the future we will see people taking on side projects specifically in order to get the attention of prospective employers.
I recently went through a week of interviewing as the conclusion of the Triplebyte process, and I ended up enjoying 3 of the 4 interviews. There were going to be 5, but there was a scheduling issue on the company’s part. The one I didn’t enjoy involved white board coding. I’ll tell you about the other three.
To put all of this into perspective, I’m a junior engineer with no experience outside of internships, which I imagine puts me into the “relatively easy to interview” bucket, but maybe that’s just my perception.
The first one actually involved no coding whatsoever, which surprised me going in. Of the three technical interviews, two were systems design questions. Structured well, I enjoy these types of questions. Start with the high level description of what’s to be accomplished, come up with the initial design as if there was no load or tricky features to worry about, then add stresses to the problem. Higher volume. New features. New requirements. Dive into the parts that you understand well, talk about how you’d find the right answer for areas you don’t understand as deeply. The other question was a coding design question, centered around data structures and algorithms you’d use to implement a complex, non-distributed application.
The other two companies each had a design question as well, but each also included two coding questions. One company had a laptop prepared for me to use to code up a solution to the problem, and the other had me bring my own computer to solve the questions. In each case, the problem was solvable in an hour, including tests, but getting it to the point of being fully production ready wasn’t feasible, so there was room to stretch.
By the time I got to the fourth company and actually had to write code with a marker on a whiteboard I was shocked at how uncomfortable it felt in comparison. One of my interviews was pretty hostile, which didn’t help at all, but still, there are many, far better alternatives.
I’m a little surprised that they asked you systems design questions, since I’ve been generally advised not to do that to people with little experience. But it sounds like you enjoyed those?
There are extensive resources to help with the evangelism side of things.
I’ve been conducting whiteboard data structures & algorithms interviews for a few months now. (We also give them laptops if they want to type on them.) I think they’re somewhat fair, given the alternatives. I don’t ask trivia questions. My questions are pretty simple, with a progression of several possible solutions, each more efficient than the last. They’re the sort of problems I have solved in the past. (No dynamic programming stuff or whatever.) I do expect a candidate to know their language and basic CS concepts really well. They should be able to give solid reasoning for big-O. I do think it’s necessary to know the basics of big-O analysis. Bonus points for knowing standard library APIs well, but no big deal otherwise.
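To make “a progression of solutions” concrete, here’s the shape of question I mean (a generic example, not one of my actual questions): does any pair in a list sum to a target? The obvious answer is quadratic; a hash set makes it linear at the cost of linear space, and a candidate should be able to explain both big-Os.

```python
def has_pair_quadratic(nums, target):
    # Check every pair: O(n^2) time, O(1) extra space.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_linear(nums, target):
    # One pass, remembering values seen so far: O(n) time, O(n) space.
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False

print(has_pair_linear([3, 8, 5, 1], 9))   # True (8 + 1)
print(has_pair_linear([3, 8, 5, 1], 10))  # False
```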
I don’t read resumes before the interviews. I don’t want to add extra bias. I ask a quick question about their technical experience at the beginning. This question is just to gauge general communication skills. At the end of the interview, I write up a summary of what happened. Those are sent to a committee which makes the decision.
This approach is standardized, which means individual biases are minimized.
Requiring open source contributions or take-home work unduly burdens those with families or other care-taking work. I prefer not looking at resumes so I’m not biased toward good universities or companies.
They should be able to give solid reasoning for big-O. I do think it’s necessary to know the basics of big-O analysis.
I’d argue that if someone is familiar with big-O but not with profiling, their knowledge is as good as nothing. Premature optimization is the bane of software development.
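And the profiler is right there in the standard library, so there’s little excuse. A minimal sketch (the quadratic function below is a contrived example):

```python
import cProfile
import pstats

def naive_join(n):
    # Deliberately quadratic: repeated string concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return s

# Measure where the time actually goes before arguing about big-O.
profiler = cProfile.Profile()
result = profiler.runcall(naive_join, 10_000)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(3)

print(len(result))  # 38890 characters for 0..9999 concatenated
```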
I ask a quick question about their technical experience at the beginning. […] I prefer not looking at resumes…
Couldn’t you glean the same knowledge by looking at their résumé?
I’m with you on big-O. It’s only a starting point for real performance engineering. I try to get to the nitty-gritty details of how their solution would work in a real system after they finish the main problem. Hopefully, they have some real-world knowledge about how their solution works.
Well, the way we do it, the question at the beginning is just for me to gauge communication skills and get them comfortable. By the time they’ve gotten to me, the resume has been vetted at a basic level. Later on, someone will take my feedback and the resume into account to make the final decision. I think the anti-bias reason for not reading resumes is more important than the slight changes I can make to my algorithmic questions to account for their experience.
Most people have a fixed supply of discipline for tasks which are not intrinsically fun but still important. This is the kind of discipline that is needed to create secure code. Robert M. Pirsig wrote great words on this in Zen and the Art of Motorcycle Maintenance, where he describes ‘Gumption’. Recommended.
However - we can deplete our supply of gumption easily enough by fighting our language. We run out of discipline that way. The result may be code that is memory safe but executes plugins from a globally writable directory.
In this sense, the Rust memory security features may not be a net positive for writing safe code.
I don’t follow this reasoning. To me it’s completely backwards.
First, I’ll admit there may be something to this for people new to the language, if learning new things costs gumption. However, I don’t think it’s fair to compare how someone feels programming in a language they’ve been using for decades vs. a language they’re still learning. (Sorry ahu, I’m not aware of what your experience is with Rust, and since you didn’t state it I’m assuming it’s not much.) Otherwise we’d all still be using assembly because ‘C is too complicated and forces you to think about too many irrelevant details’.
I’ve been programming Rust for years and doing it professionally for > 1 year now. I don’t fight the borrow checker any more, instead I know that it will catch me when I do something I shouldn’t have. That means that I don’t have to even think about it any more. To some extent the patterns that I’ve learned mean it’s just right the first time, but if I introduce something that isn’t safe I immediately get an error to pop up in the terminal next to me. Only then do I have to think about how what I did could be violating some rule elsewhere. Then as soon as I fix it, I can promptly forget about it.
In the end my brain is limited, the more I can offload to the computer the more I have space for the interesting things. Or as the author puts it: I have to spend less gumption on writing Rust because the compiler brings more of its own gumption.
Right. We develop the resilience to fight through those “not intrinsically fun” problem-solving tasks. In fact, it’s something every single one of us did when we first began learning how to code.
(Grumpy C programmer who’s dabbled with Rust)
I think you are both right. :) Most of the “fighting the borrow checker” happens when one doesn’t have much experience with Rust. As one gets better, I can only assume that one gets better at coming up with designs which lend themselves to fewer borrow checker fights.
I think this is very similar to the pain experienced by a C programmer (really any procedural language) trying something radically different - e.g., common lisp. From my own experience, writing lisp was a pain at first because I just tried to write the code just as I would in C - because that’s just what I’m used to. With time and practice, one gets better at the new language and dealing with its quirks. (For the record, I lost interest in common lisp not long after my first aha moment.)
So, I guess one can define two points in time for a Rust developer: the novice and the expert. For the novice, I think ahu’s statement is more or less correct. For the expert, you are correct. On that note, the steepness of the learning curve for Rust is interesting - if it is too steep, not enough novices will have the gumption to take the hit in productivity to practice enough to become experts.
It’s interesting because the author is not thoughtlessly in favour of GitHub, but I think that his rebuttals are incomplete and ultimately his point is incorrect.
Code changes are proposed by making another Github-hosted project (a “fork”), modifying a remote branch, and using the GUI to open a pull request from your branch to the original.
That is a bit of a simplification, and completely ignores the fact that GitHub has an API. So does GitLab and most other similar offerings. You can work with GitHub, use all of its features, without ever having to open a browser. Ok, maybe once, to create an OAuth token.
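For instance, here’s roughly what opening a pull request looks like against the REST API, using only the standard library (the v3 pulls endpoint is real; the repo names and token below are placeholders, and the request is only built, never sent):

```python
import json
import urllib.request

def build_pr_request(owner, repo, token, title, head, base):
    # POST /repos/{owner}/{repo}/pulls -- GitHub REST API v3.
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls"
    payload = json.dumps({"title": title, "head": head, "base": base})
    return urllib.request.Request(
        url,
        data=payload.encode(),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github.v3+json",
        },
        method="POST",
    )

req = build_pr_request("octocat", "hello-world", "PLACEHOLDER",
                       "Fix the frobnicator", "octocat:patch-1", "main")
print(req.full_url)  # https://api.github.com/repos/octocat/hello-world/pulls
# urllib.request.urlopen(req) would actually submit it.
```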
Whether using the web UI or the API, one is still performing the quoted steps (which notably never mention the browser).
A certain level of discussion is useful, but once it splits up into longer sub-threads, it becomes way too easy to lose sight of the whole picture.
That’s typically the result of a poor email client. We’ve had threaded discussions since at least the early 90s, and we’ve learned a lot about how to present them well. Tools such as gnus do a very good job with this — the rest of the world shouldn’t be held back because some people use poor tools.
Another nice effect is that other people can carry the patch to the finish line if the original author stops caring or being involved.
On GitHub, if the original proposer goes MIA, anyone can take the pull request, update it, and push it forward. Just like on a mailing list. The difference is that this’ll start a new pull request, which is not unreasonable: a lot of time can pass between the original request, and someone else taking up the mantle. In that case, it can be a good idea to start a new thread, instead of resurrecting an ancient one.
What he sees as a benefit I see as a detriment: the new PR will lose the history of the old PR. Again, I think this is a tooling issue: with good tools resurrecting an ancient thread is just no big deal. Why use poor tools?
While web apps deliver a centrally-controlled user interface, native applications allow each person to customize their own experience.
GitHub has an API. There are plenty of IDE integrations. You can customize your user-experience just as much as with an email-driven workflow. You are not limited to the GitHub UI.
This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.
Granted, it is not an RFC, and you are at the mercy of GitHub to continue providing it. But then, you are often at the mercy of your email provider too.
There’s a huge difference between being able to easily download all of one’s email (e.g. with OfflineIMAP) and only being at the mercy of one’s email provider going down, and being at the mercy of GitHub preserving its API for all time. The number of tools which exist for handling offline mail archives is huge; the number of tools for dealing with offline GitHub project archives is … small. Indeed, until today I’d have expected it to be almost zero.
Github can legally delete projects or users with or without cause.
Whoever is hosting your mailing list archives, or your mail, can do the same. It’s not unheard of.
But of course my own maildir on my own machine will remain.
I use & like GitHub, and I don’t think email is perfect. I think we’re quite aways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.
We’ve spent about half a century refining the email interface: it’s pretty good.
We’ve spent about half a century refining the email interface. Very good clients exist…. but most people still use GMail regardless.
That’s typically the result of a poor email client. We’ve had threaded discussions since at least the early 90s, and we’ve learned a lot about how to present them well. Tools such as gnus do a very good job with this — the rest of the world shouldn’t be held back because some people use poor tools.
I have never seen an email client that presented threaded discussions well. Even if such a client exists, mailing-list discussions are always a mess of incomplete quoting. And how could they not be, when the whole mailing list model is: denormalise and flatten all your structured data into a stream of 7-bit ASCII text, send a copy to every subscriber, and then hope that they’re able to successfully guess what the original structured data was.
You could maybe make a case for using an NNTP newsgroup for project discussion, but trying to squeeze it through email is inherently always going to be a lossy process. The rest of the world shouldn’t be held back because some people use poor tools indeed - that means not insisting that all code discussion has to happen via flat streams of 7-bit ASCII just because some people’s tools can’t handle anything more structured.
I agree with there being value in multipolar standards and decentralization. Between a structured but centralised API and an unstructured one with a broader ecosystem, well, there are arguments for both sides. But insisting that everything should be done via email is not the way forward; rather, we should argue for an open standard that can accommodate PRs in a structured form that would preserve the very real advantages people get from GitHub. (Perhaps something XMPP-based? Perhaps just a question of asking GitHub to submit their existing API to some kind of standards-track process).
You could maybe make a case for using an NNTP newsgroup for project discussion
While I love NNTP, the data format is identical to email, so if you think a newsgroup can have nice threads, then so could a mailing list. They’re just different network distribution protocols for the same data format.
accommodate PRs in a structured form
Even if you discount structured text, emails and newsgroup posts can contain content in any MIME type, so a structured PR format could ride along with descriptive text in an email just fine.
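A sketch with the Python standard library (the JSON payload shape is invented purely for illustration):

```python
import json
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "[PATCH 1/1] frobnicate the widget"
msg.set_content("Human-readable description of the change goes here.")

# A structured PR payload riding along as a MIME attachment.
pr = {"base": "main", "head": "feature/frobnicate", "title": "Frobnicate"}
msg.add_attachment(json.dumps(pr).encode(),
                   maintype="application", subtype="json",
                   filename="pull-request.json")

print(msg.is_multipart())  # True: text part plus structured part
```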
Even if you discount structured text, emails and newsgroup posts can contain content in any MIME type, so a structured PR format could ride along with descriptive text in an email just fine.
Sure, but I’d expect the people who complain about github would also complain about the use of MIME email.
You could maybe make a case for using an NNTP newsgroup for project discussion, but trying to squeeze it through email is inherently always going to be a lossy process.
Not really — Gnus has offered a newsgroup-reader interface to email for decades, and Gmane has offered actual NNTP newsgroups for mailing lists for 16 years.
But insisting that everything should be done via email is not the way forward; rather, we should argue for an open standard that can accommodate PRs in a structured form that would preserve the very real advantages people get from GitHub. (Perhaps something XMPP-based? Perhaps just a question of asking GitHub to submit their existing API to some kind of standards-track process).
I’m not insisting on email! It’s decent but not great. What I would insist on, were I insisting on anything, is real decentralisation: issues should be inside the repo itself, and PRs should be in some sort of pararepo structure, so that nothing more than a file server (whether HTTP or otherwise) is required.
…the new PR will lose the history of the old PR.
Why not just link to it?
This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.
That strikes me as disingenuous as well. Email is older. Of course it has more clients, with varying degrees of maturity & ease of use. That has no bearing on whether the GitHub API or an email-based workflow is a better solution. Your point is taken; the GitHub API is not yet “Just Add Water!”-tier. But the clients and maturity will come in time, as they do with all well-used interfaces.
Github can legally delete projects or users with or without cause.
Whoever is hosting your mailing list archives, or your mail, can do the same. It’s not unheard of.
But of course my own maildir on my own machine will remain.
Meanwhile, the local copy of my git repo will remain.
I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.
I’m fairly certain that it’s a website of some sort—that is, if you intend on using a definition of “perfect” that scales to those with preferences & levels of experience that differ from yours.
Meanwhile, the local copy of my git repo will remain.
Which contains no issues, no discussion, no PRs — just the code.
I’d like to see a standard for including all that inside or around a repo, somehow (PRs can’t really live in a repo, but maybe they can live in some sort of meta- or para-repo).
I’m fairly certain that it’s a website of some sort—that is, if you intend on using a definition of “perfect” that scales to those with preferences & levels of experience that differ from yours.
Why on earth would I use someone else’s definition? I’m arguing for my position, not someone else’s. And I choose to categorically reject any solution which relies on a) a single proprietary server and/or b) a JavaScript-laden website.
Meanwhile, the local copy of my git repo will remain.
Which contains no issues, no discussion, no PRs — just the code.
Doesn’t that strike you as a shortcoming of Git, rather than GitHub? I think this may be what you are getting at.
Why on earth would I use someone else’s definition?
Because there are other software developers, too.
I choose to categorically reject any solution which relies on a) a single proprietary server and/or b) a JavaScript-laden website.
I never said anything about reliance. That being said, I think the availability of a good, idiomatic web interface is a must nowadays where ease-of-use is concerned. If you don’t agree with that, then you can’t possibly understand why GitHub is so popular.
(author here)
Whether using the web UI or the API, one is still performing the quoted steps
Indeed, but the difference between using the UI and the API is that the latter is much easier to build tooling around. For example, to start contributing to a random GitHub repo, I need to do the following steps:
It is a heavily customised workflow, something that suits me. Yet, it still uses GitHub under the hood, and I’m not limited to what the web UI has to offer. The API can be built upon, enriched, or customised to fit one’s desires and habits. What I need to do to get the same steps done differs drastically. Yes, my tooling does the same stuff under the hood - but that’s the point, it hides those details from me!
(which notably never mention the browser).
Near the end of the article I replied to:
“Tools can work together, rather than having a GUI locked in the browser.”
From this, I concluded that the article was written with the GitHub web UI in mind. Because the API composes very well with other tools, and you are not locked into a browser.
That’s typically the result of a poor email client.
I used Gnus in the past, it’s a great client. But my issue with long threads and lots of branches is not that displaying them is an issue - it isn’t. Modern clients can do an amazing job making sense of them. My problem is the cognitive load of having to keep at least some of it in mind. Tools can help with that, but I can only scale so far. There are people smarter than I who can deal with these threads, I prefer not to.
What he sees as a benefit I see as a detriment: the new PR will lose the history of the old PR. Again, I think this is a tooling issue: with good tools resurrecting an ancient thread is just no big deal. Why use poor tools?
The new PR can still reference the old PR, which is not unlike having an In-Reply-To header that points to a message not in one’s archive. It’s possible to build tooling on top of this that would go and fetch the original PR for context.
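As a sketch of what such tooling could look like: extract the first “#N” reference from a new PR’s description, then fetch the original via GitHub’s pulls endpoint. The PR body and the OWNER/REPO path below are invented placeholders; only the API route itself is real.

```shell
# Hypothetical sketch: recover the PR referenced in a new PR's description.
body="Supersedes #42; resubmitting with review feedback applied."

# Pull out the first "#N" style reference from the (made-up) body text
num=$(printf '%s\n' "$body" | grep -oE '#[0-9]+' | head -n1 | tr -d '#')

echo "original PR: $num"
# Fetching it for context would then be a single API call
# (OWNER/REPO are placeholders):
#   curl -s "https://api.github.com/repos/OWNER/REPO/pulls/$num"
```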
Mind you, I can imagine a few ways the GitHub workflow could be improved, that would make this kind of thing easier, and less likely to lose history. I’d still rather have an API than e-mail, though.
This is somewhat disingenuous: while GitHub does indeed have an API, the number & maturity of GitHub API clients is rather less than the number & maturity of SMTP+IMAP clients. We’ve spent about half a century refining the email interface: it’s pretty good.
Refining? You mean that most MUAs look just like they did thirty years ago? There have been many quality-of-life improvements, sure. Lots of work to make them play better with other tools (this is mostly true for tty clients and Emacs MUAs, as far as I’ve seen). But one of the most widespread MUAs (Gmail) is absolutely terrible when it comes to working with code and mailing lists. Same goes for Outlook. The email interface story is quite sad :/
There’s a huge difference between being able to easily download all of one’s email (e.g. with OfflineIMAP) and only being at the mercy of one’s email provider going down, and being at the mercy of GitHub preserving its API for all time.
Yeah, there are more options to back up your mail. It has been around longer too, so that’s to be expected. Email is also a larger market. But there are a reasonable number of tools to help backing up one’s GitHub too. And one always makes backups anyway, just in case, right?
So yeah, there is a difference. But both are doable right now, with tools that already exist, and as such, I don’t see the need for such a fuss about it.
I use &amp; like GitHub, and I don’t think email is perfect. I think we’re quite a ways from the perfect git UI — but I’m fairly certain that it’s not a centralised website.
I don’t think GitHub is anywhere near perfect, especially not when we consider that it is proprietary software. It being centralised does have advantages however (discoverability, not needing to register/subscribe/whatever to N+1 places, and so on).
None of these tactics remove or prevent vulnerabilities, and would therefore be rejected by a “defense’s job is to make sure there are no vulnerabilities for the attackers to find” approach. However, these are all incredibly valuable activities for security teams, and lower the expected value of trying to attack a system.
I’m not convinced. “Perfect” is an overstatement for the sake of simplicity, but effective security measures need to be exponentially more costly to bypass than they are to implement, because attackers have much greater resources than defenders. IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers. Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.
You would end up with a system more secure against attackers with fewer resources. For example, you can make your system secure against all common methods used by script kiddies, but what happens when a state-level actor is attacking your system? In this case, and as your threats get more advanced, I agree with the article. At higher levels of threat it becomes a problem of economics.
You would end up with a system more secure against attackers with fewer resources. For example, you can make your system secure against all common methods used by script kiddies, but what happens when a state-level actor is attacking your system?
I think just the opposite actually. The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals. Whereas following the truism would lead you to make changes that would protect against all attackers.
The attitude proposed by the article would lead you to implement things that defended against common methods used by script kiddies (i.e. cheap attacks) but did nothing against funded corporate rivals.
That’s not what I understand from this article. The attitude proposed by the article should, IMO, lead you to think of the threat model of the system you’re trying to protect.
If it’s your friend’s blog, you (probably) shouldn’t have to consider state actors. If it’s a stock exchange, you should. If you’re Facebook or Amazon, not the same as lobsters or your sister’s bike repair shop. If you’re a politically exposed individual, exploiting your home automation raspberry pi might be worth more than exploiting the same system belonging to someone who is not a public figure at all.
Besides that, I disagree that all examples are too costly to be worth it. Hashing passwords is always worth it, or at least I can’t think of a case where it wouldn’t be.
To summarize with an analogy, I don’t take the exact same care of my bag when my laptop (or other valuables) are in it than when it only contains my water bottle, and Edward Snowden should care more about the software he uses than the ones I use.
Overall I really like the way of thinking presented by the author!
Whereas following the truism would lead you to make changes that would protect against all attackers.
Or mess with your sense of priority such that all vulnerabilities are equally important so “let’s just go for the easier mitigations”, rather than evaluating based on the cost of the attack itself.
If you’re thinking about “mitigations” you’re already in the wrong mentality, the one the truism exists to protect you against.
It’s important to acknowledge that it’s somewhat counterintuitive to think about the actual parties attempting to crack your defenses. It requires more mental work, in a world where people assume they can get all the info they need just by reading their codebase & judging it on its own merits. It requires methodical, needs-based analysis.
The present mentality is not a pernicious truism; it’s an attractive fallacy.
IME all of the examples this page gives are too costly to be worth it, almost all of the time: privilege separation, asset depreciation, exploit mitigation and detection are all very costly to implement while representing only modest barriers to skilled attackers.
How do you figure it’s too costly? If anything, all these things are getting much easier, as they become primitives of deployment environments and frameworks. Additionally, there are services out there that scan for dependency vulnerabilities if you give them a Gemfile, or access to your repo.
Limited resources would be better expended on “perfect” security measures; someone who expended the same amount of effort while following the truism and focusing on eliminating vulnerabilities entirely would end up with a more secure system.
Perfect it all you want. The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up.) If anything, what’s costly is keeping on your employees not to take shortcuts, and to stay alert to missing access cards, rogue network devices in the office, badge surfing, and assets left lying around.
If anything, all these things are getting much easier, as they become primitives of deployment environments and frameworks.
I’d frame that as: deployment environments are increasingly set up so that everyone pays the costs.
The weakest link is still an employee who is hung over and provides their credentials to a fake GMail login page. (Or the equivalent fuck up)
So fix that, with a real security measure like hardware tokens. Thinking in terms of costs to attackers doesn’t make that any easier; indeed it would make you more likely to ignore this kind of attack on the grounds that fake domains are costly.
Customers do not care what deals Intel/AMD have made with whom.
The second a competitor comes along that doesn’t have this nonsense built-in, companies that sell computers will begin to source their CPUs from them. It has already begun with RISC-V, some ARM CPUs, POWER9, etc.
Computer security has never been more important than it is now, and its importance is only increasing. Security experts, IT experts, their friends, and their families, etc., will vote with their money.
Meanwhile, these companies will be dealing with lawsuits for intentionally selling customers faulty, backdoored malware. Have fun with that.
I certainly hope you’re correct that the market will demand better. I think it’s possible, but I’m not as optimistic as you. Getting end users to care about security, even when the lack of it directly harms them, isn’t easy.
Getting end users to care about security, even when the lack of it directly harms them, isn’t easy.
I am optimistic because it’s simply the reality. The “users don’t care about privacy/security” refrain is just one of those things some people like to say. It’s total nonsense.
People use insecure, poorly designed technologies only when well designed, secure versions of those technologies do not exist. It’s just a market cycle. Poorly designed tech where engineers cut corners comes out first, and then the properly designed versions come out later. The instant they go on the market everyone abandons what’s broken and upgrades to the newer and better tech. This has always been the case.
Engineers cutting corners is one thing. Entire industries conspiring to preclude any alternatives is another beast altogether.
The second a competitor comes along that doesn’t have this nonsense built-in, companies that sell computers will begin to source their CPUs from them.
There’s been competitors to Intel without the nonsense built in, with simpler architectures, faster at one point, and so on. Many went bankrupt, the products were withdrawn, or the company got acquired. So, your claim has to be assumed false by default given the market history is exactly the opposite. The combo of monopolistic tactics by Intel/IBM/Microsoft and the lock-in to x86 software made that happen. On the x86 side, it was mostly the same, with AMD happening because IBM forced it to happen. There’s one surviving third party that focused on lowest energy usage. The Centaur chips were sold by VIA, but VIA was losing boatloads of money. So, you don’t have a lasting success story that managed a non-coerced license of x86 for high-performance chips.
The good news is the prevalence of doing everything in the browser already got hardware diversity in via netbooks and tablets. The new architecture having excellent browser and codec support might be enough to get some of that market. Throw in sync with all devices plus online, private backups. There’s some potential. I’ve also been toying with ideas about cloud servers (esp for web stuff), network appliances, kiosks, and so on. Whereas, taking down Intel/AMD will require x86 support for legacy, x86-optimized apps. Intel publicly threatened to use patent suits on any company that does that.
“People use insecure, poorly designed technologies only when well designed, secure versions of those technologies do not exist.”
That’s nonsense. There are easy-to-use, private solutions in a number of areas. Let’s just say search, chat, email, and backups. The market at large uses the insecure offerings, even those with harder UI. That’s because they thought they were a good deal for every reason but the one you gave: truly private or secure. They don’t care about that. I think the easiest counterpoint is that the top providers of email and ways to hang out with friends are surveillance companies. They know it, private IMs or group messages aren’t so hard, and they still use the surveillance platforms anyway. That’s hundreds of millions to billions of people. Where’s your market data backing your point that a similarly-sized number of people cared enough to switch to DuckDuckGo, Signal, or SpiderOak? I’m cherry-picking things advertised as private that are easy to use with media coverage.
taking down Intel/AMD will require x86 support for legacy, x86-optimized apps. Intel publicly threatened to use patent suits on any company that does that
Microsoft implemented their version of qemu-user into Windows on ARM. Is Intel going to sue them? :)
I doubt it. We’ll see how far that goes given the performance difference. Also, we’d go from one sue-happy ISA monopoly to another. At least the SoCs themselves are more diverse.
re: performance — it’s not intended to be the primary way to run apps, it’s more of a transitional step, like Rosetta was for Apple. The plan is probably something like:
Now, that’s a great idea! There’s still going to be a legacy base whose stuff won’t port. I think the larger part of the market is using stuff that’s still getting updated. So, that strategy could gradually pull them off x86 if ARM chips get good enough for those users. I’m thinking more like cost-effective with nifty features their SoC’s support more than performance. The multimedia and sensor stuff on a SnapDragon is an example.
There’s been competitors to Intel without the nonsense built in, with simpler architectures, faster at one point, and so on. Many went bankrupt, the products were withdrawn, or the company got acquired. So, your claim has to be assumed false by default given the market history is exactly the opposite.
I’m pretty sure you’re making an elaborate strawman argument to my point. The Intel ME thing is only recently in the news relative to the timeline you’re considering. It was not a factor back then. Now it is.
Where’s your market data backing your point that a similarly-sized number of people cared enough to switch to DuckDuckGo, Signal, or SpiderOak? I’m cherry-picking things advertised as private that are easy to use with media coverage.
DuckDuckGo’s search results were (and are) historically poor compared to Google’s. So it’s not “well designed”. I chose my words and criteria carefully.
As far as Signal goes, it has a very large and growing userbase, but it, too, doesn’t offer the same (or better) level of quality that the popular messaging services offer. It’s pretty darn buggy. Nevertheless, I use it almost exclusively with all of my friends. These technologies don’t go from zero to out-competing incumbents in a day. It obviously takes some amount of time. Facebook is losing users (to a service that advertises privacy as its #1 feature, albeit misleadingly), Signal and Telegram are gaining users.
As for SpiderOak, I can’t comment on that. Apple’s Time Machine backups are a better idea than cloud backups, no matter who your provider is, and I’m guessing Apple’s Time Machine has more users than whatever it is you have in mind.
The Intel ME thing is only recently in the news relative to the timeline you’re considering.
People have been talking about Intel and DRM for a long time. I have a comment in this thread with links. That the markets ignored the risks to keep buying Intel isn’t a strawman so much as what they actually did. You were talking about the hypothetical stuff that might cut into whatever their current, public revenues are. It hasn’t panned out yet if you’re talking secure processors or something like that.
Re: the competition having issues. Most of the big tech companies had products with issues when they started. Some of the biggest were trash-talked as garbage by many developing for them. They still got tons of users because those wanted or had to use what they offered. It seems like anywhere from most to all the companies focused on privacy or security that actually works vs checklist BS have failed to accomplish anything. You can get rich via sales or VC off a shitty, non-security app many times over before one secure app will get high uptake. Must be some underlying principle or principles at work, yeah?
It’s why these days I tell people wanting private/secure apps to hide or embed that in a product sold on every other kind of benefit that people actually jump on. Enough people doing that might give us what we need. It will probably take a lot of time and cooperation, too.
People have been talking about Intel and DRM for a long time. I have a comment in this thread with links. That the markets ignored the risks to keep buying Intel isn’t a strawman so much as what they actually did.
This is not true. I repeat myself: the problems of Intel ME were unheard of and out of the public’s consciousness until recently, and even now many are still unaware of its existence. This is fact.
Likewise it is fact that Facebook is losing users to more private platforms, again proving the point that users do care about privacy and security.
One need only look at the security of computers over time to see that it’s constantly improving, just as it is with every other technology, be it cars, trains, spaceships, airplanes, whatever.
You’re right that there’s increased awareness. You’re right that this could affect sales. The thing you’re leaving off is that anyone who cared about privacy could’ve just googled the AMT thing on their box to find out it was a backdoor. They didn’t care enough to do that. Whereas privacy-conscious laypeople were already avoiding that shit years ago. They used to show up in forums talking about it, running SandboxIE, using NoScript for surfing, and so on.
My argument is most didn’t care, don’t, and won’t. If they buy a private-ish alternative, it will be for other reasons like apps, features, luxury, etc. Apple iPhone being pushed for privacy is an example. Apple succeeded for every other reason. That’s just after the fact that might bump sales up a bit.
One cannot care about something that one is unaware of. So increased awareness = more caring, because of course users care about privacy and security. Many of them just aren’t computer experts like you and I who have the time to sift through all of the b.s. “privacy” marketing claims that companies like Facebook make.
So, again, users do care very much, and once they’re made aware they’ve been lied to, precisely because they care they will ditch these companies.
Many of them just aren’t computer experts like you and I
That’s right. So, the ones that cared asked us on security forums what we thought. They’d get a basic assessment of overall risks, what defense to use, which products were better, and so on. Again, I’m talking about what privacy-conscious laypeople were doing for the past ten years or so I’ve been on security forums. They also usually found it hard to get friends and family using the better stuff. It didn’t have feature X, shiny emoji Y, and so on. They didn’t care. Same with literally over 1,000 people I’ve tried to market that stuff to face-to-face.
“So increased awareness = more caring”
This can happen. I’m even hoping for it. The general public does respond to what’s in the media, esp scary stuff. The thing is, it’s not really an informed response so much as a reaction. They jump at buzzwords and false assurances en masse. So, what privacy-pushing suppliers need to do is keep good products ready for those events. Then, when it makes waves, they have media campaigns targeted at those people. The bullshitters already do this. The honest suppliers will only get so many amidst the competition. The numbers can gradually go up with each media wave while they do more positive type of marketing on a regular basis advertising features, privacy, and good service. Sales from that can drive new products. Even better if they’re nonprofits or public benefit corporations to reduce odds they themselves become the villains down the line.
DuckDuckGo’s search results were (and are) historically poor compared to Google’s. So it’s not “well designed”. I chose my words and criteria carefully.
How about StartPage? Exact same results as Google. Where are all their users?
Consumers won’t care about additional choice if everything they care about is packaged into what they already use.
That’s a good point, I think many people just don’t know it exists. Those who are aware do use it over Google.
I would be curious to know, for example, why Apple doesn’t make it or DDG the default search in Safari. Perhaps some form of collusion going on there.
Apple gets paid for the search engine default. I don’t know if I’d call that ‘collusion’. I think it’s bad – it’s one of many small profit seeking behaviours that Apple engages in to the detriment of their users and their platform as a whole (see also: the 30% cut they take on the App Store).
For the default on iOS, I can give you three billion reasons they’d keep Google. ;)
I think Apple foresees that there would be user backlash. At this point, Google is expected as a default, and providing anything to the contrary is considered presumptive. That would be a huge change; perhaps one day it will be in the forefront of Apple’s attention to take on that change, but for now, we will have to wait, and perhaps do the best we can do as individuals.
I doubt that’s the reason. Apple’s users would praise Apple for the switch. It must be something else, and I’m guessing it’s more along the lines of what @jfb said.
I’ll note one other thing, and that’s that even if users are aware of StartPage, that’s often not enough for them to use it. It isn’t clear at all how to change the default search engine in Safari, especially on iOS, and iOS doesn’t even allow StartPage in Safari AFAIK. So companies like Apple deliberately put roadblocks to adoption.
This doesn’t mean users don’t care. It means big profit-seeking companies don’t care about their users, and this creates an opening for competitors to do a better job. This is why browsers like Brave are a thing and are taking users away from Safari, IE, Firefox, etc.
Apple’s users would praise Apple for the switch.
See the headphone jack debacle. Everything is an inconvenience to somebody; you don’t know how many until you ask.
…companies like Apple deliberately put roadblocks to adoption.
Where would you place that feature in order to guarantee discoverability? Do you think that change would make for a good user experience?
Anecdote: I personally use Safari because it uses the least battery life on my computer, responsiveness stays the same up to a given number of tabs, and the user interface is understandable and consistent; as opposed to Chromium derivatives, which are huge CPU/battery hogs, tend to lag a bit at times, and don’t really mesh well with the rest of macOS (my use of which I could defend similarly). I admire the steps taken by other options such as Brave or qutebrowser, but they forego some basic QoL considerations that are important to users like me. I think that is Apple’s primary consideration.
Where would you place that feature in order to guarantee discoverability?
In the search bar when you search.
Do you think that change would make for a good user experience?
Yes.
I agree that that’s probably the best way to do it. That being said, if I were Apple, I’d be trying to cut down on the number of flow-interrupting pop-ups that occur on performing a simple action such as a web search.
Who said anything about a popup? Even Firefox (on Desktop) does this pretty well today. No popups.
Oh, a dropdown menu? Now I understand what you were saying. That’s fair. I think Safari used to have that, actually. They’ve really been on a minimalist crusade, haven’t they?
Git via email sounds like hell to me. I’ve tried to find some articles that evangelize the practice of doing software development tasks through email, but to no avail. What is the allure of approaches like this? What does it add to just using git by itself?
I tried to collect the pros and cons in this article: https://begriffs.com/posts/2018-06-05-mailing-list-vs-github.html
I also spoke about this at length in a previous article:
While my general experience with git email is bad (it’s annoying to set up, especially in older versions, and I don’t like its interface too much), my experience interacting with projects that do this was generally good. You send a patch, you get review, you send a new, self-contained patch, attached to the same thread… etc, in parallel to the rest of the project discussion. It’s a different flavour, but with a project that is used to the flow, it can really be quite pleasing.
What does it add to just using git by itself?
I think the selling point is precisely that it doesn’t add anything else. Creating a PR involves more steps and context changes than git format-patch and git send-email.
I have little experience using the mailing list flow, but when I had to do so (because the project required it) I found it very easy to use and better for code reviews.
Creating a PR involves more steps and context changes than
git format-patch and git send-email.
I’m not sure I understand. What steps are removed that would otherwise be required?
Simply, it’s “create a fork and push your changes to it”. But also consider that it’s…
In this workflow, you switched between your terminal, browser, mail client, browser, terminal, and browser before the pull request was sent.
With git send-email, it’s literally just git send-email HEAD^ to send the last commit, then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README. You can skip the second step next time by doing git config sendemail.to someone@example.org. Bonus: no proprietary software involved in the send-email workflow.
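For the curious, that flow can be tried entirely locally. The sketch below fabricates a throwaway repo (the identity and messages are made up) and produces the mbox-format patch file that git send-email would put on the wire:

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q demo-repo && cd demo-repo
git config user.email dev@example.org   # throwaway identity for the demo
git config user.name "Demo Dev"

echo hello > file.txt
git add file.txt && git commit -qm "add file"
echo world >> file.txt
git commit -qam "append world"

# Produce the mbox-format patch for the last commit; this file is
# what git send-email HEAD^ would mail to the list.
git format-patch -1 HEAD
ls 0001-*.patch
```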
Also, GitHub pull requests involve more git machinery than is necessary. Most people, when they open a PR, choose to make a feature branch in their fork from which to send the PR, rather than sending from master. The PR exposes the sender’s local branching choices unnecessarily. Then, for each PR, GitHub creates more refs on the remote, so you end up with lots of stuff lying around (try running git ls-remote | grep pull).
Compare that with the idea that if you want to send a code change, just mail the project a description (diff) of the change. We all must be slightly brainwashed when that doesn’t seem like the most obvious thing to do.
In fact the sender wouldn’t even have to use git at all, they could download a recent code tarball (no need to clone the whole project history), make changes and run the diff command… Might not be a great way to do things for ongoing contributions, but works for a quick fix.
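A sketch of that no-git path (all file and directory names invented): keep a pristine copy, edit your copy, mail the diff, and the maintainer applies it with patch:

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Contributor side: a pristine unpacked tarball and an edited copy
mkdir project.orig project
echo "helo wrld" > project.orig/readme
cp project.orig/readme project/
echo "hello world" > project/readme        # the fix

# diff exits 1 when the trees differ, so tolerate that
diff -ru project.orig project > fix.diff || true

# Maintainer side: apply the mailed diff against their own tree
mkdir upstream && cp project.orig/readme upstream/
cd upstream && patch -p1 < ../fix.diff
cat readme
```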
Of course, opening the PR is just the start of future stymied GitHub interactions.
In my case I tend to also perform steps:
- man git-remote, to see how to point my local clone (with the changes) to my GitHub fork
- git remote commands
- man git-push, to see how to send my changes to the fork rather than the original repo

To send email, you also have to have an email address. If we are doing a fair comparison, that should be noted as well. Granted, it is much more likely that someone has an email address than a GitHub account, but the wonderful thing about both is that you only have to set them up once. So for this reason, it would be a bit more fair if the list above started from step four.
Now, if I have GitHub integration in my IDE (which is not an unreasonable thing to assume), then I do not need to leave the IDE at all, and I can fork, push, and open a PR (case in point, Emacs and Magithub can do this). I can also do all of this on GitHub, never leaving my browser. I don’t have to figure out where to send an email, because it automatically sends the PR to the repo I forked from. I don’t even need to open a shell and deal with the commandline. I can do everything with shortcuts and a little bit of mousing around, in both the IDE and the browser case.
Even as someone who is familiar with the commandline, and is sufficiently savvy with e-mail (at one point I was subscribed to debian-bugs-dist AND LKML, among other things, and had no problem filtering out the few bits I needed), I’d rather work without having to send patches, using Magit + magithub instead. It’s better integrated, hides uninteresting details from me, so I can get done with my work faster. It works out of the box. git send-email does not, it requires a whole lot of setup per repo.
Furthermore, with e-mail, you have to handle replies, have a firm grip on your inbox. That’s an art on its own. No such issue with GitHub.
With this in mind, the remaining benefit of git send-email is that it does not involve a proprietary platform. For a whole lot of people, that’s not an interesting property.
To send email, you also have to have an email address. If we are doing a fair comparison, that should be noted as well.
I did note this:
then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README
Magit + magithub […] works out of the box
Only if you have a GitHub account and authorize it. Which is a similar amount of setup, if not more, compared to setting up git send-email with your SMTP info.
git send-email does not, it requires a whole lot of setup per repo
You only have to put your SMTP creds in once. Then all you have to do per-repo is decide where to send the email to. How is this more work than making a GitHub fork? All of this works without installing extra software to boot.
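The one-time versus per-repo split looks roughly like this; a sketch with placeholder server, user, and list addresses (HOME is sandboxed so the demo doesn’t touch a real ~/.gitconfig):

```shell
# One-time SMTP setup, then a single per-repo decision.
# All addresses and hostnames below are placeholders.
set -e
HOME=$(mktemp -d)
export HOME
git config --global sendemail.smtpServer smtp.example.com
git config --global sendemail.smtpUser alice@example.com
git config --global sendemail.smtpEncryption tls
# Per repo, record where patches go, once:
git init -q "$HOME/repo"
cd "$HOME/repo"
git config sendemail.to project-devel@example.org
# From then on, sending the latest commit as a patch is just:
#   git send-email -1
git config sendemail.to
```

With sendemail.to recorded, every later contribution to that repo is a single command.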
then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README
With GitHub, I do not need to obtain any email address, or dig it out of a README. It sets things up automatically for me so I can just open a PR, and have everything filled out.
Only if you have a GitHub account and authorize it. Which is a similar amount of setup, if not more, compared to setting up git send-email with your SMTP info.
Let’s compare:
e-mail:
magithub:
The first two steps are pretty much the same, both are easily assisted by my IDE. The difference starts from step 3, because my IDE can’t figure out for me where to send the email. That’s a manual step. I can create a helper that makes it easier for me to do step 4 once I have the address, but that’s about it. For the magithub case, step 3 is SPC g h f; step 4 SPC g s p u RET; step 5 SPC g h p, then edit the cover letter, and , c (or C-c) to finish it up and send it. You can use whatever shortcuts you set up, these are mine. Nothing to figure out manually, all automated. All I have to do is invoke a shortcut, edit the cover letter (the PR’s body), and I’m done.
I can even automate the clone + fork part, and combine push changes + open PR, so it becomes:
Can’t do such automation with e-mailed patches.
I’m not counting GitHub account authorization, because that’s about the same complexity as configuring auth for my SMTP, and both have to be done only once. I’m also not counting registering a GitHub account, because that only needs to be done once, and you can use it forever, for any GitHub-hosted repo, and takes about a minute, a minuscule amount compared to doing actual development.
Again, the main difference is that for the e-mail workflow, I have to figure out the e-mail address, a process that’s longer than forking the repo and pushing my changes, and a process that can’t be automated to the point of requiring a single shortcut.
Then all you have to do per-repo is decide where to send the email to. How is this more work than making a GitHub fork?
Creating a GitHub fork is literally one shortcut, or one click in the browser. If you can’t see how that is considerably easier than digging out email addresses from free-form text, then I have nothing more to say.
And we haven’t talked about receiving comments on the email yet, or accepting patches. Oh boy.
With GitHub, I do not need to obtain any email address, or dig it out of a README. It sets things up automatically for me so I can just open a PR, and have everything filled out.
You already had to read the README to figure out how to compile it, and check if there was a style guide, and review guidelines for contribution…
Let’s compare
Note that your magithub process is the same number of steps but none of them have “so I won’t have to figure it out ever again”, which on the email process actually eliminates two of your steps.
Your magithub workflow looks much more complicated, and you could use keybindings to plug into send-email as well.
Can’t do such automation with e-mailed patches
You can do this and even more!
You already had to read the README to figure out how to compile it, and check if there was a style guide, and review guidelines for contribution…
I might have read the README, or skimmed it. But not to figure out how to compile - most languages have a reasonably standardised way of doing things. If a particular project does not follow that, I will most likely just stop caring unless I really, really need to compile it for one reason or another. For style, I hope they have tooling to enforce it, or at least check it, so I don’t have to read long documents and keep it in my head. I have more important things to store there than things that should be automated.
I would likely read the contributing guidelines, but I won’t memorize it, and I certainly won’t try to remember an e-mail address. I might remember where to find it, but it will still be a manual process. Not a terribly long process, but noticeably longer than not having to do it at all.
Note that your magithub process is the same number of steps but none of them have “so I won’t have to figure it out ever again”, which on the email process actually eliminates two of your steps.
Because there’s nothing for me to figure out at all, ever (apart from what repo to clone & fork, but that’s a common step between the two workflows).
Your magithub workflow looks much more complicated
How is it more complicated? Clone, work, fork, push, open PR (or clone+fork, work, push+PR), of which all but “work” is heavily assisted. None of it requires me to look anything up, anywhere.
and you could use keybindings to plug into send-email as well.
And I do, when I’m dealing with projects that use an e-mail workflow. It’s not about shortcuts, but what can be automated, what the IDE can do instead of requiring me to do it.
You can do this and even more!
You can, if you can extract the address to send patches to automatically. You can build something that does that, but then the automation is tied to that platform, just like the PRs are tied to GitHub/GitLab/whatever.
And again, this is just about sending a patch/opening a PR. There’s so much more PRs provide than that. Some of that, you can do with e-mail. Most of it, you can build on top of e-mail. But once you build something on top of e-mail, you no longer have an e-mail workflow, you have a different platform with which you can interact via e-mail. Think issues, labels for them, reviews (with approvals, rejection, etc - all of which must be discoverable by programs reliably), new commits, rebases and whatnot… yeah, you can build all of this on top of e-mail, and provide a web UI or an API or tools or whatever to present the current state (or any prior state). But then you built a platform which requires special tooling to use to its full potential, and you’re not much better than GitHub. You might build free software, but then there’s GitLab, Gitea, Gogs and a whole lot of others which do many of these things already, and are almost as easy to use as GitHub.
I’ve worked with patches sent via e-mail quite a bit in the past. One can make it work, but it requires a lot of careful thought and setup to make it convenient. I’ll give a few examples!
With GitHub and the like, it is reasonably easy to have an overview of open pull requests, without subscribing to a mailing list, or browsing archives. An open PR list is much easier to glance at and have a rough idea than a mailing list. PRs can have labels to help in figuring out what part of the repo they touch, or what state they are in. They can have CI states attached. At a glance, you get a whole lot of information. With a mailing list, you don’t have that. You can build something on top of e-mail that gives you a similar overview, but then you are not using e-mail only, and will need special tooling to process the information further (eg, to limit open PRs to those that need a review, for example).
With GitHub and the like, you can subscribe to issues and pull requests, and you’ll get notifications about those and those alone. With a mailing list, you rarely have that option, and must do filtering on your own, and hope that there’s a reasonable convention that allows you to do so reliably.
There’s a whole lot of other things that these tools provide over plain patches over email. Like I said before, most - if not all - of that can be built on top of e-mail, but to achieve the same level of convenience, you will end up with an API that isn’t e-mail. And then you have Yet Another Platform.
How is it more complicated? Clone, work, fork, push, open PR (or clone+fork, work, push+PR)
Because the work for the send-email approach is: clone, work, git send-email. This is fewer steps and is therefore less complicated. Not to mention that as projects become more decentralized as they move away from GitHub, the registration process doesn’t go away and starts recurring for every new forge or instance of a forge you work with.
But once you build something on top of e-mail, you no longer have an e-mail workflow, you have a different platform with which you can interact via e-mail. Think issues, labels for them, reviews (with approvals, rejection, etc - all of which must be discoverable by programs reliably), new commits, rebases and whatnot…
Yes, that’s what I’m advocating for.
But then you built a platform which requires special tooling to use to its full potential, and you’re not much better than GitHub
No, I’m proposing all of this can be done with a very similar UX on the web and be driven by email underneath.
PRs can have labels to help in figuring out what part of the repo they touch, or what state they are in. They can have CI states attached.
So let’s add that to mailing list software. I explicitly acknowledge the shortcomings of mail today and posit that we should invest in these areas rather than rebuilding from scratch without an email-based foundation. But none of the problems you bring up are problems that can’t be solved with email. They’re just problems which haven’t been solved with emails. Problems I am solving with emails. Read my article!
but then you are not using e-mail only, and will need special tooling to process the information further (eg, to limit open PRs to those that need a review, for example).
So what? Why is this even a little bit of a problem? What the hell?
With GitHub and the like, you can subscribe to issues and pull requests, and you’ll get notifcations about those and those alone.
You can’t subscribe to just issues or just pull requests; you have to subscribe to both, plus new releases. Mailing lists are more flexible in this respect. There are often separate thing-announce, thing-discuss (or thing-users), and thing-dev mailing lists which you can subscribe to separately depending on what you want to hear about.
Like I said before, most - if not all - of that can be built on top of e-mail, but to achieve the same level of convenience, you will end up with an API that isn’t e-mail.
No, you won’t. That’s simply not how this works.
Look, we’re just not on the same wavelength here. I’m not going to continue diving into this ditch of meaningless argument. You keep using whatever you’re comfortable with.
Your magithub workflow looks much more complicated, and you could use keybindings to plug into send-email as well.
I just remembered a good illustration that might explain my stance a bit better. My wife, a garden engineer, was able to contribute to a few projects during Hacktoberfest (three years in a row now), with only a browser and GitHub for Windows at hand. She couldn’t have done it via e-mail, because the only way she can use her email is via her smart phone, or GMail’s web interface. She knows nothing else, and is not interested in learning anything else either, because these perfectly suit her needs. Yet, she was able to discover projects (by looking at what I contributed to, or have starred), search for TODOs or look at existing issues, fork a repo, write some documentation, and submit a PR. She could have done it all from a web browser, but I set up GitHub for Windows for her - in hindsight, I should have let her just use the browser. We’ll do that this year.
She doesn’t know how to use the command-line, has no desire, and no need to learn it. Her email handling is… something that makes me want to scream (no filters, no labels, no folders - one big, unorganized inbox), but it suits her, and as such, she has no desire to change it in any way.
She doesn’t know Emacs, or any IDE for that matter, and has no real need for them, either.
Yet, her contributions were well received, they were useful, and some are still in place today, unchanged. Why? Because GitHub made it easy for newcomers to contribute. They made it so that contributing does not require them to use anything else but GitHub. This is a pretty strong selling point for many people, that using GitHub (and similar solutions) does not affect any other tool or service they use. It’s distinct, and separate.
Not all projects have work for unskilled contributors. Why should we cater to them (who on the whole do <1% of the work) at the expense of the skilled contributors? Particularly the most senior contributors, who in practice do 90% of the work. We don’t build houses with toy hammers so that your grandma can contribute.
I’m not saying we shouldn’t make tools which accommodate everyone. I’m saying we should make tools that accommodate skilled engineers and build simpler tools on top of that. Thus, the skilled engineers are not slowed down and the greener contributors can still get work done. Then, there’s a path for newer users to become more exposed to more powerful tools and more smoothly become senior contributors themselves.
You need to get this point down if you want me to keep entertaining a discussion with you: you can build the same easy-to-use UX and drive it with email.
I’m not saying we shouldn’t make tools which accommodate everyone. I’m saying we should make tools that accommodate skilled engineers and build simpler tools on top of that.
I was under the impression that git + GitHub are exactly these. Git and git send-email for those who prefer that style, GitHub for those who prefer that. The skilled engineers can use the powerful tools they have, while those with a different skillset can use GitHub. All you need is willingness to work with both.
you can build the same easy-to-use UX and drive it with email.
I’m not questioning you can build something very similar, but as long as e-mail is the only driving power behind it, there will be plenty of people who will turn to some other tool. Because filtering email is something you and I can easily do, but many can’t, or aren’t willing to. Not when there are alternatives that don’t require them to do extra work.
Mind you, I consider myself a skilled engineer, and I mainly use GitHub/GitLab APIs, because I don’t have to filter e-mail, nor parse the info in them, the API serves me data I can use in an easier manner. From an integrator point of view, this is golden. If, say, an Emacs integration starts with “Set up your email so mail with these properties are routed here”, that’s not a good user experience. And no, I don’t want to use my MUA to work with git, because magit is a much better, much more powerful tool for that, and I value my productivity.
I’m not questioning you can build something very similar, but as long as e-mail is the only driving power behind it, there will be plenty of people who will turn to some other tool.
I’m pretty sure the whole point would be that the “shiny UI” tool would not expose email to the user at all – so the “plenty of people” wouldn’t leave because they wouldn’t know the difference.
So…. pretty much GitHub/GitLab/Gitea 2.0, but with the added ability to open PRs by email (to cater to that workflow), and a much less reliable foundation?
Sure. What could possibly go wrong.
I don’t think you can count signing up for GitHub if you’re not counting signing up for email.
If you’re using hub, it’s just hub pull-request. No context switching
If you’re counting signing up for email you have to count that for GitHub, too, since they require an email address to sign up with.
Using GitHub requires pushing to a different repository and then opening the PR on the GitHub interface, which is a context change. Running git send-email would be equivalent to sending the PR.
git-send-email is only one step, akin to opening the PR, no need to push to a remote repository. And from the comfort of your development environment. (Emacs in my case)
As a European, I don’t quite get it: Americans seem to be concerned with net neutrality, meanwhile not protesting huge monopolistic corporations (the gatekeepers) removing some controversial users on their own judgement and with no way to appeal. Are individuals excluded from net neutrality?
I’m not very familiar with the legal details, but I assume the distinction is general access to the internet being considered a utility, while access to platforms being considered something like a privilege. E.g. roads shouldn’t discriminate based on destination, but that doesn’t mean the destination has to let you in.
edit: As to why Americans don’t seem as concerned with it (which I realize I didn’t address): I think most people see it as a place, like a restaurant. You can be kicked out if you are violating policies or otherwise disrupting their business, which can include making other patrons uncomfortable. Of course there are limits, which is why we have anti-discrimination laws.
Well, they’re also private, for-profit companies that legally own and sell the lines. So, there’s another political angle where people might vote against the regulations under theory that government shouldn’t dictate how you run your business or use your property, esp if it cost you money. Under theory of benefiting owners and shareholders, these companies are legal entities specifically created to generate as much profit from those lines as possible. If you don’t like it, build and sell your own lines. That’s what they’d say.
They don’t realize how hard it is to deploy an ISP on a shoe-string budget to areas where existing players already paid off the expensive part of the investment, can undercut you into bankruptcy, and (per people claiming to be ISP founders on Hacker News) will even cut competitors’ lines “accidentally” so their own customers leave them. In the last case, it’s hard to file and win a lawsuit if you just lost all your revenue and opponent has over a billion in the bank. They all just quit.
Do you have the source for these claims regarding ISPs?
Which ones?
One of them described a situation with a contracted construction crew where the guy doing the digging didn’t speak English well. They were supposedly digging for the incumbent but dug through his line. He said he pointed out that it was clearly marked with paint or something. The operator claimed he thought that meant there wasn’t a line there.
That’s a crew that does stuff in that area for a living not knowing what a line mark means. So, he figured they did it on purpose. He folded since he couldn’t afford to sue them. Another mentioned them unplugging their lines in exchanges or something that made their service appear unreliable. Like the rest, they’d have to spend money they didn’t have on lawyers who’d have to prove (a) it happened and/or (b) it was intentional.
The landmark case in the United States is throttling of Netflix by Comcast. Essentially, Comcast held Netflix customers hostage until Netflix paid (which they did).
It’s important to understand that many providers (Comcast, AT&T) also own the channels (NBC, CNN, respectively). They have an interest in charging less for their own and their partners’ content, and more for their competitors’ content, while colluding to raise prices across the board (which they have done in the past with television and telephone service).
Collectively, they all have an interest in preventing new entrants to the market. The fear is that big players (Google, Amazon) will be able to negotiate deals (though they’d probably prefer not to), and new or free technologies (like PeerTube) will get choked out.
Net neutrality is somewhere where the American attitude towards corporations being able to do whatever to their customers conflicts with the American attitude that new companies and services must be able to compete in the marketplace.
You’re right to observe that individuals don’t really enter into it, except that lots of companies are pushing media campaigns to sway public opinion towards their own interests. You’re seeing those media campaigns leaking out.
Switching to the individual perspective.
I just don’t want to pay more for the same service. In living memory Americans have seen their gigantic monopolistic telecommunications company get broken up, and seen prices for services drop 100-fold, more or less as a direct consequence of that action.
As other posts have noted, the ISP situation in the US is already pretty dire unless you’re a business. Internet providers charge whatever they can get away with and have done an efficient job of ensuring customers don’t have alternatives. Telephone service got regulated, but internet service did not.
Re-reading your post after diving on this one… We’re not really concerned about the same gatekeepers. I don’t think any American would be overly upset to see players like Amazon, Facebook, Google, Twitter, and Netflix go away and I wouldn’t be surprised to see one or more of those guys implode as long as they don’t get access to too much of the infrastructure.
Right-leaning US Citizen here. I’ll attempt to answer this as best as I can.
Net neutrality is being pushed by the media because it “fights discrimination”, and they blame the “fascist, nazi right” for its repeal (and they’re correct, except for the “fascist, nazi” bit). But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
I can’t speak to why open-source advocates are also pushing for net neutrality, because (in my opinion) the government shouldn’t be involved in how much internet costs. I do remember this article was moderately interesting, saying that the majority of root DNS servers are run by US companies. But, that doesn’t really faze me. As soon as people start censoring, they get backlash whether the media covers it or not.
Side note, the reason you don’t see the protests against the “gatekeepers” is that most of the mainstream media isn’t accurately covering the reaction of the people to the censorship. I bet you didn’t know that InfoWars was the #1 news app with 5 stars on the Apple app store within a couple of weeks of them getting banned from Facebook, etc. I don’t really have any opinion about Alex Jones (lots of people on the right don’t agree with him), but you can bet I downloaded his app when I found out he got banned.
P.S. I assumed that InfoWars was what you were referring to when you said “removing some controversial users” P.P.S. I just checked the app store again, and it’s down to #20 in news, but still has 5 stars.
I think this is too optimistic. I live in Chicago, the third biggest city in the country and arguably the tech hub of the midwest. In my building I get to choose between AT&T and Comcast. I’m considered lucky: most of my friends in the city get one option, period. If their ISP starts doing anything shady they don’t have an option to switch, because there’s nobody they can switch to.
It’s interesting to contrast this to New Zealand, where I live in a town of 50,000 people and have at least 5 ISPs I can choose from. I currently pay $100 NZ a month for an unlimited gigabit fibre connection, and can hit ~600 mbit from my laptop on a speed test. The NZ government has intervened heavily in the market, effectively forcing the former monopolist (Telecom) to split into separate infrastructure (Chorus) and services (Telecom) companies, and spending a lot of taxpayer money to roll out a nationwide fibre network. The ISPs compete on the infrastructure owned by Chorus. There isn’t drastic competition on prices: most plans are within $10-15 of each other, on a per month basis, but since fibre rolled out plans seem to have come down from around $135 per month to now around $100.
I was lucky to have decent internet through a local ISP when I lived in one of Oakland’s handful of apartment buildings, but most people wouldn’t have had that option. I think the ISP picture is a lot better in NZ. Also, net neutrality is a non-issue, as far as I know. We have it, no-one seems to be trying to take it away.
I’m always irritated that there are policies decried in the United States as “impossible” when there are demonstrable implementations of it elsewhere.
I can see it being argued that the United States’s way is better or something, but there are these hyperbolic attacks on universal health care, net neutrality, workers’ rights, secure elections, etc that imply that they are simply impossible to implement when there are literally dozens of counterexamples…
At the risk of getting far too far off topic.
One of the members of the board at AT&T was the CEO of an insurance company, someone sits on the boards of both Comcast/NBC and American Beverages. The head of the FCC was high up at Verizon.
These are some obvious, verifiable, connections based in personal interest. Not implying that it’s wrong or any of those individuals are doing anything which is wrong, you’ve just gotta take these ‘hyperbolic attacks’ with a grain of salt.
Interlocking Directorates
Non-mobile link
Oh yeah it’s infuriating. It helps to hit them with examples. Tell them the media doesn’t talk about them since they’re all pushing something. We all know that broad statement is true. Then, briefly tell them the problems that we’re trying to solve with some goals we’re balancing. Make sure it’s their problems and goals. Then, mention the solution that worked else where which might work here. If it might not fit everyone, point out that we can deploy it in such a way where its specifics are tailored more to each group. Even if it can’t work totally, maybe point out that it has more cost-benefit than the current situation. Emphasize that it gets us closer to the goal until someone can figure out how to close the remaining gap. Add that it might even take totally different solutions to address other issues like solving big city vs rural Internet. If it worked and has better-cost benefit, then we should totally vote for it to do better than we’re doing. Depending on audience, you can add that we can’t have (country here) doing better than us since “This is America!” to foster some competitive, patriotic spirit.
That’s what I’ve been doing as part of my research talking to people and bouncing messages off them. I’m not any good at mass marketing, outreach or anything. I’ve just found that method works really well. You can even be honest since the other side is more full of shit than us on a lot of these issues. I mean, them saying it can’t exist vs working implementations should be an advantage for us. Should. ;)
Beautifully said.
My family’s been in this country since the Mayflower. I love it dearly.
Loving something means making it better and fixing its flaws, not ignoring them.
Thanks and yes. I did think about leaving for a place maybe more like my views. That last thing you said is why I’m still here. If we fix it, America won’t be “great again:” it would be fucking awesome. If not for us, then for the young people we’re wanting to be able to experience that. That’s why I’m still here.
Only if you can’t find Austin on a map… ;)
Native Texan/Austinite here. Texas is the South, Southwest, or just Texas. All the rest of y’all are just Yankees. ;)
But if their ISP starts doing anything shady, they’ll surely get some backlash, even if they can’t switch they can complain.
They’ve been complaining for decades. Nothing happens most of the time. The ISPs have many lobbyists and lawyers to insulate them from that. The big ones are all doing the same abusive practices, too. So, you can’t switch to get away from it.
Busting up AT&T’s monopoly resulted in lower costs, better service, better speeds, etc. Net neutrality got more results. I support more regulation of these companies and/or socialized investment to replace them, like the gigabit for $350/mo in Chattanooga, TN. It’s 10Gbps now I think, but I don’t know at what price.
Actually, I go further due to their constant abuses and bribing politicians: I’m for having a court seize their assets, converting them to nonprofits, and putting new management in charge. If at all possible. It would send a message to other companies that think they can do damage to consumers and mislead regulators without consequences.
The problem is that corporate fines are generally a small percentage of profits.
https://www.theguardian.com/world/2011/apr/03/us-bank-mexico-drug-gangs https://www.huffingtonpost.com/dana-radcliffe/should-companies-obey-the-law_b_1650037.html
What incentive does the ISP have to change? Unless you can complain to some higher authority (FCC, perhaps) then there is no reason for the ISP to make any changes even with backlash. I’d be more incentivized to complain if there was at least some competition.
Nobody says this. It’s being pushed because it prevents large corporations from locking out smaller players. The Internet is a great economic equalizer: I can start a business and put a website up and I’m just as visible and accessible as Microsoft.
We don’t want Microsoft to be able to pay AT&T to slow traffic to my website but not theirs. It breaks the free market by allowing collusion that can’t be easily overcome. It’s like the telephone network; I can’t go run wires to everyone’s house, but I want my customers to be able to call me. I don’t want my competitors to pay AT&T to make it harder to call me than to call them.
That assumes people have a choice. They very often don’t. Internet service has a massively high barrier to entry, similar to a public utility. Most markets in the United States have at most two providers (both major corporations opposed to net neutrality). Very, very rarely is there a third.
More importantly, there are only five tier-1 networks in the United States. Five. It doesn’t matter how many local ISPs there are; without Net Neutrality, five corporations effectively control what can and can’t be transmitted. If those five decide something should be slowed down or forbidden, there is nothing I can do. Changing to a different provider won’t do a thing.
(And of those five, all of them donate significantly more to one major political party than the other, and the former Associate General Counsel of one of them is currently chairman of the FCC…)
Net neutrality says nothing about how much it costs. It just says you can’t charge different amounts based on content. It would be like television stations charging more money to Republican candidates to run ads than to Democratic candidates. They’re free to charge whatever they want; they’re not free to charge different people different amounts based on the content of the message.
Democracy requires communication. It does no good to say “freedom!” if the major corporations can effectively silence whoever they want. “At least it’s not the government” is not a good defense of stifling public debate.
And there’s a difference between a newspaper and a television/radio station/internet service. I can buy a printing press and make a newspaper and refuse to carry whatever I want. There are no practical limits to the number of printing presses in the country.
There is a limited electromagnetic spectrum. Not just anyone can broadcast a TV signal. There is a limit to how many cables can be run on utility poles or buried underground. Therefore, those media are required to operate more in the public trust than others. As they become more essential to a healthy democracy, that only becomes more important. It’s silly to say “you still have freedom of speech” if you’re blocked from television, radio, the Internet, and so on. Those are the public forums of our day. That a corporation is doing the blocking doesn’t make it any better than if the government were to do it.
There’s a big difference between Twitter not wanting to carry Alex Jones and net neutrality. Jones is still free to go start up a website that carries his message; without Net Neutrality, not only could he be blocked from Twitter, but the network itself could make his website inaccessible.
With the Internet itself, there is no such alternative. You can’t build your own Internet. Without mandating equal treatment of traffic, we hand the Internet over solely to the big players. Preventing monopolistic and oligarchic control of public discourse is a valid use of government power. It’s not censorship; it’s the exact opposite.
This was also brought up by @hwayne, @caleb and @friendlysock, and is not something that occurred to me. I appreciate all who are mentioning this.
Wow, I did not know that. I can see that as a legitimate reason to want net neutrality. But, I also think that they’ll piss off a lot of people if they can stream CNN but not InfoWars.
I understood it to mean that you also couldn’t charge customers differently because of who they are. Also, don’t things like Tor mitigate that?
I completely agree. But in the US we have a free market (at least, we used to) and that means that the government is supposed to stay out of it as much as possible.
I also agree. But these corporations (the tier-1 ISPs) haven’t done anything noticeable to me to limit my enjoyment of conservative content, and I’m pretty sure that they would’ve by now if they wanted to.
The reason I oppose net neutrality is that I don’t think the government should control it any more than I think AT&T and others should.
But they haven’t.
edit: how -> who
Even though I favor net neutrality, I appreciate you braving the conservative position on this here on Lobsters. I’ve listened to a lot of conservatives on this. What I found is that most had reasonable arguments but had no idea what ISPs did or are doing, or that ISPs are themselves paying Tier 1s, etc. Their media sources’ bias (all have bias) favoring ISPs for some reason didn’t tell them any of it. So, even if they’d have agreed with us (maybe, maybe not), they’d have never reached those conclusions, since they were missing crucial information to reflect on when choosing to regulate or not regulate.
An example is one telling me companies like Netflix should pay more to Comcast per GB or whatever since they used more. The guy didn’t know Comcast refuses to do that when paying Tier 1s, negotiating transit agreements instead, which work entirely differently. He didn’t know AT&T refused to give telephones or data lines to rural areas even when they were willing to pay what others did. He didn’t know they could roll out gigabit today for the same prices but intentionally kept his service slow to increase profit, knowing he couldn’t switch for speed. He wasn’t aware of most of the abuses they were committing. He still stayed with his position, since that guy in particular went heavily with his favorite media folks. However, he didn’t like any of that stuff, which his outlets never even told him about. Even if he disagrees, I think he should disagree based on an informed decision if possible, since there are plenty of smart conservatives out there who might even favor net neutrality if there’s no better alternative. I gave him a chance to do that.
So, I’m going to give you this comment by @lorddimwit quickly showing how they ignored the demand to maximize profit, this comment by @dotmacro showing some abuses they commit with their market control, and this article that gives a nice history of what the free market did with each communications medium and the damage that resulted. Also note that the Internet itself was an open, free-if-you-have-a-wire system that competed with the proprietary, charge-per-use, lock-them-in-forever-if-possible systems the private sector was offering. It smashed them so hard you might have never even heard of them, or forgotten a lot about them, depending on your age. It also democratized more goods than about anything other than maybe transportation. We should probably stick with the principles that made that happen to keep innovation rolling. Net neutrality was one of them: practiced informally at first, then put into law as the private sector got too much power and was abusing it. We should keep doing what worked instead of adopting the practices ISPs want, which didn’t work but will increase their profits at our expense for nothing in return. That is what they want: give us less, or as little improvement as possible, in every way over time while charging us more. It’s what they’re already doing.
I read the comments, and I read most of the freecodecamp article.
I like the ideal of the internet being a public utility, but I don’t really want the government to have that much control.
I think the real problem I have with government control of the internet, is that I don’t want the US to end up like china with large swaths of the internet completely blocked.
I don’t really know how to solve our current problems. But, like @jfb said elsewhere in this thread, I don’t think that net neutrality is the best possible solution.
I might recognize a name, but I probably wasn’t even around yet.
Thanks for the info, I’ll read it and possibly form a new opinion.
What obvious reasons? Because customers will switch providers if they don’t treat all traffic equally? That would require (a) users are able to tell if a provider prioritizes certain traffic, and (b) that there is a viable alternative to switch to. I have no confidence in either.
I don’t personally care if they prioritize certain websites, but I sure as hell care if they block something.
As far as I’m concerned, they can slow down Youtube by 10% for conservative channels and I wouldn’t give a damn even though I watch and enjoy some. What really bothers me is when they “erase” somebody or block people from getting to them.
well you did say they have an incentive to provide “equal service” so i guess you meant something else. net neutrality supporters like me aren’t satisfied with “nobody gets blocked,” because throttling certain addresses gives big corporations more tools to control media consumption, and throttling has similar effects to blocking in the long term. i’m quite surprised that you’d be fine with your ISP slowing down content you like by 10%… that would adversely affect its popularity compared to the competitors that your ISP deems acceptable, and certain channels would go from struggling to broke and be forced to close down.
Well, I have pretty fast internet, so 10% wouldn’t be terrible for me. However, I can see how some people would take issue with such a slowdown.
I was using a bit of an extreme example to illustrate my point. What I was trying to say was that they can’t really stop people from watching the content they want to watch.
I recall, but didn’t review, a study saying half of web site users wanted the page loaded within 2 seconds. Specific numbers aside, I’ve been reading that kind of claim from many people for a long time: a site taking too long to load, being sluggish, etc. makes them miss lots of revenue. Many will even close down. So, the provider of your favorite content being throttled for even two seconds might kill half their sales, since Internet users expect everything to work instantly. Can they operate with a 50% cut in revenue? Or maybe they’re bootstrapping up a business with a few hundred or a few grand but can’t afford to pay to avoid artificial delays. Can they even become the content provider you liked if they have to pay hundreds or thousands extra in what is just extra profit? I say extra profit since ISPs already paid for networks capable of carrying it out of your monthly fee.
yeah, the shaping of public media consumption would happen in cases where people don’t know what they want to watch or don’t find out about something that they would want to watch
anti-democratic institutions already shape media consumption and discourse to a large extent, but giving them more tools will hurt the situation. maybe it won’t affect you or me directly, but sadly we live in a society so it will come around to us in the form of changes in the world
Most customers have exceedingly limited options in their area, and they’re not going to switch houses because of their ISP. Especially in apartment complexes, you see cases where, say, Comcast has the lockdown on an entire population and there really isn’t a reasonable alternative.
In a truly free market, maybe I’d agree with you, but the regulatory environment and natural monopolistic characteristics of telecomm just don’t support the case.
That’s a witty way of putting it.
But yeah, @lorddimwit mentioned the small number of tier-1 ISPs. I didn’t realize there were so few, but I still think that net neutrality is overreaching, even if it’s less than I originally thought.
Personally, I feel that net neutrality, such as it is, would prevent certain problems that could be better addressed in other, more fundamental ways. For instance, why does the US allow the companies that own the copper to also own the ISPs?
Awkward political jabs aside, most of your statements imply that you believe customers are free to choose who they get their internet from, which is just plain incorrect. Whatever arguments you want to make against net neutrality, there is one indisputable fact that you cannot just ignore or paper over:
ISPs do not operate in a free market.
In the vast majority of the US, cable and telephone companies are granted local monopolies in the areas they operate. That is why they must be regulated. As the Mozilla blog said, they have both the incentive and means to abuse their customers and they’ve already been caught doing it on multiple occasions.
I think you’re a bit late to the party, I’ve conceded that fact already.
All of that is gibberish. Net Neutrality is being pushed because it creates a more competitive marketplace. None of it has anything to do with professional liar Alex Jones.
That’s not how markets work. And it’s not how the technology or permit process for ISPs works. There is very little competition among ISPs in the US market.
Hey, here’s a great example from HN of the crap they pull without net neutrality. They advertised “unlimited,” throttled it secretly, admitted it, and forced customers to pay extra to get actual unlimited.
@lorddimwit add this to your collection. Throttling and fake unlimited have been going on a long time, but this time they could’ve got people killed by doing it to first responders. I’d have not seen that coming; you’d think they’d avoid it just for PR reasons, or to avoid local or government regulation if nothing else.
It’s not about how much internet costs, it’s about protecting freedom of access to information, and blocking things like zero-rated traffic that encourage monopolies and discourage competition. If I pay for a certain amount of traffic, ISPs shouldn’t be allowed to turn to Google and say “want me to prioritize YouTube traffic over Netflix traffic? Pay me!”
Where on earth did you hear that? I sure hope you’re not making it up—you’ll find this site doesn’t take too kindly to that.
I might’ve been conflating two different political issues, but I have heard “fascist” and “nazi” used to describe the entire right wing.
A quick google search for “net neutrality fascism” turned this up https://motherboard.vice.com/en_us/article/kbye4z/heres-why-net-neutrality-is-essential-in-trumps-america
You assume that net neutrality is a left-wing issue, which it’s not. It actually has bipartisan support. The politicians who oppose it have very little in common, aside from receiving a large sum of donations from telecom corporations.
As far as terms like “fascist” or “Nazi” are concerned—I think they have been introduced into this debate solely to ratchet up the passions. It’s not surprising that adding these terms to a search yields results that conflate the issues.
I’ll add on your first point that conservatives who are pro-market are almost always pro-competition. They expect the market will involve competition driving what’s offered up, its cost down, and so on. Both the broadband mandate and net neutrality achieved that, with an explosion of businesses and FOSS offering about anything one can think of.
The situation still involves 1-3 companies available for most consumers that, like a cartel, work together not to compete on lowering prices, increasing service, and so on. Net neutrality reduced some of the predatory behavior this cartel market was engaging in. They still made about $25 billion in profit between just a few companies due to anti-competitive behavior. Repealing net neutrality in an anti-competitive market will have no positives for consumers but will benefit roughly three or so companies by letting them charge more for the same or less service.
Bad for conservatives’ goals of market competition and bad for conservative voters.
One part of it is that we already have net neutrality, and it’s easier to try to hang on to a regulation than to create a new one.