I don’t understand why this matters. Both Windows and Mac versions can still be downloaded from the docker website without logging in:
I found those by googling “docker for $OS”. The Mac page was the top result and the Windows page was the third.
I searched “docker for windows” and it took me to this page, which asks for a login to download. I think the big deal is how dishonest the reply from the Docker team is.
“we’ve made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.”
This is such obvious marketing BS, and it’s insulting that they think the average developer doesn’t know this is really so they can send more marketing emails, not to “improve experiences”.
In their defense, it takes money to improve the experience, and marketing yields money. So indirectly, marketing allows them to improve the experience. I entirely agree that they should just come out and say that, however.
I love this reasoning! I wonder where else they could improve.
I think funneling more docker users into Enterprise plans would be big $$$, maybe they could cap the number of images downloaded from the store for free, and then sell licences for more downloads.
By the way, this is a part of what Ubuntu motd contains now:
Check out 6 great IDEs now available on Ubuntu. There may even be something worthwhile there for those crazy EMACS fans ;)
Wouldn’t want all those crazy Stallmanites hanging around calling them on advertising non-free software, which you can get from their new package manager that caters to for-profit companies.
As a European, I don’t quite get it: Americans seem to be concerned with net neutrality, meanwhile not protesting huge monopolistic corporations (the gatekeepers) removing some controversial users on their own judgement and with no way to appeal. Are individuals excluded from net neutrality?
I’m not very familiar with the legal details, but I assume the distinction is general access to the internet being considered a utility, while access to platforms being considered something like a privilege. E.g. roads shouldn’t discriminate based on destination, but that doesn’t mean the destination has to let you in.
edit: As to why Americans don’t seem as concerned with it (which I realize I didn’t address): I think most people see it as a place, like a restaurant. You can be kicked out if you are violating policies or otherwise disrupting their business, which can include making other patrons uncomfortable. Of course there are limits, which is why we have anti-discrimination laws.
Well, they’re also private, for-profit companies that legally own and sell the lines. So, there’s another political angle where people might vote against the regulations under the theory that government shouldn’t dictate how you run your business or use your property, especially if it costs you money. Under the theory of benefiting owners and shareholders, these companies are legal entities specifically created to generate as much profit from those lines as possible. If you don’t like it, build and sell your own lines. That’s what they’d say.
They don’t realize how hard it is to deploy an ISP on a shoe-string budget to areas where existing players already paid off the expensive part of the investment, can undercut you into bankruptcy, and (per people claiming to be ISP founders on Hacker News) will even cut competitors’ lines “accidentally” so their own customers leave them. In the last case, it’s hard to file and win a lawsuit if you just lost all your revenue and opponent has over a billion in the bank. They all just quit.
…existing players … (per people claiming to be ISP founders on Hacker News) will even cut competitors’ lines “accidentally” so their own customers leave them.
One of them described a situation with a contracted construction crew whose digger didn’t speak English well. They were supposedly digging for the incumbent but dug through his line. He said he pointed out that it was clearly marked with paint or something. The operator claimed he thought that meant there wasn’t a line there.
That’s a crew that does stuff in that area for a living not knowing what a line mark means. So, he figured they did it on purpose. He folded since he couldn’t afford to sue them. Another mentioned them unplugging their lines in exchanges or something that made their service appear unreliable. Like the rest, they’d have to spend money they didn’t have on lawyers who’d have to prove (a) it happened and/or (b) it was intentional.
The landmark case in the United States is throttling of Netflix by Comcast. Essentially, Comcast held Netflix customers hostage until Netflix paid (which they did).
It’s important to understand that many providers (Comcast, AT&T) also own the channels (NBC, CNN, respectively). They have an interest in charging less for their own and their partners’ content, and more for their competitors’ content, while colluding to raise prices across the board (which they have done in the past with television and telephone service).
Collectively, they all have an interest in preventing new entrants to the market. The fear is that big players (Google, Amazon) will be able to negotiate deals (though they’d probably prefer not to), and new or free technologies (like PeerTube) will get choked out.
Net neutrality is somewhere where the American attitude towards corporations being able to do whatever to their customers conflicts with the American attitude that new companies and services must be able to compete in the marketplace.
You’re right to observe that individuals don’t really enter into it, except that lots of companies are pushing media campaigns to sway public opinion towards their own interests. You’re seeing those media campaigns leaking out.
Switching to the individual perspective.
I just don’t want to pay more for the same service. In living memory Americans have seen their gigantic monopolistic telecommunications company get broken up, and seen prices for services drop a hundredfold, more or less as a direct consequence of that action.
As other posts have noted, the ISP situation in the US is already pretty dire unless you’re a business. Internet providers charge whatever they can get away with and have done an efficient job of ensuring customers don’t have alternatives. Telephone service got regulated, but internet service did not.
Re-reading your post after diving into this one… We’re not really concerned about the same gatekeepers. I don’t think any American would be overly upset to see players like Amazon, Facebook, Google, Twitter, and Netflix go away, and I wouldn’t be surprised to see one or more of those guys implode, as long as they don’t get access to too much of the infrastructure.
Right-leaning US Citizen here. I’ll attempt to answer this as best as I can.
Net neutrality is being pushed by the media because it “fights discrimination”, and they blame the “fascist, nazi right” for its repeal (and they’re correct, except for the “fascist, nazi” bit). But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
I can’t speak to why open-source advocates are also pushing for net neutrality, because (in my opinion) the government shouldn’t be involved in how much internet costs. I do remember this article was moderately interesting, saying that the majority of root DNS servers are run by US companies. But, that doesn’t really faze me. As soon as people start censoring, they get backlash whether the media covers it or not.
Side note, the reason you don’t see the protests against the “gatekeepers” is that most of the mainstream media isn’t accurately covering the reaction of the people to the censorship. I bet you didn’t know that InfoWars was the #1 news app with 5 stars on the Apple app store within a couple of weeks of them getting banned from Facebook, etc. I don’t really have any opinion about Alex Jones (lots of people on the right don’t agree with him), but you can bet I downloaded his app when I found out he got banned.
P.S. I assumed that InfoWars was what you were referring to when you said “removing some controversial users”. P.P.S. I just checked the app store again, and it’s down to #20 in news, but still has 5 stars.
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
I think this is too optimistic. I live in Chicago, the third biggest city in the country and arguably the tech hub of the midwest. In my building I get to choose between AT&T and Comcast. I’m considered lucky: most of my friends in the city get one option, period. If their ISP starts doing anything shady they don’t have an option to switch, because there’s nobody they can switch to.
I think this is too optimistic. I live in Chicago, the third biggest city in the country and arguably the tech hub of the midwest. In my building I get to choose between AT&T and Comcast. I’m considered lucky: most of my friends in the city get one option, period. If their ISP starts doing anything shady they don’t have an option to switch, because there’s nobody they can switch to.
It’s interesting to contrast this to New Zealand, where I live in a town of 50,000 people and have at least 5 ISPs I can choose from. I currently pay $100 NZ a month for an unlimited gigabit fibre connection, and can hit ~600 mbit from my laptop on a speed test. The NZ government has intervened heavily in the market, effectively forcing the former monopolist (Telecom) to split into separate infrastructure (Chorus) and services (Telecom) companies, and spending a lot of taxpayer money to roll out a nationwide fibre network. The ISPs compete on the infrastructure owned by Chorus. There isn’t drastic competition on prices: most plans are within $10-15 of each other, on a per month basis, but since fibre rolled out plans seem to have come down from around $135 per month to now around $100.
I was lucky to have decent internet through a local ISP when I lived in one of Oakland’s handful of apartment buildings, but most people wouldn’t have had that option. I think the ISP picture is a lot better in NZ. Also, net neutrality is a non-issue, as far as I know. We have it, no-one seems to be trying to take it away.
I’m always irritated that there are policies decried in the United States as “impossible” when there are demonstrable implementations of it elsewhere.
I can see it being argued that the United States’s way is better or something, but there are these hyperbolic attacks on universal health care, net neutrality, workers’ rights, secure elections, etc that imply that they are simply impossible to implement when there are literally dozens of counterexamples…
At the risk of getting too far off topic.
One of the members of the board at AT&T was the CEO of an insurance company; someone else sits on the boards of both Comcast/NBC and American Beverages. The head of the FCC was high up at Verizon.
These are some obvious, verifiable connections based in personal interest. Not implying that it’s wrong or that any of those individuals are doing anything wrong; you’ve just gotta take these ‘hyperbolic attacks’ with a grain of salt.
Oh yeah it’s infuriating. It helps to hit them with examples. Tell them the media doesn’t talk about them since they’re all pushing something; we all know that broad statement is true. Then, briefly tell them the problems we’re trying to solve and the goals we’re balancing. Make sure they’re their problems and goals. Then, mention the solution that worked elsewhere which might work here. If it might not fit everyone, point out that we can deploy it in such a way that its specifics are tailored to each group. Even if it can’t work totally, maybe point out that it has better cost-benefit than the current situation. Emphasize that it gets us closer to the goal until someone can figure out how to close the remaining gap. Add that it might even take totally different solutions to address other issues, like solving big-city vs rural Internet. If it worked and has better cost-benefit, then we should totally vote for it to do better than we’re doing. Depending on the audience, you can add that we can’t have (country here) doing better than us since “This is America!” to foster some competitive, patriotic spirit.
That’s what I’ve been doing as part of my research talking to people and bouncing messages off them. I’m not any good at mass marketing, outreach or anything. I’ve just found that method works really well. You can even be honest since the other side is more full of shit than us on a lot of these issues. I mean, them saying it can’t exist vs working implementations should be an advantage for us. Should. ;)
Beautifully said.
My family’s been in this country since the Mayflower. I love it dearly.
Loving something means making it better and fixing its flaws, not ignoring them.
Thanks, and yes. I did think about leaving for a place maybe more in line with my views. That last thing you said is why I’m still here. If we fix it, America won’t be “great again”: it would be fucking awesome. If not for us, then for the young people we want to be able to experience that.
Native Texan/Austinite here. Texas is the South, Southwest, or just Texas. All the rest of y’all are just Yankees. ;)
But if their ISP starts doing anything shady, they’ll surely get some backlash; even if they can’t switch, they can complain.
They’ve been complaining for decades. Nothing happens most of the time. The ISPs have many lobbyists and lawyers to insulate them from that. The big ones are all doing the same abusive practices, too. So, you can’t switch to get away from it.
Busting up AT&T’s monopoly got results: lower costs, better service, better speeds, etc. Net neutrality got more results. I support more regulation of these companies and/or socialized investment to replace them, like the gigabit for $350/mo in Chattanooga, TN. It’s 10 Gbps now, I think, but I don’t know at what price.
Actually, I go further due to their constant abuses and bribing of politicians: I’m for having a court seize their assets, converting them to nonprofits, and putting new management in charge, if at all possible. It would send a message to other companies that think they can do damage to consumers and mislead regulators with immunity from consequences.
The problem is that corporate fines are generally a small percentage of profits.
https://www.theguardian.com/world/2011/apr/03/us-bank-mexico-drug-gangs
https://www.huffingtonpost.com/dana-radcliffe/should-companies-obey-the-law_b_1650037.html
What incentive does the ISP have to change? Unless you can complain to some higher authority (FCC, perhaps) then there is no reason for the ISP to make any changes even with backlash. I’d be more incentivized to complain if there was at least some competition.
Net neutrality is being pushed by the media because it “fights discrimination”, and they blame the “fascist, nazi right” for its repeal
Nobody says this. It’s being pushed because it prevents large corporations from locking out smaller players. The Internet is a great economic equalizer: I can start a business and put a website up and I’m just as visible and accessible as Microsoft.
We don’t want Microsoft to be able to pay AT&T to slow traffic to my website but not theirs. It breaks the free market by allowing collusion that can’t be easily overcome. It’s like the telephone network; I can’t go run wires to everyone’s house, but I want my customers to be able to call me. I don’t want my competitors to pay AT&T to make it harder to call me than to call them.
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
That assumes people have a choice. They very often don’t. Internet service has a massively high barrier to entry, similar to a public utility. Most markets in the United States have at most two providers (both major corporations opposed to net neutrality). Very, very rarely is there a third.
More importantly, there are only five tier-1 networks in the United States. Five. It doesn’t matter how many local ISPs there are; without Net Neutrality, five corporations effectively control what can and can’t be transmitted. If those five decide something should be slowed down or forbidden, there is nothing I can do. Changing to a different provider won’t do a thing.
(And of those five, all of them donate significantly more to one major political party than the other, and the former Associate General Counsel of one of them is currently chairman of the FCC…)
I can’t speak to why open-source advocates are also pushing for net neutrality, because (in my opinion) the government shouldn’t be involved in how much internet costs.
Net neutrality says nothing about how much it costs. It just says you can’t charge different amounts based on content. It would be like television stations charging more money to Republican candidates to run ads than to Democratic candidates. They’re free to charge whatever they want; they’re not free to charge different people different amounts based on the content of the message.
Democracy requires communication. It does no good to say “freedom!” if the major corporations can effectively silence whoever they want. “At least it’s not the government” is not a good defense of stifling public debate.
And there’s a difference between a newspaper and a television/radio station/internet service. I can buy a printing press and make a newspaper and refuse to carry whatever I want. There are no practical limits to the number of printing presses in the country.
There is a limited electromagnetic spectrum. Not just anyone can broadcast a TV signal. There is a limit to how many cables can be run on utility poles or buried underground. Therefore, discourse carried over those media is required to operate more in the public trust than others. As they become more essential to a healthy democracy, that only becomes more important. It’s silly to say “you still have freedom of speech” if you’re blocked from television, radio, the Internet, and so on. Those are the public forums of our day. That a corporation is doing the blocking doesn’t make it any better than if the government were to do it.
Side note, the reason you don’t see the protests against the “gatekeepers” is that most of the mainstream media isn’t accurately covering the reaction of the people to the censorship.
There’s a big difference between Twitter not wanting to carry Alex Jones and net neutrality. Jones is still free to go start up a website that carries his message; without Net Neutrality, not only could he be blocked from Twitter, but the network itself could make his website inaccessible.
There is no alternative when the network itself blocks you. You can’t build your own Internet. Without mandating equal treatment of traffic, we hand the Internet over solely to the big players. Preventing monopolistic and oligarchic control of public discourse is a valid use of government power. It’s not censorship; it’s the exact opposite.
That assumes people have a choice. They very often don’t.
This was also brought up by @hwayne, @caleb and @friendlysock, and is not something that occurred to me. I appreciate all who are mentioning this.
More importantly, there are only five tier-1 networks in the United States.
Wow, I did not know that. I can see that as a legitimate reason to want net neutrality. But, I also think that they’ll piss off a lot of people if they can stream CNN but not InfoWars.
It just says you can’t charge different amounts based on content.
I understood it to also mean that you couldn’t charge customers differently because of who they are. Also, wouldn’t things like Tor mitigate that?
“At least it’s not the government” is not a good defense of stifling public debate.
I completely agree. But in the US we have a free market (at least, we used to) and that means that the government is supposed to stay out of it as much as possible.
Preventing monopolistic and oligarchic control of public discourse is a valid use of government power.
I also agree. But these corporations (the tier-1 ISPs) haven’t done anything noticeable to me to limit my enjoyment of conservative content, and I’m pretty sure that they would’ve by now if they wanted to.
The reason I oppose net neutrality is more because I don’t think that the government should control it than because I think AT&T and others should.
not only could he be blocked from Twitter, but the network itself could make his website inaccessible.
But they haven’t.
edit: how -> who
Even though I’m favoring net neutrality, I appreciate you braving the conservative position on this here on Lobsters. I did listen to a lot of them. What I found is most had reasonable arguments but had no idea what ISPs did, are doing, are themselves paying Tier 1’s, etc. Their media sources’ bias (all have bias) favoring ISPs for some reason didn’t tell them any of it. So, even if they’d have agreed with us (maybe, maybe not), they’d have never reached those conclusions since they were missing crucial information to reflect on when choosing to regulate or not regulate.
An example is one telling me companies like Netflix should pay more to Comcast per GB or whatever since they used more. The guy didn’t know Comcast refuses to do that when paying Tier 1’s, negotiating transit agreements instead that work entirely differently. He didn’t know AT&T refused to give telephones or data lines to rural areas even if they were willing to pay what others did. He didn’t know they could roll out gigabit today for the same prices but intentionally kept his service slow to increase profit, knowing he couldn’t switch for speed. He wasn’t aware of most of the abuses they were doing. He still stayed with his position, since that guy in particular went heavily with his favorite media folks. However, he didn’t like any of that stuff, which his outlets never even told him about. Even if he disagrees, I think he should disagree based on an informed decision if possible, since there are plenty of smart conservatives out there who might even favor net neutrality if there’s no better alternative. I gave him a chance to do that.
So, I’m going to give you this comment by @lorddimwit quickly showing how they ignored the demand to maximize profit, this comment by @dotmacro showing some abuses they commit with their market control, and this article that gives a nice history of what the free market did with each communications medium and the damage that resulted. Also note that the Internet itself was an open, free-if-you-have-a-wire system that competed with the proprietary, charge-per-use, lock-them-in-forever-if-possible systems the private sector was offering. It smashed them so hard you might have never even heard of them, or forgotten a lot about them, depending on your age. It also democratized more goods than about anything other than maybe transportation. We should probably stick with the principles that made that happen to keep innovation rolling. Net neutrality was one of them: practiced informally at first, then put into law as the private sector got too much power and was abusing it. We should keep doing what worked instead of the practices ISPs want that didn’t work but will increase their profits at our expense for nothing in return. That is what they want: give us less, or as little improvement in every way over time, while charging us more. It’s what they’re already doing.
I read the comments, and I read most of the freecodecamp article.
I like the ideal of the internet being a public utility, but I don’t really want the government to have that much control.
I think the real problem I have with government control of the internet is that I don’t want the US to end up like China, with large swaths of the internet completely blocked.
I don’t really know how to solve our current problems. But, like @jfb said elsewhere in this thread, I don’t think that net neutrality is the best possible solution.
Also note that the Internet itself was an open, free-if-you-have-a-wire system that competed with the proprietary, charge-per-use, lock-them-in-forever-if-possible systems the private sector was offering. It smashed them so hard you might have even never heard of them or forgotten a lot about them depending on your age.
I might recognize a name, but I probably wasn’t even around yet.
So, I’m going to give you…
Thanks for the info, I’ll read it and possibly form a new opinion.
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
What obvious reasons? Because customers will switch providers if they don’t treat all traffic equally? That would require (a) users are able to tell if a provider prioritizes certain traffic, and (b) that there is a viable alternative to switch to. I have no confidence in either.
I don’t personally care if they prioritize certain websites, but I sure as hell care if they block something.
As far as I’m concerned, they can slow down Youtube by 10% for conservative channels and I wouldn’t give a damn even though I watch and enjoy some. What really bothers me is when they “erase” somebody or block people from getting to them.
well you did say they have an incentive to provide “equal service” so i guess you meant something else. net neutrality supporters like me aren’t satisfied with “nobody gets blocked,” because throttling certain addresses gives big corporations more tools to control media consumption, and throttling has similar effects to blocking in the long term. i’m quite surprised that you’d be fine with your ISP slowing down content you like by 10%… that would adversely affect their popularity compared to the competitors that your ISP deems acceptable, and certain channels would go from struggling to broke and be forced to close down.
Well, I have pretty fast internet, so 10% wouldn’t be terrible for me. However, I can see how some people would take issue with such a slowdown.
I was using a bit of an extreme example to illustrate my point. What I was trying to say was that they can’t really stop people from watching the content they want to watch.
I recall, but didn’t review, a study saying half of web site users wanted the page loaded in 2 seconds. Specific numbers aside, I’ve been reading that kind of claim from many people for a long time: a site taking too long to load, being sluggish, etc. makes them miss lots of revenue. Many will even close down. So, the provider of your favorite content being throttled for even two seconds might kill half their sales, since Internet users expect everything to work instantly. Can they operate with a 50% cut in revenue? Or maybe they’re bootstrapping up a business with a few hundred or a few grand but can’t afford to pay for no artificial delays. Can they even become the content provider you liked if they have to pay hundreds or thousands extra just for the ISP’s extra profit? I say extra profit since ISPs already paid for networks capable of carrying it out of your monthly fee.
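To make the earlier “10% slowdown” point concrete, here’s a back-of-envelope sketch. All the figures are hypothetical (a 3 MB page on a 50 Mbps link, pure transfer time, no latency or rendering): it just shows that cutting bandwidth by 10% stretches transfer time by about 11%, which compounds with whatever load-time budget users actually tolerate.

```python
# Back-of-envelope: transfer time for a fixed-size page at a given bandwidth.
# All numbers are hypothetical illustrations, not measurements.

def load_time_seconds(page_megabytes: float, bandwidth_mbps: float) -> float:
    """Pure transfer time: page size (MB) converted to megabits / bandwidth (Mbps)."""
    return page_megabytes * 8 / bandwidth_mbps

PAGE_MB = 3.0     # assumed page weight
BASE_MBPS = 50.0  # assumed unthrottled bandwidth

baseline = load_time_seconds(PAGE_MB, BASE_MBPS)
throttled = load_time_seconds(PAGE_MB, BASE_MBPS * 0.9)  # 10% bandwidth cut

print(f"baseline:  {baseline:.2f}s")
print(f"throttled: {throttled:.2f}s (+{(throttled / baseline - 1) * 100:.0f}%)")
```

Note the asymmetry: a 10% bandwidth reduction means dividing by 0.9, so transfer time grows by 1/0.9 ≈ 11%, and deeper throttles grow worse than linearly.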
yeah, the shaping of public media consumption would happen in cases where people don’t know what they want to watch or don’t find out about something that they would want to watch
anti-democratic institutions already shape media consumption and discourse to a large extent, but giving them more tools will hurt the situation. maybe it won’t affect you or me directly, but sadly we live in a society so it will come around to us in the form of changes in the world
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
Most customers have exceedingly limited options in their area, and they’re not going to switch houses because of their ISP. Especially in apartment complexes, you see cases where, say, Comcast has a lock on an entire population and there really isn’t a reasonable alternative.
In a truly free market, maybe I’d agree with you, but the regulatory environment and natural monopolistic characteristics of telecomm just don’t support the case.
Most customers have exceedingly limited options in their area, and they’re not going to switch houses because of their ISP.
That’s a witty way of putting it.
But yeah, @lorddimwit mentioned the small number of tier-1 ISPs. I didn’t realize there were so few, but I still think that net neutrality is overreaching, even if it’s less so than I originally thought.
Personally, I feel that net neutrality, such as it is, would prevent certain problems that could be better addressed in other, more fundamental ways. For instance, why does the US allow the companies that own the copper to also own the ISPs?
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
Awkward political jabs aside, most of your statements imply that you believe customers are free to choose who they get their internet from, which is just plain incorrect. Whatever arguments you want to make against net neutrality, there is one indisputable fact that you cannot just ignore or paper over:
ISPs do not operate in a free market.
In the vast majority of the US, cable and telephone companies are granted local monopolies in the areas they operate. That is why they must be regulated. As the Mozilla blog said, they have both the incentive and means to abuse their customers and they’ve already been caught doing it on multiple occasions.
most of your statements imply that you believe customers are free to choose who they get their internet from, which is just plain incorrect
I think you’re a bit late to the party, I’ve conceded that fact already.
All of that is gibberish. Net Neutrality is being pushed because it creates a more competitive marketplace. None of it has anything to do with professional liar Alex Jones.
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
That’s not how markets work. And it’s not how the technology or permit process for ISPs works. There is very little competition among ISPs in the US market.
Hey, here’s a great example from HN of the crap they pull without net neutrality. They advertised “unlimited,” throttled it secretly, admitted it, and forced them to pay extra to get actual unlimited.
@lorddimwit add this to your collection. Throttling and fake unlimited have been going on a long time, but they could’ve gotten people killed doing it to first responders. I’d have not seen that coming, just for PR reasons or avoiding local/government regulation if nothing else.
I can’t speak to why open-source advocates are also pushing for net neutrality, because (in my opinion) the government shouldn’t be involved in how much internet costs.
It’s not about how much internet costs, it’s about protecting freedom of access to information, and blocking things like zero-rated traffic that encourage monopolies and discourage competition. If I pay for a certain amount of traffic, ISPs shouldn’t be allowed to turn to Google and say “want me to prioritize YouTube traffic over Netflix traffic? Pay me!”
Net neutrality is being pushed by the media because it “fights discrimination”, and they blame the “fascist, nazi right” for it’s repeal (and they’re correct, except for the “fascist, nazi” bit).
Where on earth did you hear that? I sure hope you’re not making it up—you’ll find this site doesn’t take too kindly to that.
I might’ve been conflating two different political issues, but I have heard “fascist” and “nazi” used to describe the entire right wing.
A quick google search for “net neutrality fascism” turned this up: https://motherboard.vice.com/en_us/article/kbye4z/heres-why-net-neutrality-is-essential-in-trumps-america
“With the rise of Trump and other neo-fascist regimes around the world, net neutrality will be the cornerstone that activists use to strengthen social movements and build organized resistance,” Wong told Motherboard in a phone interview. “Knowledge is power.”
You assume that net neutrality is a left-wing issue, which it’s not. It actually has bipartisan support. The politicians who oppose it have very little in common, aside from receiving a large sum of donations from telecom corporations.
As far as terms like “fascist” or “Nazi” are concerned—I think they have been introduced into this debate solely to ratchet up the passions. It’s not surprising that adding these terms to a search yields results that conflate the issues.
I’ll add, on your first point, that conservatives who are pro-market are almost always pro-competition. They expect the market to involve competition driving up what’s offered, driving its cost down, and so on. Both the broadband mandate and net neutrality achieved that, with an explosion of businesses and FOSS offering about anything one can think of.
The situation still involves 1-3 companies available for most consumers that, like a cartel, work together to not compete on lowering prices, increasing service, and so on. Net neutrality reduced some predatory behavior the cartel market was doing. They still made about $25 billion in profit between just a few companies due to anti-competitive behavior. Repealing net neutrality for anti-competitive market will have no positives for consumer but will benefit roughly 3 or so companies by letting them charge more for same or less service.
Bad for conservative’s goals of market competition and benefiting conservative voters.
One part of it is that we already have net neutrality, and it’s easier to try to hang on to a regulation than to create a new one.
This is interesting, but another 20 years of software development continues to prove him wrong.
The current dominant paradigm is flat, single-ordered lists, and search (perhaps augmented with tags like our dear lobste.rs here).
This is even more of the bad stuff he’s railing against at the start of the article, but it’s the stuff that works, and there are innumerable other approaches dead or dying.
I suspect that for UIs, less freedom is simpler (one button, one list, one query, one purpose, etc.), and not the other way around.
For developers, I think he was right, and it’s also what we’ve got today. It’s clearly preferable for developers to have a simple model to work against (Like URIs + JSON).
apt-get install firefox (which unpacks to a resource identifier and a standardized, machine-readable package file) is quite probably as good as it gets. It’s a directed graph instead of an undirected graph like his zipper system, but undirected graphs require an unrealistic (and in my opinion probably harmful) amount of federation between producers of APIs and their consumers.
When the pitch is “good computing is possible”, “bad computing has dominated” isn’t actually a great counterargument – particularly when the history of so much of it comes down to dumb luck, path dependence, tradeoffs between technical ability & marketing skills, and increasingly fast turnover and the dominance of increasingly inexperienced devs in the industry.
If you’re trying to suggest that the way things shook out is actually ideal for users – I don’t know how to even start arguing against that. If you’re suggesting that it’s inevitable, then I can’t share that kind of cynicism because it would kill me.
A better world is possible but nobody ever said it would be easy.
Your comment is such a good expression of how I feel about the status quo! I was just having a similar discussion in another thread about source code, where I said “text is hugely limiting for working with source code”, and somebody objected with “but look at this grep-like tool, it’s totally enough for me”. I can understand when people raise practical objections to better tools (hard to get traction, hard to interface with existing systems etc.). What’s dispiriting is the refusal to even admit that better tools are possible.
The mistake is believing that we’re anywhere close to status quo in software development. The tools and techniques used today are completely different from the tools we used 5 and 10 years ago, and are almost unrecognizable next to the tools and techniques used 40 and 50 years ago.
Some stuff sticks around, (keyboards are fast!) but other things change and there is loads of innovative stuff going on all the time. With reference to visual programming: I recently spent a weekend playing with the Unreal 4 SDK’s block programming language (they call it blueprints) it has fairly seamless C++ integration and I was surprised with how nice it was for certain operations… You might also be interested in Scratch.
Often, these systems are out there, already existing. Sometimes they’re not in the mainstream because of institutional momentum, but more often they’re not in the mainstream because they’re not good (the implementations or the ideas themselves).
The proof of the pudding is in the eating.
I don’t think I can agree with this. I’m pretty sure the “write code-compile-run” approach to writing code that is still in incredibly widespread use is over 40 years old. Smalltalk was developed in the 70s. Emacs was developed in the 70s. Turbo Pascal, which had an integrated compiler and editor, was released in mid-80s (more than 30 years ago). CVS was developed in mid-80s (more than 30 years ago). Borland Delphi and Microsoft Visual Studio, which were pretty much full-fledged IDEs, were released in the 90s (20 years ago). I could go on.
What do we have now that’s qualitatively different from 20 years ago?
Yup. Some very shallow things have changed but the big ideas in computing really all date to the 70s (and even the ‘radical’ ideas from the 70s still seem radical). I blame the churn: half of the industry has less than 10 years of experience, and degree programs don’t emphasize an in-depth understanding of the variety of ideas (focusing instead on the ‘royal road’ between Turing’s UTM paper and Java, while avoiding important but complicated side-quests into domains like computability).
Somebody graduating with a CS degree today can be forgiven for thinking that the web is hypertext, because they didn’t really receive an education about it. Likewise, they can be forgiven for thinking (for example) that inheritance is a great way to do code reuse in large java codebases – because they were taught this, despite the fact that everybody knows it isn’t true. And, because more than half their coworkers got fundamentally the same curriculum, they can stay blissfully unaware of all the possible (and actually existing) alternatives – and think that what they work with is anywhere from “all there is” to “the best possible system”.
Thanks!
There are more details in that, but I’m not sure whether or not they’ll be any more accessible than my explanation here.
Most languages aren’t AOT compiled, there’s usually a JIT in place (if even that, Ruby and python are run-time languages through and through). These languages did not exist 20 years ago, though their ancestors did (and died, and had some of the good bits resurrected, I use Clojure regularly, which is both modern and a throwback).
Automated testing is very much the norm today, it was a fringe idea 10 years ago and something that you were only crazy enough to do if you were building rockets or missiles or something.
Packages and entire machines are regularly downloaded from the internet and executed in production. I had someone tell me that a docker image was the best way to distribute and run a desktop Linux application.
Smartphones, and the old-as-new challenges of working around vendors locking them down.
The year of the Linux desktop surely came sometime in the last or next 20 years.
Near dominance of Linux in the cloud.
Cloud computing and the tooling around it.
The browser wars ended, though they started to heat up before the 20 year cutoff.
The last days of Moore’s law and the 10 years it took most of the industry to realize the party was over.
CUDA, related, the almost unbelievable advances in computer graphics. (Which we aren’t seeing in web/UI design, again, probably not for lack of trying, but maybe the right design hasn’t been struck)
Success with Neural Networks on some problem sets and their fledgling integration into other parts of the stack. Wondering when or if I’ll see a NN based linter I can drop into Emacs.
I could go on too. QWERTY keyboards have been around 150 years because they’re good enough and the alternatives aren’t better than having one standard. I don’t think that the fact that my computer has a QWERTY keyboard on it is an aberration or failure, and not for lack of experimentation on my own part and on the parts of others. Now if only we could do something about that caps lock key… Oh wait, I remapped it.
It’s easy to pick out the greatest hits in computer science 20, 30, and 40 years ago. There’s a ton of survivorship bias, and you don’t point to all of those COBOL-alikes and stack-based languages which have all but vanished from the industry. If it seems like there’s no progress today, it’s only because it’s more difficult to pick the winners without the benefit of hindsight. There might be some innovation still buried that makes two-way linking better than one-way linking, but I don’t know what it is and my opinion is that it doesn’t exist.
Fair enough. Let me clarify my comment, which was narrowly focused on developer tools for no good reason.
There is no question that there have been massive advances in hardware, but I think the software is a lot more hit and miss.
In terms of advances on the software front, I would point to distributed storage in addition to cloud computing and machine learning. For end users, navigation and maps are finally really good too. There are probably hundreds of other specific examples like incredible technology for animated films.
I think my complaints are to do with the fact that most of the effort in the last 20 years seems to have been directed to reimplementing mainframes on top of the web. In many ways, there is churn without innovation. I do not see much change in software development either, as I mentioned in the previous comment (I don’t think automated testing counts), and it’s what I spend most of my time on so there’s an availability bias to my complaints. There is also very little progress in tools for information management and, for lack of a better word, “end user computing” (again, spreadsheets are very old news).
I think my perception is additionally coloured by the fact that we ended up with both smartphones and the web as channels for addictive consumption and advertising industry surveillance. It often feels like one step forward and ten back.
I hope this comment provides a more balanced perspective.
In the last 20 years, the ideas in that paper have been attempted a lot, by a lot of people.
Opensource and the internet have given a ton of ideas a fair shake, including these ideas. Stuff is getting better (not worse). The two way links thing is crummy, and you don’t have to take my word for it, you can go engage with any of the dozens of systems implementing it (including several by the author of that paper) and form your own opinions.
In the last 20 years, the ideas in that paper have been attempted a lot, by a lot of people.
Dozens of people, and I’ve met or worked with approximately half of them. Post-web, the hypertext community is tiny. I can describe at length the problems preventing these implementations from becoming commercially successful, but none of them are that the underlying ideas are difficult or impractical.
The two way links thing is crummy, and you don’t have to take my word for it, you can go engage with any of the dozens of systems implementing it (including several by the author of that paper) and form your own opinions.
I wrote some of those systems, while working under the author of that paper. That’s how I formed my opinions.
That’s awesome. Maybe you can change my mind!
Directed graphs are more general than undirected graphs (you can implement two-way undirected graphs out of one-way arrows; you can’t go the other way around). Almost every level of the stack, from the tippy top of the application layer to the deepest depths of CPU caching and branch prediction, is implemented in terms of one-way arrows and abstractions, and I find it difficult to believe that this is a mistake.
EDIT: I realized that ‘general’ in this case has a different meaning for a software developer than it does in mathematics, and here I was using the software developer’s perspective of “can be readily implemented using”. Mathematically, something is more general when it can be described with fewer terms or axioms. Undirected graphs are more maths-general because you have to add arrowheads to an undirected graph to make a directed graph, but for the software developer it feels more obvious that you could get a “bidirected” graph by adding a backwards arrow to each forwards arrow. The implementation of a directed graph from an undirected graph is difficult for a software developer because you have to figure out which way each arrow is supposed to go.
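To make that software-developer reading concrete, here is a minimal Python sketch (the function and variable names are my own invention) of producing a “bidirected” graph by adding a backwards arrow for each forwards arrow:

```python
from collections import defaultdict

def bidirect(edges):
    """Turn a list of one-way arrows (src, dst) into an adjacency map
    in which every forward arrow also has a matching backward arrow."""
    adj = defaultdict(set)
    for src, dst in edges:
        adj[src].add(dst)  # the original one-way arrow
        adj[dst].add(src)  # the added backwards arrow
    return dict(adj)

# Two one-way links become a graph traversable in both directions.
links = [("a", "b"), ("b", "c")]
assert bidirect(links) == {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
```

Going the other way, from an undirected graph to a single consistent arrow direction, is exactly the part a program can’t decide for you.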
Bidirectional links are not undirected edges. The difference is not that direction is unknown – it’s that the edge is visible whichever side of the node you’re on.
(This is only hard on the web because HTML decided against linkbases in favor of embedded representations that must be mined by a third party in order to reverse them – which makes jump links a little bit easier to initially implement but screws over other forms of linking. The issue, essentially, is that with a naive host-centric way of performing jump links, no portion of the graph is actually known without mining.
Linkbases are literally the connection graph, and links are constructed from linkbases. In the XanaSpace/XanaduSpace model, you’ve got a bunch of arbitrary linkbases representing arbitrary subgraphs that are ‘resident’ – created by whoever and distributed however – and when a node intersects with one of the resident links, the connection is displayed and made navigable.
Also in this model a link might actually be a node in itself where it has multiple points on either side, or it might have zero end points on one side, but that’s a generalization & not necessarily interesting since it’s equivalent to all combinations of either end’s endsets.)
TL;DR: bidirectional links are not undirected links – merely links understood above the level of the contents of a single node.
Ok then, and how is it that you construct a graph out of a set of subgraphs? Is that construction also two way links thereby assuring that every participant constructs the same graph?
Participants are not guaranteed to construct the same graph, and the graphs aren’t guaranteed to even be fully connected. (The only difference between bidirectional links & jump links is that you can see both points.)
Instead, you get whatever collection of connected subgraphs are navigable from the linkbases you have resident (which are just lists of directed edges).
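As a rough illustration of that model (the names here are my own, not from any Xanadu codebase): a linkbase is just a list of directed edges, and a link becomes visible whenever a resident linkbase contains an edge touching the node you’re looking at, whichever side of the edge that node is on.

```python
def visible_links(resident_linkbases, node):
    """Collect every edge from the resident linkbases that touches
    `node`, regardless of which end the node is on.  This is what
    makes the link effectively bidirectional: no central registry,
    just whatever subgraphs happen to be resident."""
    hits = []
    for linkbase in resident_linkbases:
        for src, dst in linkbase:
            if node in (src, dst):
                hits.append((src, dst))
    return hits

bases = [[("doc1", "doc2")], [("doc3", "doc1"), ("doc2", "doc4")]]
assert visible_links(bases, "doc1") == [("doc1", "doc2"), ("doc3", "doc1")]
```

Different participants with different resident linkbases will, of course, see different subsets of the graph.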
This particular kind of graph-theory analysis isn’t terribly meaningful for either the web or translit, since it’s the technical detail of how much work you have to do to get a link graph that differs, not the kind of graph itself. (Graph theory is useful for talking about ZigZag, but ZigZag is basically unrelated to translit / hypertext and is more like an everted tabular database.)
I guess I’m trying to understand how this is better or different from what already exists. If it’s a curated list of one way links that you can search and discuss freely with others, then guess what, lobste.rs is your dream, the future is now, time to throw one back and celebrate.
I’m trying to understand how this is better or different from what already exists
Well, when the project started, none of what we have existed. This was the first attempt.
If it’s a curated list of one way links that you can search and discuss freely with others, then guess what, lobste.rs is your dream, the future is now,
‘Link’ doesn’t actually mean ‘URL’ in this sense. A link is an edge between two nodes – each of these nodes being a collection of positions within a document. So, a linkbase isn’t anything like a collection of URLs, but it’s a lot like a collection of pairs of URLs with an array of byte offsets & lengths affixed to each URL. (In fact, this is exactly what it is in the XanaSpace ODL model.) A URL by itself is only capable of creating a jump link, not a bidirectional link.
It’s not a matter of commenting on a URL, but of creating sharable lists of connections between sections of already-existing content. That’s the point of linking: that you can indicate a connection between two existing things without coordinating with any authors or owners.
URL-sharing sites like lobste.rs provide one quarter of that function: by coordinating with one site, you can share a URL to another site, but you don’t have control over either side beyond the level of an entire document (or, if you’re very lucky and the author put useful anchors, you can point to the beginning of a section on only the target side of the link).
To take an example of a system that steps in the middle and does take greater control over both ends: Google’s AMP. I feel like it is one of the worst things anyone has ever tried to do to the internet in its entire existence.
Control oriented systems like AMP and to a lesser degree sharing sites like Imgur, Pinterest, Facebook, and soon (probably) Medium, represent existential threats to forums like lobste.rs.
So, in short, you’re really not selling me on why this two way links thing is better.
We actually don’t have centralization like that in the system. (We sort of did in XU88 and XU92 but that stopped in the mid-80s.)
It’s not about controlling the ends. The edges are not part of the ends, and therefore the edges can be distributed and handled without permission from the ends.
Links are not part of a document. Links are an association between sections of documents. Therefore, it doesn’t make any sense to embed them in a document (and then require a big organization like Google to extract them and sell them back to you). Instead, people create connections between existing things & share them.
I’m having a hard time understanding what your understanding of bidirectional linking is, so let me get down to brass tacks & implementation details:
A link is a pair of spanpointers. A spanpointer is a document address, a byte offset from the beginning of the document, and a span length. Anyone can make one of these between any two things so long as you have the addresses. This doesn’t require control of either endpoint. It doesn’t require any third party to control anything either. I can write a link on a piece of paper and give it to you, and you can make the same link on your own computer, without any bits being transferred between our machines.
We do not host the links. We do not host the endpoints. We don’t host anything. We let you see connections between documents.
Seeing connections between documents manifests in two ways:
It’s not about control, or centralization. Documents aren’t aware of their links.
The only requirement for bidirectional linking is that an address points to the same document forever. (This is a solved problem: ignore hosts & use content addressing, like IPFS.)
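A rough Python sketch of the data model described above; the field and class names are my own, and the content-addressed document address follows the IPFS-style suggestion (a hash of the bytes, so the address never moves):

```python
import hashlib
from dataclasses import dataclass

def address(document: bytes) -> str:
    """Content address: the document's hash, so the address points to
    the same bytes forever, independent of any host."""
    return hashlib.sha256(document).hexdigest()

@dataclass(frozen=True)
class SpanPointer:
    doc: str     # content address of the document
    offset: int  # byte offset from the start of the document
    length: int  # span length in bytes

@dataclass(frozen=True)
class Link:
    a: SpanPointer  # one end of the bidirectional link
    b: SpanPointer  # the other end

doc = b"Ted Nelson coined the word hypertext."
link = Link(SpanPointer(address(doc), 27, 9),
            SpanPointer(address(b"another document"), 0, 7))
# Resolving one end back to its span of bytes requires nothing from
# the other end, and no third party:
assert doc[link.a.offset:link.a.offset + link.a.length] == b"hypertext"
```

Anyone holding this pair of spanpointers can reconstruct the same link on their own machine, which is the “write it on a piece of paper” property described above.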
Wow, thank you for taking the time to walk me through these ideas. I think I’m starting to understand a little better.
I still think we’ve got this, or could implement it on the existing web stack. I think any user could have implemented zig-zag links in a hierarchical Windows-style file structure since ’98 if not ’95. I think it’s informative that most users do not construct those links; who knows how many of us have tried it in the name of getting organized.
I really believe that any interface more complex than a single item is too complex, and if you absolutely must, you can usually present a list without distracting from a UI too badly. I think a minimalist and relatively focused UI is what allows this website to thrive and us to have this discussion.
I’m going to be thinking over this a lot more. A system like git can store the differences between documents instead of the documents themselves, so clearly there are places for other ways of relating documents to each other than what we’ve got, and they work!
I should clarify: I’ve been describing bidirectional links in translit (aka hypertext or transliterature). ZigZag is actually a totally different (incompatible) system. The only similarity is that they’re both interactive methods of looking at associations between data invented by Ted Nelson.
If we want to compare to existing stacks, transliterature is a kind of whole-document authoring and annotation thing like Word, while ZigZag is a personal database like Access – though in both cases the assumptions have been turned inside-out.
You’re right that these things, once they’re understood, aren’t very difficult to implement. (I implemented open source versions of core data structures after leaving the project, specifically as demonstrations of this.)
I really believe that any interface more complex than a single item is too complex, and if you absolutely must, you can usually present a list without distracting from a UI too badly. I think a minimalist and relatively focused UI is what allows this website to thrive and us to have this discussion.
Depending on how you chunk, a site like this has a whole host of items. I see a lot of characters, for instance. I see multiple buttons, and multiple jump links. We’ve sort of gotten used to a particular way of working with the web, so its inherent complexity is forgotten.
thank you for taking the time to walk me through these ideas. I think I’m starting to understand a little better.
No problem! I feel like it’s my duty to explain Xanadu ideas because they’re explained so poorly elsewhere. I spent years trying to fully understand them from public documentation before I joined the project and got direct feedback, and I want to make it easier for other people to learn it than it was for me.
I wouldn’t say so. What you have is more and more people using the same tools, so you will never get a “perfect” solution. Generally, nature doesn’t provide a perfect system, just “good enough to survive”. My partner and I are expecting a child at the moment, and time and again the doctor has told us: “This is not perfect, but nature doesn’t care about that. It just cares about good enough to get the job done.”
Since I heard this statement, I see it everywhere. Also with computers. Our code and ways of working run a huge chunk of important systems, and somehow they work. Maybe they work because they are not perfect.
I agree that things will change (“for the better”), but it will come in phases. Some bigger catastrophe will happen, and afterwards systems and tools will change and adapt. As long as everything sort of works, well, there is no big reason to change it (for the majority of people), since they can get the job done and then enjoy the sun, beaches and human interactions.
Nobody’s complaining that we don’t have perfection here. We’re complaining about the remarkable absence of not-awful in projects by people who should know better.
I think the best way to describe what we have is “Design by Pop Culture”. Our socio-economic system is a low-pass filter, distilling ideas until you can package them and sell them. Is it the best we’ve got given those economic constraints? Maybe…
But that’s like saying “Look, this is the best way to produce cotton, it’s the best we got” during the slave era…(slavery being a different socio-economic system)
It’s more like, deflect and avoid an automatic “yes” to superiors.
Obviously you need to weigh input from organizational heads differently; they have a different context than you do (otherwise what’s the point of ’em?). They tend to have a broader context; you tend to have a more narrow and detailed context.
The superiors’ version of this advice is, don’t automatically override the decision making power of your subordinates, they probably have details that you don’t have. This article is full of good advice.
If you believe that this, no doubt incredibly expensive, no doubt top secret, one of a kind machine was used to figure out where trains stalled on the tracks, I have a bridge to sell you.
The thing has three (four?) wheels of ticker tape on it; that’s a surprising number of train-stallings-per-minute, if you get my meaning. I wonder what the machine’s original purpose was! The armored car I understand the need for secrecy around, but why the computer?
This is a great rant and I’m first in line to hate on the web stack, but I think it’s tilting at the wrong windmill.
It’s very concerned about code that runs on people’s computers when the thing that throws elections is the text that runs on people’s brains.
I didn’t expect the great popularity of Ruby.
There’s probably some missing evidence here. He’s not looking at all startups, he’s looking at successful startups.
It’s unlikely that startups never choose .NET, considering its large developer base. It’s more likely that startups who choose .NET fail. I think we could all speculate on the reason.
“Startups with just PHP are probably e-commerce websites or non-software at all. “
It’s definitely not fair to dismiss PHP like this.
When a distribution starts messing with your dependencies, all your QA goes out the window
Guix addresses this by running the package’s tests as part of the build.
Developers love npm or NuGet because it’s so easy to consume – asking them to abandon those tools is a significant impediment to developer flow.
Guix is aware of this issue too and provides import tools to address it… but it’s still not enough for application development and deployment.
I don’t think it’s unreasonable (from an application developer’s perspective) to want to bundle an ever-increasing number of dependencies. I read an article yesterday advocating bundling an extra virtual machine as a build step, like that was a normal and sane thing to do.
Another important political aspect of Material Design (and some other UI/web styles that are popular now) is “minimalism”. Your UI should have few buttons. The user should have no choices. The user should be a consumer of content, not a producer. Having play and pause buttons is enough. The user should have few choices about how and what to consume; the recommender system (“algorithmic timeline”, “AI”) should tell them what to consume. This rhetoric is repeated over and over in web and mobile dev blogs.
Imagine a graphics editor or DAW with “material design”. It’s just about impossible. The style is suitable only for scroll-feed consumption and “personal information sharing” applications.
Also, it’s “mobile-first”, because Google controls mobile (80% market share or something like that). Some pages on Google itself (e.g. account settings) look on the desktop like I’m viewing them on a giant handset.
P.S. Compared with the “hipster” modernist things of ~2010, which were often nice and “warm”, Material Design looks really creepy to me even when considering only its visual appearance.
A potentially interesting challenge: What does a design language for maker-first applications look like?
Not sure if such design languages exist, but from what I’ve seen, I have a feeling that every “industry” has its own conventions and guidelines, and everything is very inconsistent.
I thought UI guidelines for desktop systems (as opposed to cellphone systems) would have lots of recommendations for such data-editing programs, but it seems not; they mostly describe how to place standard widgets in dialogs. MacOS guidelines are based on programs that are included with MacOS, which are mostly for regular consumers or “casual office” use. Windows and Gnome guidelines even try to combine desktop and mobile into one thing.
Most “editing” programs ignore these guidelines and have non-native look and feel (often the same look-and-feel on different OSes).
3D: complicated window splits, use of all 3 mouse buttons, also dark themes. Nonstandard widgets, again. The UIs have heritage from Silicon Graphics workstations and maybe the Amiga.
Try Lisp machines. 3D was a strong market for Symbolics.
I’d suggest–from time spent dealing with CAD, programming, and design tools–that the biggest thing is having common options right there, and not having overly spiffy UI. Ugly Java swing and MFC apps have shipped more content than pretty interfaces with notions of UX (notable exceptions tend to be music tools and DAW stuff, for reasons incomprehensible to me). A serious tool-user will learn their tooling and extend it if necessary if the tool is powerful enough.
(notable exceptions tend to be music tools and DAW stuff, for reasons incomprehensible to me)
Because artists demand an artsy-looking interface!
We had a great post about two months back on pie menus. After that, my mind goes to how the Android app Podcast Addict does it: everything is configurable. You can change everything from the buttons it shows to the tabs it has to what happens when you double-click your headset mic. All the good maker applications I’ve used give me as much customization as possible.
It’s identical to the material design guidelines but with a section on hotkeys, scripts, and macros.
Stuff like Bootstrap mentioned there, early Instagram, Github. Look-and-feels commonly associated with Silicon Valley startups (even today).
These things usually have the same intentions and sins mentioned in this article, but at least look not as cold-dead as Material Design.
Isn’t this like… today? My understanding was: web apps got the material design feel, while landing pages and blogs got bootstrappy.
I may be totally misinterpreting what went on though
To create five different game levels of difficulty, we used data from a previous experiment [33] and used a regression model to manipulate factors predicted to vary in difficulty from very easy to very hard. We varied several design factors, including Error Tolerance (target size), Time Limit (amount of time players have to make their selection) and Item Sets (items presented).
This seems like a problem… I think the experiment would carry more weight if they varied one ‘difficulty’ parameter at a time. It’s pretty obvious (to me) that varying the success rate by any available means isn’t going to get useful results, since they could have a pirate pop up randomly every fixed fraction of games saying, “Ha Ha, You Lose, play again”.
In fact, maybe they should do that experiment as well!
[Comment removed by author]
Haskell has no syntax in the core language to sequence one expression after another.
It has quite a few alternatives actually. Depending what you mean by “syntax in the core language”, there are some things with specific grammar rules in the Haskell98 and Haskell2010 standards; there are some “userspace” functions/operators (i.e. their syntax is a special case of functions/operators) which are nevertheless mandated by those standards; there are some things which the de facto implementation GHC supports (e.g. via commandline flags); etc. Here are a few:
a : b is the expression a followed by the sequence of expressions b (all of the same type)a ++ b is the sequence a followed by the sequence b (again, of the same type)[a, b] is a sequence of the expression a followed by the expression b (of the same type)(a, b) is a sequence of the expression a followed by the expression b (can be different types)f . g is the expression g followed by the expression f (input and output types must coincide)g >>> f is the expression g followed by the expression f (same as above but their order flipped)a -< b is the expression b followed by the expression a (must have compatible input/output types)do { a; b } is the expression a followed by the expression b
f <$> x is the expression x followed by the expression f (must have compatible input/output types)

These all define a specific order on their sub-expressions. They’re not all identical, but they follow roughly similar usage:
a : b tells a Prolog-style interpreter to perform the computation/branch a before trying those in b
a ++ b generalises the above to multiple computations (the above is equivalent to [a] ++ b)
[a, b] is a specialisation of the above, equivalent to [a] ++ [b]
(a, b) generalises [a, b] to allow different types. We can use this to implement a linear sequence (it’s essentially how GHC implements IO). Somewhat surprisingly, and completely separately to anything IO related, it also represents parallel composition
f . g is a rather general form of composition
g >>> f is the same as above
a -< b is part of arrow notation and desugars to a mixture of sequential and parallel composition (using lambdas, >>>, (a, b), etc.)
do { a; b } is a generalisation of b . a, corresponding to join (const b <$> a), which is the most similar form to the ; operator of other languages you refer to: both because it has the same syntax (an infix ; operator) and a similar meaning (generalised composition). This can also be written as a >> b, and is related to a >>= b and a >=> b, which are also built-in sequencing syntax but didn’t seem worth their own entries.
f <$> x is generalised application of f to x. That generality also makes it a composition/pipeline operator
The reason I’ve listed all these isn’t so much to say “look, there are some!”; but more to point out how many different meanings the word “sequence” can have (a list of values, a composition of functions, a temporal ordering on side-effects, etc.); how many different implementations of sequencing we can build; and, most crucially, that they all seem to overlap and intermingle (e.g. the blurring of “container of values” with “context for computation”; how we can generalise a single thing like “composition” in multiple ways; how generalising seemingly-separate ideas ends up at the same result; etc.). This tells us that there’s something important lurking here. I don’t think investigating and harnessing this makes someone a wanker.
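To make the do { a; b } entry concrete outside Haskell: the same “sequence via bind” shape can be sketched in JavaScript with a hypothetical Maybe-style value. This is only an illustration of the pattern; Just, Nothing, bind, and then_ are my own names, not from any library:

```javascript
// A minimal Maybe-like value: either { ok: true, value } or { ok: false }.
const Just = (value) => ({ ok: true, value });
const Nothing = { ok: false };

// bind (Haskell's >>=): run k on the inner value, or short-circuit on failure.
const bind = (m, k) => (m.ok ? k(m.value) : Nothing);

// Haskell's  do { a; b }  is  a >>= \_ -> b : sequence a, discard its result, run b.
const then_ = (a, b) => bind(a, () => b);

// Example: parse two numbers in sequence; any failure aborts the whole chain.
const parseNum = (s) => {
  const n = Number(s);
  return Number.isNaN(n) ? Nothing : Just(n);
};

const addStrings = (x, y) =>
  bind(parseNum(x), (a) => bind(parseNum(y), (b) => Just(a + b)));
```

Here addStrings("2", "3") yields Just(5), while addStrings("2", "oops") short-circuits to Nothing without the second stage ever seeing a value.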
[Comment removed by author]
I’m an application programmer at Atlassian. A monad is a critical tool for code reuse in our applications. It’s not about PLT research or even evaluation order.
Monads only matter for representing sequential execution in extremely constrained languages, like haskell. (Some people believe monads are useful for other things, but I’m not interested in that debate, I’m just talking about where monads are certainly important.)
This is not true. Monads are critical for code reuse. I’ve used the concept of a monad in many areas, but explicitly and critically in Scala.
[Comment removed by author]
[Comment removed by author]
[Comment removed by author]
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO (IO m) k = IO (\ s -> case m s of (# new_s, a #) -> unIO (k a) new_s)
[Comment removed by author]
I’m not being pedantic, and your point is not clear. IO can be sequenced; this sequencing can be abstracted; code reuse is what is gained from the abstraction. That is the total relationship between IO and monad.
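For readers puzzling over the bindIO definition quoted above: it threads a state token through each step. As a rough sketch (not GHC’s actual machinery), the same shape can be written in JavaScript, with an explicit state value standing in for State# RealWorld and all names being illustrative:

```javascript
// An "IO-like" action is modeled as a function: state -> [newState, result].
// This mirrors bindIO's shape: run m, then feed its result and new state to k.
const bindIO = (m, k) => (s) => {
  const [newS, a] = m(s);   // run the first action against the current state
  return k(a)(newS);        // run the continuation against the updated state
};

const returnIO = (a) => (s) => [s, a];

// Example "effect" against a log array, standing in for the real world:
const putLine = (msg) => (s) => [[...s, msg], undefined];

// Sequencing two writes, then returning a pure value:
const program = bindIO(putLine("hello"), () =>
  bindIO(putLine("world"), () => returnIO(42)));
```

Running program([]) threads the state through both writes in order, which is exactly the sequencing the bindIO snippet encodes with its world token.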
[Comment removed by author]
Monad is about much more than IO. IO is about much more than monad.
Objects and classes have a different relationship.
[Comment removed by author]
This is not being pedantic, it is a very critical part of understanding monad and IO. I teach Haskell at work and have successfully corrected this mistake many times.
Suppose there existed a function that reversed a list. A few fruit grocers used this function to reverse a list of oranges. They also sometimes use it to reverse lists of apples. Other things happened with this function also, but we only know of these specific circumstances.
Suppose then someone came along and proclaimed, “the reverse function is all about fruit!” then they wrote an article about this new apparent fact. Would you be able to clearly see a categorical error occurring here? What would you say to the article author? Would you reverse a list of list of functions right in front of their face? Or reverse a list of wiggley-woos? What if that person then replied, “you’re just being pedantic”? Where would you take the discussion from here? Would you be the meany person who informs them that they have almost no grasp of the subject matter? It’s quite a bind to be in :)
That’s exactly the error being made here (among some others) and it is a very obvious error only to those who have a concrete understanding of what monad means. It’s not pedantic. It’s not “avoiding a debate.” It’s a significant categorical error, and it is very common among beginners. It limits any further understanding so significantly, that it is better to have no knowledge at all. This specific error is also commonly repeated among beginners, as they struggle and aspire to understand the subject, and to the point that it becomes very difficult to stamp out, even for many of those who know the subject well. The ultimate consequence is a net overall lack of progress in understanding, for absolutely everyone.
Who wants to contribute to that?
Haskell has no syntax in the core language to sequence one expression after another.
Yes it does: do-notation. You can even use semicolons if you don’t like newlines. It’s the syntax to sequence expressions which can be sequenced. You can’t use semicolons to sequence things that can’t be sequenced in other languages, either.
And why talk about Maybe but not MonadPlus, free monads, transformers…? All you know is Maybe and IO? Of course it’s boring to you. Instead of writing blog-sized posts about how blog tutorials don’t teach you everything, you could read up, but oh well, you do you.
By changing each stage to take and return a fat outer type holding the entire context, you can just as easily achieve the cool pipeline effect by defining >>= as function composition rather than bind.
With bind you don’t have to change each stage.
Understanding how to write programs which allow change without triggering catastrophic rewrites is pretty useful.
Understanding why some programs are easy to modify is pretty useful.
Having language to discuss why some programs are easy to modify and others are not; also pretty useful.
The original post is about how thinking in terms of Monads can make a program which is hard to modify into a program which is easy to modify, it’s a useful post.
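A small, hypothetical JavaScript sketch of that point: the plain stages below know nothing about failure, and bind supplies the null-propagation plumbing, so adding or removing a failure-aware step doesn’t force a rewrite of the others (all stage names are invented for illustration):

```javascript
// Plain stages, written with no knowledge of the pipeline's context:
const trim = (s) => s.trim();
const toInt = (s) => {
  const n = parseInt(s, 10);
  return Number.isNaN(n) ? null : n;   // the only stage that can fail
};
const double = (n) => n * 2;

// bind threads the "might be null" context between stages, so the
// stages themselves never have to check for it.
const bind = (x, f) => (x === null ? null : f(x));

const run = (input) => bind(trim(input), (s) => bind(toInt(s), (n) => double(n)));
```

run(" 21 ") produces 42; run(" nope ") produces null, and neither trim nor double had to change to support the failure path.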
Some people believe monads are useful for other things, but I’m not interested in that debate, I’m just talking about where monads are certainly important.
First of all, by far the most popular monadic interface in modern software development is not Haskell’s IO type, it’s JavaScript’s Promise type, together with similar systems for writing asynchronous logic in other languages. If we’re talking about use cases where monads are “certainly important,” I think it’s worth mentioning the large number of programmers writing monadic code on a daily basis in languages which certainly do not lack native support for semicolons.
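For readers who haven’t seen that correspondence spelled out: Promise.prototype.then behaves like monadic bind, flattening a callback that returns a Promise so stages chain without nesting. A small sketch (fetchUser and fetchScore are made-up stand-ins, not a real API):

```javascript
// .then is Promise's bind: given a -> Promise<b>, it flattens the result,
// so the chain below never produces Promise<Promise<...>>.
const fetchUser = (id) => Promise.resolve({ id, name: "user" + id });
const fetchScore = (user) => Promise.resolve(user.name.length * 10);

// Sequencing with "extensible semicolons": each .then waits for the previous stage.
const pipeline = (id) =>
  fetchUser(id)
    .then((user) => fetchScore(user))  // a -> Promise<b>, auto-flattened (bind)
    .then((score) => score + 1);       // plain a -> b also works (map/return)

// async/await is sugar over the same monadic structure:
const pipeline2 = async (id) => {
  const user = await fetchUser(id);
  const score = await fetchScore(user);
  return score + 1;
};
```

Both forms express the same sequenced computation; await is essentially do-notation for the Promise type.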
I love monads and find they’re actually among the most useful and important tools I’ve ever acquired as a programmer, but I agree that the PLT and functional programming communities could do a better job communicating exactly why monads are actually important. The use of monads as “extendable semicolons” does have some narrow but critically important use cases, such as asynchronous code, exception handling, and recursive backtracking, but I actually believe that the exotic forms of control flow you can express with monads are of only secondary importance.
In my experience, the most important consequence of modeling side effects with monads is that it allows you to reliably distinguish between pure and impure functions. The features which you claim make Haskell “extremely constrained” in fact give it an entirely new dimension of expressive power, because whereas most languages only have one form of function, which is implicitly allowed to perform side effects, Haskell has two forms of functions: functions which may perform side effects, and functions which may not. Given that a function’s interaction with an environment is an extremely important aspect of its semantics, this is information that you would be informally documenting and keeping track of anyway; Haskell just allows you to document it in a precise, machine-checked format with great integration with the compiler.
This immediately allows you to separate functions which perform IO from those which do not, but that’s not actually the coolest part. The coolest part is that once you start defining your own monad types, you can express much much more precise and interesting classes of side effects, like “a function that interacts only with a random number generator” or “a function that interacts only with my database state” or “a function which interacts only with a sequential-identifier generator.” This is the real power of monads: the ability to make fine-grained guarantees about the data dependencies and side effects of a function given only its type signature.
In my experience, the most important consequence of modeling side effects with monads is that it allows you to reliably distinguish between pure and impure functions. The features which you claim make Haskell “extremely constrained” in fact give it an entirely new dimension of expressive power, because whereas most languages only have one form of function, which is implicitly allowed to perform side effects, Haskell has two forms of functions: functions which may perform side effects, and functions which may not.
One nit here: the idea of separating pure and effectful operations is actually pretty old. You see this in Pascal and Ada and the like, where “functions” are pure and “procedures” are effectful. This is baked into the core language semantics. The different terms fell out of favor when C/C++ got big, and now people don’t really distinguish them anymore. But there’s no reason we couldn’t start doing that again, aside from inertia and stuff.
To my understanding you also don’t need monads to separate effects in pure FP, either; Eff has first-class syntax for effect handlers and takes measures to distinguish the theory from monads.
To my understanding you also don’t need monads to separate effects in pure FP, either;
Well, you need something. Proposing to have an effect system without monads is like proposing to do security without passwords: there are some interesting possibilities there, but you have to explain how you’re going to solve the problems that monads solve.
Your Eff paper refers to papers on effect tensors to justify the claim that effects can be easier to combine than monads, but then doesn’t seem to actually model those tensors? Their example of what combining effects looks like in practice seems to end in just letting them be composed in the same order that the primary code is composed, when the whole point of a pure language is to be able to get away from that. So while the language is pure at the level of individual effects, it seems to be effectively impure in terms of how composition of effects behaves?
[Comment removed by author]
It’s not specific to Javascript. The type Task<T> is the same interface in C#, Future<V> is the same thing in Java. The concept is generally useful in all languages. It’s useful even in Haskell, where Async allows the introduction of explicit concurrency, even though the runtime automatically does the work that Task<T> is mostly for in C# (avoiding blocking on threads).
In addition async/await is a monadic syntax, which is generally useful (as evidenced by it now being in C#, F#, Scala, Javascript, Python, and soon C++).
(LINQ in C# is another generally-useful monadic syntax, which is used for just about everything except doing IO and sequencing.)
My latest install has been Guix on top of Arch.
I’ve been running it as my daily driver for a couple months.
Pros:
Cons:
Overall, I like having it on my system, it’s especially nice for managing Emacs packages! I fall back onto Pacman frequently to work around the cons. I couldn’t run GuixSD as my daily driver.
The last time I tried Nix, everything in the store had to be world-readable, which made it rather questionable to manage things like private keys for TLS certificates and so on through Nix expressions. Is this an issue with Guix too?
We can expect a lot more of this, especially in new companies which never had a human middle management layer.
Even scratch is nothing but a bunch of structured data underneath, stored as text (with some blobs mixed in).
https://en.m.wikipedia.org/wiki/.sb2_file
A way to render it, and a way to write it. There’s nothing wrong with it.
Everything hangs on correctly applying governance tools and methods, and it can take a great deal of experience and a lot of failed experiments before you get that right. But the better you get at it, the more likely that you’ll really be in control of your project, in the sense of consistently steering it toward the optimal result.
Agile as described here is waterfall, but fast, with a huge emphasis on process and top-down control and where that control is exercised (at the top of the waterfall, i.e. at the start of each sprint).
Not gambling. Securities fraud! If you want to gamble you have to go find a slotmachine game with loot boxes.
This sounds like it aligns closely with the mission of the Lobsters community. I’ve emailed to explore options for collaboration. I’ve had to timebox my Lobsters coding to my Wed + Thu morning code and coffee time and it would be great if this foundation could fund some time from me or junior devs trying to build up their resumes.
I’m overall loving what’s in there. I’d definitely support it. It seems like a more realistic thing of hackers supporting other hackers instead of hoping society will change. Great stuff.
“The Foundation will help to create environments where hackers are welcomed, supported, nurtured and celebrated. Creating a safer, more supportive and accepting world for hackers will help to reduce depression and suicide among hackers, and enable hackers to live fuller and happier lives.”
I still think someone should bring up one point when stuff like the above is mentioned. The corporate media and Hollywood, the most powerful influencers, have totally redefined the label “hacker” to mean criminality, to the point where it’s probably beyond salvage. If anyone uses it, the laypeople hearing it will immediately have a negative mindset that creates a harder conversation for that person. Its constant association with evil by the media makes me think of it as the geek N-word or something, in terms of the average person’s negative usage or reactions, hackers arguing about why they identify with the positive version of it, and media fueling fires for ratings and profit. Although I got good at explaining the real meaning, I’ve found that there was no real effect among the hundreds of laypeople I tried that on. There’s not enough of us doing it to counter the media’s reach. Being an uncommon and marginalized group means that will remain true for a long while.
This is a marketing problem more than anything. A brand was trashed, but people keep using and defending it. Marketing practice (and results!) says we need new brands so we can start fresh in the minds of the audience and get broader support. I’ve been using the words thinker, inventor, technologist, and recently maker. Three already have positive connotations which correspond with what people will be doing on the software and hardware side. Lay people might be happy to invest in locations, equipment, and support for (those words) among the nation’s youth. I’ve found that maker generates confusion (“What’s that?”) that lets me explain the concept with positive examples from makerspaces. So, it’s weaker initially and requires a little work, but that can be as simple as linking to a story. So, there’s some options.
I say we keep the hacker term among ourselves while garnering mainstream support with the kind of words they understand and would back. I’ve already been doing it with positive results. I see others doing it, too, even though many wouldn’t call themselves hackers. They’re just folks recruiting youth to let them do group projects, dream big, and so on. Happens in many fields. It’s a proven model. If we use it, though, we might not call them hackerspaces, given that the name leaking out to external supporters could cause conflicts or damage. We’d have to use makerspaces, invention/technology centers, and so on… which again is already happening in many places. Hacker stays internal. Alternatively, we take those little conflicts, using them to educate people on the term with positive examples, risking losing support or funding on principle. If so, I argue we don’t explain the term: use examples of people who built amazing things that countered the status quo using their inquisitive attitude and deep understanding. A number of them showing up in the media over time, coming out of these (words here)-spaces, might positively define the hacker spirit in the new terms and/or slowly undo damage to the original term.
It would also help if we kept collectively pushing the media for a distinction between positive and criminal hackers. Showing more of the inventive ones, plus their disdain for the damaging ones. At least there are hackers on the cop shows saving peoples’ asses. That’s… something… I don’t think they understand good hacking, though. I see hints of it with characters that use tech or deep knowledge to bypass limitations of an obstacle or tool. Maybe hackers can keep coming up with ideas like that for major shows, forwarding them to the producers. Trickle examples into stories with wide audiences. Similarly, keep forwarding inventive folks to local and national media outlets so they highlight them, slipping in words like maker or hacker. We need a positive pushback against what the media is already doing, preferably in a way that they co-opt it into their own standard practice due to positive ratings. In the end, they’ll be taking credit for their show elements with us rolling our eyes, at least being grateful that we’re a bit safer and more appreciated for our efforts.
Just some thoughts I have after years of fighting this battle with the general public looking at what worked and didn’t. We need to do more of what works. It’s more about perception and influences than facts or tech. Our methods must be likewise. Just the way it is.
I am not enthusiastic about the word “hacker”. I like some versions of the idea, but I don’t think the word is redeemable, the mainstream is too big and the term is too entrenched there.
In at least this respect I think that the lobste.rs community does not align with the No Starch Press Foundation.
The word hacker is used 40 times in that announcement and clearly references an intention to protect people who otherwise might be prosecuted by peers and mentors.
A hacker likes to push boundaries, pick locks (for fun), and find ways to control hardware and make it do things that it wasn’t intended to do.
That implies a certain rule-breaking attitude that is more political (and includes wearing the divisive label) than the somewhat broader lobste.rs community.
Fair point. We’ve got a few people who live that ethos and love the word, but it’s definitely not universal.
Appreciate the link. I also like the Chicago vs Bay Area write-up. Glad you’re putting this stuff out there.
I think we also have a bigger, recurring problem where technical people think they’ll solve all their problems with technical arguments or technology itself. Most of these are people problems, esp with the word “hacker.” They gotta learn it takes completely different skills to win political and media battles. Some know and do it, but they’re rare. I almost want to joke they’ll be better off accomplishing the OP’s mission if they hit Barnacles instead, reading everything under the tags marketing, business, and pricing. That kind of stuff, online or actual books, focused on non-profit goals with lots of practice. Then, they’ll get some stuff done.
Past few years gave me the hard realization that most of us wanting to change things were building the wrong skills. Gotta fix that in near future.
EDIT: Speaking of your linked article, another one just showed up on HN about Ghost that I thought was a good example of some of your points. Worth a Barnacles submission. :)
ChromeOS with a full Linux terminal environment would handle 100% of my computing needs and would “just work”…but it involves selling even more of my soul to Google than I already do, so, I’m conflicted.
EDIT: Well, not 100% because I need to be able to run VMs. But, you know, 80%.
I guess there’s still the faint hope that VMware Workstation for Linux will run inside ChromeOS eventually.
If Google was good, then Google would push their drivers for Chromebooks upstream. Google does not push their drivers for Chromebooks upstream.
While I in general agree with the sentiment, there is one sticky part. If automated tests are code, don’t they need test coverage?
The answer is yes, but it’s a rabbit hole you might not expect a junior developer to see the way out of.