Apologies for commenting on the form rather than content here, but is wikia.com really the place this community decided to organize around? Seeing TV shows and cosmetic products for teenagers advertised next to an article on formal methods is… a strange experience.
That’s strange, but the submission has many links, which is mostly a positive. The only negative I had, which depends on my goal of industrial adoption, is that Z notation proved too hard for a lot of programmers to understand in a lot of projects. Alternatives, many coming later, also had a better story for automatic verification of code against specs, generation of code from specs, prover integration, and so on. It’s still interesting for people studying various kinds of logic or the historical use of formal methods. I keep Z papers in my collection just in case their work ever has ideas for solving a new problem.
I wasn’t exactly sure of the best link to post here. There’s a whole book that’s available online, but I thought a portal-type link might be a better entry point.
The Way of Z: https://staff.washington.edu/jon/z-book/
@nickpsecurity, what of Z’s successors would you say did a better job with being understandable to humans?
The main one these days is Alloy. Jackson designed it specifically to address two problems he had with Z, which were that it was too intimidating to beginners and that a lot of valid Z specs couldn’t be model-checked.
Survey is here.
It largely failed due to its learning curve. The B method did, too. Both did improve software quality, though. I had a resource diving into the various methods in a detailed comparison whose criteria I want everyone doing model-checking or formal verification to consider and weigh in on. Turns out I didn’t submit it even though I thought I did. Oops…
I’ve been referencing the results in comments here: Abstract State Machines and TLA+/PlusCal came out easiest to use with high cost-benefit. ASM’s and PlusCal can look pretty similar to each other and to FSM’s. TLA+ even has similar foundations to Z minus the complexity. If you like Z, you might also like the concept of TLZ, where Lamport combined Z with temporal logic. That link has him saying the Z and CSP communities ignored his TLZ work twice. So, he ditched Z and created something better: TLA+. It’s going mainstream with non-mathematicians picking it up thanks to the work of folks like hwayne.
Since I didn’t submit that survey and it’s late, I’ll submit it in the morning around 10-11am as usual. That way people can check it out at lunch time. Stay tuned. :)
The send button on the conversation should be un-clickable if a message has already been sent but has not received a response
Wouldn’t this mean that if you never get a response, you can’t use the app anymore?
Yes, but moreover, it does not make the client request idempotent. The definition given for idempotence is “the property of certain operations whereby they can be applied multiple times without changing the result beyond the initial application”. Making the button un-clickable doesn’t make it possible to apply the operation multiple times without changing the result; it prevents the operation from being applied more than once via the UI.
To make chat message sending idempotent, one could add a unique id or sequence number to each message so that it’s possible to detect and drop the duplicate message.
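A minimal sketch of that dedup-by-id idea (the class and method names here are hypothetical, and a real server would persist and eventually expire the seen ids rather than keep them all in memory):

```python
import uuid

class ChatServer:
    """Toy server that deduplicates messages by a client-supplied id."""

    def __init__(self):
        self.seen_ids = set()
        self.messages = []

    def receive(self, msg_id, text):
        # Idempotent: re-delivering the same message id has no further effect.
        if msg_id in self.seen_ids:
            return "duplicate-dropped"
        self.seen_ids.add(msg_id)
        self.messages.append(text)
        return "accepted"

server = ChatServer()
msg_id = str(uuid.uuid4())  # client generates the id once per logical message

assert server.receive(msg_id, "hello") == "accepted"
# A retry after a timeout (e.g. the user clicking Send again) is safely dropped:
assert server.receive(msg_id, "hello") == "duplicate-dropped"
assert server.messages == ["hello"]
```

With this in place, the send button can stay clickable: a retry of the same message is harmless, which is exactly what idempotence buys you.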
Good point. Even nowadays, the web still relies on the “refresh” button, and system issues are still magically fixed by a reboot :)
As a European, I don’t quite get it: Americans seem to be concerned with net neutrality, while not protesting huge monopolistic corporations (the gatekeepers) removing some controversial users at their own judgement and with no way to appeal. Are individuals excluded from net neutrality?
I’m not very familiar with the legal details, but I assume the distinction is general access to the internet being considered a utility, while access to platforms being considered something like a privilege. E.g. roads shouldn’t discriminate based on destination, but that doesn’t mean the destination has to let you in.
edit: As to why Americans don’t seem as concerned with it (which I realize I didn’t address): I think most people see it as a place, like a restaurant. You can be kicked out if you are violating policies or otherwise disrupting their business, which can include making other patrons uncomfortable. Of course there are limits, which is why we have anti-discrimination laws.
Well, they’re also private, for-profit companies that legally own and sell the lines. So, there’s another political angle where people might vote against the regulations under the theory that government shouldn’t dictate how you run your business or use your property, especially if it costs you money. Under the theory of benefiting owners and shareholders, these companies are legal entities specifically created to generate as much profit from those lines as possible. If you don’t like it, build and sell your own lines. That’s what they’d say.
They don’t realize how hard it is to deploy an ISP on a shoe-string budget to areas where existing players already paid off the expensive part of the investment, can undercut you into bankruptcy, and (per people claiming to be ISP founders on Hacker News) will even cut competitors’ lines “accidentally” so their own customers leave them. In the last case, it’s hard to file and win a lawsuit if you just lost all your revenue and opponent has over a billion in the bank. They all just quit.
…existing players … (per people claiming to be ISP founders on Hacker News) will even cut competitors’ lines “accidentally” so their own customers leave them.
One of them described a situation with a contracted construction crew where the guy doing the digging didn’t speak English well. They were supposedly digging for the incumbent but dug through his line. He said he pointed out that it was clearly marked with paint or something. The operator claimed he thought that meant there wasn’t a line there.
That’s a crew that does this work in that area for a living not knowing what a line mark means. So, he figured they did it on purpose. He folded since he couldn’t afford to sue them. Another mentioned them unplugging their lines in exchanges or something that made their service appear unreliable. Like the rest, they’d have to spend money they didn’t have on lawyers who’d have to prove (a) it happened and/or (b) it was intentional.
The landmark case in the United States is throttling of Netflix by Comcast. Essentially, Comcast held Netflix customers hostage until Netflix paid (which they did).
It’s important to understand that many providers (Comcast, AT&T) also own the channels (NBC and CNN, respectively). They have an interest in charging less for their own and their partners’ content, and more for their competitors’ content, while colluding to raise prices across the board (which they have done in the past with television and telephone service).
Collectively, they all have an interest in preventing new entrants to the market. The fear is that big players (Google, Amazon) will be able to negotiate deals (though they’d probably prefer not to), and new or free technologies (like PeerTube) will get choked out.
Net neutrality is somewhere where the American attitude towards corporations being able to do whatever to their customers conflicts with the American attitude that new companies and services must be able to compete in the marketplace.
You’re right to observe that individuals don’t really enter into it, except that lots of companies are pushing media campaigns to sway public opinion towards their own interests. You’re seeing those media campaigns leaking out.
Switching to the individual perspective.
I just don’t want to pay more for the same service. In living memory, Americans have seen their gigantic monopolistic telecommunications company get broken up, and seen prices for services drop 100-fold, more or less as a direct consequence of that action.
As other posts have noted, the ISP situation in the US is already pretty dire unless you’re a business. Internet providers charge whatever they can get away with and have done an efficient job of ensuring customers don’t have alternatives. Telephone service got regulated, but internet service did not.
Re-reading your post after diving into this one… We’re not really concerned about the same gatekeepers. I don’t think any American would be overly upset to see players like Amazon, Facebook, Google, Twitter, and Netflix go away, and I wouldn’t be surprised to see one or more of those guys implode, as long as they don’t get access to too much of the infrastructure.
Right-leaning US Citizen here. I’ll attempt to answer this as best as I can.
Net neutrality is being pushed by the media because it “fights discrimination”, and they blame the “fascist, nazi right” for its repeal (and they’re correct, except for the “fascist, nazi” bit). But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
I can’t speak to why open-source advocates are also pushing for net neutrality, because (in my opinion) the government shouldn’t be involved in how much internet costs. I do remember this article was moderately interesting, saying that the majority of root DNS servers are run by US companies. But that doesn’t really faze me. As soon as people start censoring, they get backlash, whether the media covers it or not.
Side note, the reason you don’t see the protests against the “gatekeepers” is that most of the mainstream media isn’t accurately covering the reaction of the people to the censorship. I bet you didn’t know that InfoWars was the #1 news app with 5 stars on the Apple app store within a couple of weeks of them getting banned from Facebook, etc. I don’t really have any opinion about Alex Jones (lots of people on the right don’t agree with him), but you can bet I downloaded his app when I found out he got banned.
P.S. I assumed that InfoWars was what you were referring to when you said “removing some controversial users” P.P.S. I just checked the app store again, and it’s down to #20 in news, but still has 5 stars.
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
I think this is too optimistic. I live in Chicago, the third biggest city in the country and arguably the tech hub of the midwest. In my building I get to choose between AT&T and Comcast. I’m considered lucky: most of my friends in the city get one option, period. If their ISP starts doing anything shady they don’t have an option to switch, because there’s nobody they can switch to.
It’s interesting to contrast this to New Zealand, where I live in a town of 50,000 people and have at least 5 ISPs I can choose from. I currently pay $100 NZ a month for an unlimited gigabit fibre connection, and can hit ~600 mbit from my laptop on a speed test. The NZ government has intervened heavily in the market, effectively forcing the former monopolist (Telecom) to split into separate infrastructure (Chorus) and services (Telecom) companies, and spending a lot of taxpayer money to roll out a nationwide fibre network. The ISPs compete on the infrastructure owned by Chorus. There isn’t drastic competition on prices: most plans are within $10-15 of each other, on a per month basis, but since fibre rolled out plans seem to have come down from around $135 per month to now around $100.
I was lucky to have decent internet through a local ISP when I lived in one of Oakland’s handful of apartment buildings, but most people wouldn’t have had that option. I think the ISP picture is a lot better in NZ. Also, net neutrality is a non-issue, as far as I know. We have it, no-one seems to be trying to take it away.
I’m always irritated that there are policies decried in the United States as “impossible” when there are demonstrable implementations of them elsewhere.
I can see it being argued that the United States’s way is better or something, but there are these hyperbolic attacks on universal health care, net neutrality, workers’ rights, secure elections, etc that imply that they are simply impossible to implement when there are literally dozens of counterexamples…
At the risk of getting far too off topic:
One of the members of the board at AT&T was the CEO of an insurance company, and someone sits on the boards of both Comcast/NBC and American Beverages. The head of the FCC was high up at Verizon.
These are some obvious, verifiable, connections based in personal interest. Not implying that it’s wrong or any of those individuals are doing anything which is wrong, you’ve just gotta take these ‘hyperbolic attacks’ with a grain of salt.
Oh yeah, it’s infuriating. It helps to hit them with examples. Tell them the media doesn’t talk about them since they’re all pushing something. We all know that broad statement is true. Then, briefly tell them the problems that we’re trying to solve with some goals we’re balancing. Make sure it’s their problems and goals. Then, mention the solution that worked elsewhere which might work here. If it might not fit everyone, point out that we can deploy it in such a way that its specifics are tailored more to each group. Even if it can’t work totally, maybe point out that it has better cost-benefit than the current situation. Emphasize that it gets us closer to the goal until someone can figure out how to close the remaining gap. Add that it might even take totally different solutions to address other issues, like solving big-city vs rural Internet. If it worked and has better cost-benefit, then we should totally vote for it to do better than we’re doing. Depending on the audience, you can add that we can’t have (country here) doing better than us since “This is America!” to foster some competitive, patriotic spirit.
That’s what I’ve been doing as part of my research talking to people and bouncing messages off them. I’m not any good at mass marketing, outreach or anything. I’ve just found that method works really well. You can even be honest since the other side is more full of shit than us on a lot of these issues. I mean, them saying it can’t exist vs working implementations should be an advantage for us. Should. ;)
Beautifully said.
My family’s been in this country since the Mayflower. I love it dearly.
Loving something means making it better and fixing its flaws, not ignoring them.
Thanks, and yes. I did think about leaving for a place maybe more in line with my views. That last thing you said is why I’m still here. If we fix it, America won’t be “great again”: it would be fucking awesome. If not for us, then for the young people we want to be able to experience that. That’s why I’m still here.
Native Texan/Austinite here. Texas is the South, Southwest, or just Texas. All the rest of y’all are just Yankees. ;)
But if their ISP starts doing anything shady, they’ll surely get some backlash; even if they can’t switch, they can complain.
They’ve been complaining for decades. Nothing happens most of the time. The ISP’s have many lobbyists and lawyers to insulate them from that. The big ones are all doing the same abusive practices, too. So, you can’t switch to get away from it.
Busting up AT&T’s monopoly got results: lower costs, better service, better speeds, etc. Net neutrality got more results. I support more regulation of these companies and/or socialized investment to replace them, like the gigabit for $350/mo in Chattanooga, TN. It’s 10Gbps now, I think, but I don’t know at what price.
Actually, I go further due to their constant abuses and bribing of politicians: I’m for having a court seize their assets, convert them to nonprofits, and put new management in charge, if at all possible. It would send a message to other companies that think they can damage consumers and mislead regulators with immunity from consequences.
The problem is that corporate fines are generally a small percentage of profits.
https://www.theguardian.com/world/2011/apr/03/us-bank-mexico-drug-gangs https://www.huffingtonpost.com/dana-radcliffe/should-companies-obey-the-law_b_1650037.html
What incentive does the ISP have to change? Unless you can complain to some higher authority (FCC, perhaps) then there is no reason for the ISP to make any changes even with backlash. I’d be more incentivized to complain if there was at least some competition.
Net neutrality is being pushed by the media because it “fights discrimination”, and they blame the “fascist, nazi right” for it’s repeal
Nobody says this. It’s being pushed because it prevents large corporations from locking out smaller players. The Internet is a great economic equalizer: I can start a business and put a website up and I’m just as visible and accessible as Microsoft.
We don’t want Microsoft to be able to pay AT&T to slow traffic to my website but not theirs. It breaks the free market by allowing collusion that can’t be easily overcome. It’s like the telephone network; I can’t go run wires to everyone’s house, but I want my customers to be able to call me. I don’t want my competitors to pay AT&T to make it harder to call me than to call them.
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
That assumes people have a choice. They very often don’t. Internet service has a massively high barrier to entry, similar to a public utility. Most markets in the United States have at most two providers (both major corporations opposed to net neutrality). Very, very rarely is there a third.
More importantly, there are only five tier-1 networks in the United States. Five. It doesn’t matter how many local ISPs there are; without Net Neutrality, five corporations effectively control what can and can’t be transmitted. If those five decide something should be slowed down or forbidden, there is nothing I can do. Changing to a different provider won’t do a thing.
(And of those five, all of them donate significantly more to one major political party than the other, and the former Associate General Counsel of one of them is currently chairman of the FCC…)
I can’t speak to why open-source advocates are also pushing for net neutrality, because (in my opinion) the government shouldn’t be involved in how much internet costs.
Net neutrality says nothing about how much it costs. It just says you can’t charge different amounts based on content. It would be like television stations charging more money to Republican candidates to run ads than to Democratic candidates. They’re free to charge whatever they want; they’re not free to charge different people different amounts based on the content of the message.
Democracy requires communication. It does no good to say “freedom!” if the major corporations can effectively silence whoever they want. “At least it’s not the government” is not a good defense of stifling public debate.
And there’s a difference between a newspaper and a television/radio station/internet service. I can buy a printing press and make a newspaper and refuse to carry whatever I want. There are no practical limits to the number of printing presses in the country.
There is a limited electromagnetic spectrum. Not just anyone can broadcast a TV signal. There is a limit to how many cables can be run on utility poles or buried underground. Therefore, discourse carried over those media is required to operate more in the public trust than others. As they become more essential to a healthy democracy, that only becomes more important. It’s silly to say “you still have freedom of speech” if you’re blocked from television, radio, the Internet, and so on. Those are the public forums of our day. That a corporation is doing the blocking doesn’t make it any better than if the government were to do it.
Side note, the reason you don’t see the protests against the “gatekeepers” is that most of the mainstream media isn’t accurately covering the reaction of the people to the censorship.
There’s a big difference between Twitter not wanting to carry Alex Jones and net neutrality. Jones is still free to go start up a website that carries his message; without Net Neutrality, not only could he be blocked from Twitter, but the network itself could make his website inaccessible.
There is no alternative to the Internet. You can’t build your own Internet. Without mandating equal treatment of traffic, we hand the Internet over solely to the big players. Preventing monopolistic and oligarchic control of public discourse is a valid use of government power. It’s not censorship, it’s the exact opposite.
That assumes people have a choice. They very often don’t.
This was also brought up by @hwayne, @caleb and @friendlysock, and is not something that occurred to me. I appreciate all who are mentioning this.
More importantly, there are only five tier-1 networks in the United States.
Wow, I did not know that. I can see that as a legitimate reason to want net neutrality. But, I also think that they’ll piss off a lot of people if they can stream CNN but not InfoWars.
It just says you can’t charge different amounts based on content.
I understood it to also mean that you couldn’t charge customers differently because of who they are. Also, wouldn’t things like Tor mitigate that?
“At least it’s not the government” is not a good defense of stifling public debate.
I completely agree. But in the US we have a free market (at least, we used to), and that means that the government is supposed to stay out of it as much as possible.
Preventing monopolistic and oligarchic control of public discourse is a valid use of government power.
I also agree. But these corporations (the tier-1 ISPs) haven’t done anything noticeable to me to limit my enjoyment of conservative content, and I’m pretty sure that they would’ve by now if they wanted to.
The reason I oppose net neutrality is more that I don’t think the government should control it than that I think AT&T and others should.
not only could he be blocked from Twitter, but the network itself could make his website inaccessible.
But they haven’t.
edit: how -> who
Even though I favor net neutrality, I appreciate you braving the conservative position on this here on Lobsters. I did listen to a lot of them. What I found is most had reasonable arguments but had no idea what ISP’s did, are doing, that they’re themselves paying Tier 1’s, etc. Their media sources’ bias (all have bias) favoring ISP’s for some reason didn’t tell them any of it. So, even if they’d have agreed with us (maybe, maybe not), they’d have never reached those conclusions since they were missing crucial information to reflect on when choosing to regulate or not regulate.
An example is one telling me companies like Netflix should pay more to Comcast per GB or whatever since they used more. The guy didn’t know Comcast refuses to do that when paying Tier 1’s, negotiating transit agreements instead that work entirely differently. He didn’t know AT&T refused to give telephones or data lines to rural areas even if they were willing to pay what others did. He didn’t know they could roll out gigabit today for the same prices but intentionally kept his service slow to increase profit, knowing he couldn’t switch for speed. He wasn’t aware of most of the abuses they were doing. He still stayed with his position, since that guy in particular went heavily with his favorite media folks. However, he didn’t like any of that stuff, which his outlets never even told him about. Even if he disagrees, I think he should disagree based on an informed decision if possible, since there are plenty of smart conservatives out there who might even favor net neutrality if there’s no better alternative. I gave him a chance to do that.
So, I’m going to give you this comment by @lorddimwit quickly showing how they ignored the demand to maximize profit, this comment by @dotmacro showing some abuses they do with their market control, and this article that gives a nice history of what the free market did with each communications medium and the damage that resulted.

Also note that the Internet itself was an open, free-if-you-have-a-wire system that competed with the proprietary, charge-per-use, lock-them-in-forever-if-possible systems the private sector was offering. It smashed them so hard you might have never heard of them, or have forgotten a lot about them, depending on your age. It also democratized more goods than about anything other than maybe transportation. We should probably stick with the principles that made that happen to keep innovation rolling. Net neutrality was one of them; it was practiced informally at first, then put into law as the private sector got too much power and was abusing it. We should keep doing what worked instead of the practices ISP’s want, which didn’t work but will increase their profits at our expense for nothing in return. That is what they want: give us less, or as little improvement in every way as possible over time, while charging us more. It’s what they’re already doing.
I read the comments, and I read most of the freecodecamp article.
I like the ideal of the internet being a public utility, but I don’t really want the government to have that much control.
I think the real problem I have with government control of the internet is that I don’t want the US to end up like China, with large swaths of the internet completely blocked.
I don’t really know how to solve our current problems. But, like @jfb said elsewhere in this thread, I don’t think that net neutrality is the best possible solution.
Also note that the Internet itself was an open, free-if-you-have-a-wire system that competed with the proprietary, charge-per-use, lock-them-in-forever-if-possible systems the private sector was offering. It smashed them so hard you might have even never heard of them or forgotten a lot about them depending on your age.
I might recognize a name, but I probably wasn’t even around yet.
So, I’m going to give you…
Thanks for the info, I’ll read it and possibly form a new opinion.
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
What obvious reasons? Because customers will switch providers if they don’t treat all traffic equally? That would require (a) users are able to tell if a provider prioritizes certain traffic, and (b) that there is a viable alternative to switch to. I have no confidence in either.
I don’t personally care if they prioritize certain websites, but I sure as hell care if they block something.
As far as I’m concerned, they can slow down Youtube by 10% for conservative channels and I wouldn’t give a damn even though I watch and enjoy some. What really bothers me is when they “erase” somebody or block people from getting to them.
well, you did say they have an incentive to provide “equal service,” so i guess you meant something else. net neutrality supporters like me aren’t satisfied with “nobody gets blocked,” because throttling certain addresses gives big corporations more tools to control media consumption, and throttling has similar effects to blocking in the long term. i’m quite surprised that you’d be fine with your ISP slowing down content you like by 10%… that would adversely affect its popularity compared to the competitors your ISP deems acceptable, and certain channels would go from struggling to broke and be forced to close down.
Well, I have pretty fast internet, so 10% wouldn’t be terrible for me. However, I can see how some people would take issue with such a slowdown.
I was using a bit of an extreme example to illustrate my point. What I was trying to say was that they can’t really stop people from watching the content that they want to watch.
I recall, but didn’t review, a study saying half of web users wanted the page loaded in 2 seconds. Specific numbers aside, I’ve been reading that kind of claim from many people for a long time: a site taking too long to load, being sluggish, etc. makes them miss lots of revenue. Many will even close down. So, the provider of your favorite content being throttled for even two seconds might kill half their sales, since Internet users expect everything to work instantly. Can they operate with a 50% cut in revenue? Or maybe they’re bootstrapping up a business with a few hundred or a few grand but can’t afford to pay to avoid artificial delays. Can they even become the content provider you liked if they have to pay hundreds or thousands extra, which is just extra profit? I say extra profit since ISP’s already paid for networks capable of carrying it out of your monthly fee.
yeah, the shaping of public media consumption would happen in cases where people don’t know what they want to watch or don’t find out about something that they would want to watch
anti-democratic institutions already shape media consumption and discourse to a large extent, but giving them more tools will hurt the situation. maybe it won’t affect you or me directly, but sadly we live in a society so it will come around to us in the form of changes in the world
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
Most customers have exceedingly limited options in their area, and they’re not going to switch houses because of their ISP. Especially in apartment complexes, you see cases where, say, Comcast has the lockdown on an entire population and there really isn’t a reasonable alternative.
In a truly free market, maybe I’d agree with you, but the regulatory environment and natural monopolistic characteristics of telecomm just don’t support the case.
Most customers have exceedingly limited options in their area, and they’re not going to switch houses because of their ISP.
That’s a witty way of putting it.
But yeah, @lorddimwit mentioned the small number of tier-1 ISPs. I didn’t realize there were so few, but I still think that net neutrality is overreaching, even if it’s less than I originally thought.
Personally, I feel that net neutrality, such as it is, would prevent certain problems that could be better addressed in other, more fundamental ways. For instance, why does the US allow the companies that own the copper to also own the ISPs?
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
Awkward political jabs aside, most of your statements imply that you believe customers are free to choose who they get their internet from, which is just plain incorrect. Whatever arguments you want to make against net neutrality, there is one indisputable fact that you cannot just ignore or paper over:
ISPs do not operate in a free market.
In the vast majority of the US, cable and telephone companies are granted local monopolies in the areas they operate. That is why they must be regulated. As the Mozilla blog said, they have both the incentive and means to abuse their customers and they’ve already been caught doing it on multiple occasions.
most of your statements imply that you believe customers are free to choose who they get their internet from, which is just plain incorrect
I think you’re a bit late to the party, I’ve conceded that fact already.
All of that is gibberish. Net Neutrality is being pushed because it creates a more competitive marketplace. None of it has anything to do with professional liar Alex Jones.
But without net neutrality, the ISPs still have an incentive to provide equal service, because otherwise they’ll lose customers (for obvious reasons).
That’s not how markets work. And it’s not how the technology or permit process for ISPs works. There is very little competition among ISPs in the US market.
Hey, here’s a great example from HN of the crap they pull without net neutrality. They advertised “unlimited,” throttled it secretly, admitted it, and forced them to pay extra to get actual unlimited.
@lorddimwit, add this to your collection. Throttling and fake “unlimited” have been going on a long time, but here they could’ve gotten people killed by doing it to first responders. I’d have thought they’d avoid that just for PR reasons, or to avoid local and government regulation, if nothing else.
I can’t speak to why open-source advocates are also pushing for net neutrality, because (in my opinion) the government shouldn’t be involved in how much internet costs.
It’s not about how much internet costs, it’s about protecting freedom of access to information, and blocking things like zero-rated traffic that encourage monopolies and discourage competition. If I pay for a certain amount of traffic, ISPs shouldn’t be allowed to turn to Google and say “want me to prioritize YouTube traffic over Netflix traffic? Pay me!”
Net neutrality is being pushed by the media because it “fights discrimination”, and they blame the “fascist, nazi right” for its repeal (and they’re correct, except for the “fascist, nazi” bit).
Where on earth did you hear that? I sure hope you’re not making it up—you’ll find this site doesn’t take too kindly to that.
I might’ve been conflating two different political issues, but I have heard “fascist” and “nazi” used to describe the entire right wing.
A quick google search for “net neutrality fascism” turned this up https://motherboard.vice.com/en_us/article/kbye4z/heres-why-net-neutrality-is-essential-in-trumps-america
“With the rise of Trump and other neo-fascist regimes around the world, net neutrality will be the cornerstone that activists use to strengthen social movements and build organized resistance,” Wong told Motherboard in a phone interview. “Knowledge is power.”
You assume that net neutrality is a left-wing issue, which it’s not. It actually has bipartisan support. The politicians who oppose it have very little in common, aside from receiving a large sum of donations from telecom corporations.
As far as terms like “fascist” or “Nazi” are concerned—I think they have been introduced into this debate solely to ratchet up the passions. It’s not surprising that adding these terms to a search yields results that conflate the issues.
I’ll add, on your first point, that conservatives who are pro-market are almost always pro-competition. They expect the market will involve competition driving up what’s offered, driving its cost down, and so on. Both the broadband mandate and net neutrality achieved that, with an explosion of businesses and FOSS offering about anything one can think of.
The situation still involves 1-3 companies available to most consumers that, like a cartel, work together to not compete on lowering prices, increasing service, and so on. Net neutrality reduced some of the predatory behavior that cartel market was engaging in. They still made about $25 billion in profit between just a few companies due to anti-competitive behavior. Repealing net neutrality in an anti-competitive market will have no positives for consumers but will benefit roughly three companies by letting them charge more for the same or less service.
Bad for conservatives’ goals of market competition and for benefiting conservative voters.
One part of it is that we already have net neutrality, and it’s easier to try to hang on to a regulation than to create a new one.
Lesson 5: Extract complex conditionals and test them in isolation
There’s a tension here I’ve not seen many people talk about. In order to test an extracted conditional in isolation you need to make it exportable from your module. But that increases the “public API” of your module, which encourages more coupling. But if you keep the function private to the module then you can’t test it.
At least in the Ruby world people seem to err on the side of not testing private methods and only testing public behavior. I wonder if this is a sign we need different testing techniques.
I was going to say ‘Property based testing is pretty good for this!’ and then promptly realised who I was replying to…
In any case, Property Based Testing /is/ in my view a good fit for this. Taking the example from the article, the invariant of the system is basically that if you’re in the USA and under 21, you shouldn’t be able to purchase a beer, ever. To me that feels actually pretty simple to express using the aforementioned technique.
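That invariant reads almost directly as a property test. A library like Hypothesis would generate the inputs and shrink counterexamples; here is a stdlib-only sketch, where `can_purchase_beer` is a hypothetical stand-in for the article’s extracted conditional:

```python
import random

def can_purchase_beer(country: str, age: int) -> bool:
    """Hypothetical extracted conditional: US buyers must be 21+."""
    if country == "USA":
        return age >= 21
    return age >= 18  # assumed rule for everywhere else

def test_underage_us_buyer_never_allowed(trials: int = 1000) -> None:
    # The property: in the USA, anyone under 21 can never buy beer.
    for _ in range(trials):
        age = random.randint(0, 20)
        assert not can_purchase_beer("USA", age)

test_underage_us_buyer_never_allowed()
```

A real property-based testing library adds edge-case generation and shrinking, but the shape is the same: state the invariant once and let the tool hunt for counterexamples.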
Last week I did the TLA+ workshop, which was a blast. Everybody loved it and thought it was going to change how they programmed! This week I’m taking the feedback and improving the workshop, like expanding some of my explanations and fleshing out exercises.
I’m also looking for corporate opportunities, so if you or anyone you know would be interested in using it at work, get in touch :)
Funwise, I’m teaching myself minizinc and constraint programming. It’s fun stuff but I don’t think it’s that applicable to the kind of work I do. But I might try to throw up a demo with a physics simulation.
Never heard of it, but it seems like a super interesting approach to interactive environments. I cannot help but remember this talk by Bret Victor about how we have been programming in almost-anachronistic ways, with no innovation in the interfaces.
There’s nothing obsolete about text. Visual languages don’t work. They’ve been tried hundreds of times, to no avail, because GUIs are fundamentally bad user interfaces for experienced users. Text is a better interface for power users, and programming languages are for power users.
Why can’t I re-sort the definitions in my source instead of scrolling around then? Why is it hard to see a dependency graph for all my functions? Why do I have to jump between files all the time? Text - an interface for linear presentation of information - is fundamentally a kludge for code, which is anything but linear.
Why can’t I re-sort the definitions in my source instead of scrolling around then?
Sort them by what? It wouldn’t be difficult to write a script using the compiler module of Python to reorder the declarations in your file in an order you chose, which you could then use to replace the text of a buffer in your text editor. But usually I’d suggest what you want is to see a list of definitions in a particular order, which you could then use to jump to the definitions.
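A minimal sketch of that script, using Python’s `ast` module (the toy `source` string is just for illustration; `ast.unparse` needs Python 3.9+, and comments and original formatting are lost in the round-trip, which is one reason the jump-to-definition list is usually the better tool):

```python
import ast

# Toy input standing in for a real module file.
source = '''
def zeta(): return 1

def alpha(): return 2

X = 3
'''

tree = ast.parse(source)
# Separate top-level definitions from everything else, sort by name.
defs = [n for n in tree.body if isinstance(n, (ast.FunctionDef, ast.ClassDef))]
others = [n for n in tree.body if not isinstance(n, (ast.FunctionDef, ast.ClassDef))]
defs.sort(key=lambda n: n.name)

# Rebuild the module and emit it as text again.
reordered = ast.Module(body=defs + others, type_ignores=[])
print(ast.unparse(reordered))  # alpha now comes before zeta
```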
In every case that I’ve seen of not using plain text, it inevitably becomes inscrutable. What is actually in my Smalltalk/Lisp image? What is actually there? What can people get out of it later when I deploy it?
Why is it hard to see a dependency graph for all my functions?
Because nobody has written something that will take your source files, determine their dependencies, and produce the DOT output (a very popular text-based format for graphs, far superior in my opinion to any binary graph description format) for that graph? It’s not like it’s particularly difficult.
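To illustrate how little machinery this takes, here is a hedged sketch of such a tool for Python: extract a function-level call graph with the `ast` module and print it as DOT. The `source` string is a toy input; a real tool would walk whole files and handle methods, aliasing, and imports.

```python
import ast

source = '''
def parse(s): return validate(s)
def validate(s): return s.strip()
def main(): print(parse(" hi "))
'''

tree = ast.parse(source)
edges = set()
# For each top-level function, record every plain-name call it makes.
for fn in [n for n in tree.body if isinstance(n, ast.FunctionDef)]:
    for node in ast.walk(fn):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            edges.add((fn.name, node.func.id))

# Emit the graph in DOT format, ready for graphviz.
print("digraph deps {")
for caller, callee in sorted(edges):
    print(f'  "{caller}" -> "{callee}";')
print("}")
```

Piping that output through `dot -Tsvg` would render the dependency graph being asked for.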
Why do I have to jump between files all the time?
Because it turns out it’s useful to organise things into parts. Because it turns out it’s useful to be able to parallelise compilation and not reparse every bit of code you’ve ever written every time you change any part of it.
I think that it’s definitely a requirement of any decent programming language to have a way to easily take the source code of that programming language and reify it into a syntax tree, for example. That’s very useful to have in a standard library. In Lisp it’s just read, Python has more complex syntax and requires more machinery which is in a standard library module, other languages have similar things.
One point might be: maybe you don’t need a dependency graph if you can just make your code simpler; maybe you don’t need to jump around files much if your code is properly modularised (and you have a big enough screen and narrow enough maximum line length to have multiple files open at once); maybe sorting your definitions is wrong, and what you want is a sortable list of declarations from which you can jump to the definitions.
Not to mention that version control is important and version controlling things that aren’t text is a problem with conventional version control tools. Might not be an issue if you have your own VCS, but then you enter the land of expecting new users of your language to not only not use their standard editor, but also to not use their standard VCS, not use their standard pastebin, etc. How do you pastebin a snippet of a visual language so someone on an IRC channel can see it and give you help? How do you ask questions on StackOverflow about a visual language?
It’s not even an issue of them being unusual and unsupported. By their very nature, not using text means that these languages aren’t compatible with generic tools for working with text. And never will be. That’s the thing about text, rather than having many many many binary formats and few tools, you have one binary format and many many tools.
Hey Miles, thanks for elaborating. I think we could have more interesting discussions if you give me a bit more credit and skip the trivial objections. You’re doing the same thing you did last time with C++ compilers. Yes, I know I could write a script, it’s not the point. I’m talking about interactive tools for source code analysis and manipulation, not a one-off sort.
I don’t agree with your objections about parallel compilation and parsing. It seems to me that you’re just thinking about existing tools and arguing from the status quo.
Further down, you make a suggestion which I interpret as “better languages could mitigate these issues” which is fair, but again I have to disagree because better languages always lead to more complex software which again requires better tools, so that’s a temporary solution at best.
You also raise a few objections, and here I should clarify that what I have in mind is not some kind of visual flowchart editor. What I’m claiming is that the conflation of internal representation and visual representation for code is counterproductive, but I think that a display representation that mostly looks like text is fine (as long as it’s actually within a structured editor). What I’m interested in is being able to manipulate symbols and units of code as well as aspects of its structure rather than individual characters.
Consequently, for pastebin or StackOverflow, you could just paste some text projection of the code, no problem. When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there. For example, if there was a VCS that showed me diffs that take into account the semantics of the language (eg like this: https://www.semanticmerge.com), that would be pretty cool.
For the rest of your objections, I offer this analogy: imagine that we only had ASCII pictures, and none of this incompatible JPG/PSD/PNG nonsense with few complicated tools. Then we could use generic tools for working with text to manipulate these files, and we wouldn’t be constrained in any way whether we wanted to create beautiful paintings or complex diagrams. That’s the thing about text!
I think the practitioners and particularly academics in our field should have more sense of possibilities and less affection for things the way they are.
When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there.
Existing VCS could work reasonably well if the serialisation/“text projection” was deterministic and ‘stable’, i.e. minimising the amount of spurious changes like re-ordering of definitions, etc. As a first approximation I can imagine an s-expression language arranging the top-level expressions into lexicographic order, spreading them out so each sub-expression gets its own line, normalising all unquoted whitespace, etc. This would be like a very opinionated gofmt.
If users want to preserve some layout etc. then the editor can store that as metadata in the file. I agree that semantics-aware diffing would be great though ;)
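The “very opinionated gofmt” idea above can be sketched in a few dozen lines. This is a toy assuming the simplest possible s-expression grammar (atoms and parentheses, no strings or comments): parse, sort top-level forms lexicographically by their printed form, and re-emit with normalised whitespace, so the stored text is deterministic no matter how the editor laid it out.

```python
import re

def tokenize(text):
    # Atoms are any run of non-space, non-paren characters.
    return re.findall(r"\(|\)|[^\s()]+", text)

def parse_list(tokens):
    items = []
    while tokens:
        tok = tokens.pop(0)
        if tok == ")":
            return items
        items.append(parse_list(tokens) if tok == "(" else tok)
    raise SyntaxError("unbalanced (")

def parse(tokens):
    forms = []
    while tokens:
        tok = tokens.pop(0)
        if tok == ")":
            raise SyntaxError("unbalanced )")
        forms.append(parse_list(tokens) if tok == "(" else tok)
    return forms

def emit(form):
    if isinstance(form, list):
        return "(" + " ".join(emit(f) for f in form) + ")"
    return form

def format_file(text):
    # One top-level form per line, sorted, single-space separated.
    top = parse(tokenize(text))
    return "\n".join(sorted(emit(f) for f in top)) + "\n"

messy = "(define   (b x)\n    (* x 2))  (define (a y) (+ y 1))"
print(format_file(messy))
```

The key property for VCS friendliness is idempotence: formatting already-formatted text changes nothing, so diffs only show real edits.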
So you always end up separating the storage format from display representation in order to create better tools, which is exactly my point.
Yes, I agree with your points. Was just remarking that some of these improvements (e.g. VCS) are easier to prototype and experiment with than others (e.g. semantics-aware queries of custom file formats).
The way I see it is that there are tools for turning text into an AST and you can use them to build the fancy things you want. My point wasn’t ‘you can write that sort as a one-off’. You can edit code written in a text-based programming language with a really fancy editor that immediately parses it to an AST and works with it as an AST, and only turns it into text when written to disk. I have no problem with that. But really you’re still editing text when using something like paredit.
Something like vim but where the text objects are ‘identifier’, ‘ast node’, ‘expression’, ‘statement’, ‘logical line of code’, ‘block’, etc. rather than ‘text between word separators’, ‘text between spaces’, ‘line’, etc. would be a useful thing. In fact, you could probably do this in vim. I have an extension I use that lets you modify quotes around things taking into account escaped quotes within, etc. That’d probably work way better if it had that default structure for normal text and then could be customised to actually take into account the proper grammar of particular programming languages for which that is supported.
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language, where you can’t use the same tools with multiple languages. And then having to reimplement the same basic functionality for every single language in separate IDEs for each, where everything works slightly differently.
I do find it useful that I can do ci( and vim will delete everything inside the nearest set of parentheses, properly taking into account nesting. So if I have (foo (hello 1 2 3) bar) and my cursor is on the a, it’ll delete everything, even though the nearest ( and ) are beside hello and not foo. That kind of thing, more structured editing? I’m all for that.
Consequently, for pastebin or StackOverflow, you could just paste some text projection of the code, no problem. When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there. For example, if there was a VCS that showed me diffs that take into account the semantics of the language (eg like this: https://www.semanticmerge.com), that would be pretty cool.
Ultimately I think if you have a recognised standardised text projection of your code, you might as well just make that the standardised format for it, then your fancy editor or editor plugin can parse it into the structures it needs. This helps ensure you can edit code over SSH, and have a variety of editors compatible with it, rather than just the single language-designer-provided IDE.
One of the nice things about git is that it stores snapshots internally rather than diffs. So if you have a language-specific tool that can produce diffs that are better due to being informed by the grammar of the language (avoiding the problem of adding a function and the diff being ‘added a new closing brace to the previous function then writing a new function except for a closing brace’, for example), then you can do that! Change the diff algorithm.
For the rest of your objections, I offer this analogy: imagine that we only had ASCII pictures, and none of this incompatible JPG/PSD/PNG nonsense with few complicated tools. Then we could use generic tools for working with text to manipulate these files, and we wouldn’t be constrained in any way whether we wanted to create beautiful paintings or complex diagrams. That’s the thing about text!
Well I mean I do much prefer creating a graph by writing some code to emit DOT than by writing code to emit PNG. I did so just the other day in fact. http://rout.nz/nfa.svg. Thank god for graphviz, eh?
Note that there’s also for example farbfeld, and svg, for that matter: text-based formats for images. Just because it’s text underneath doesn’t mean it has to be rendered as ASCII art.
Cool, I’m glad we can agree that better tools would be good to have.
As far as the storage format, I don’t actually have a clear preference. What’s clearly needed is a separation of storage format and visual representation. If we had that, arguments about tabs vs spaces, indent size, let/in vs where, line length, private methods first or public methods first, vertical vs horizontal space (and on and on) could be nullified because everybody could arrange things however they like. Why can’t we have even such simple conveniences? And that’s just the low hanging fruit, there are far more interesting operations and ways of looking at source that could be implemented.
The other day there was a link to someone’s experiment (https://github.com/forest-lang/forest-compiler) where they use one of the text projections as the storage format. That might work, but it seems to me that the way parsing currently happens, there’s a lot of unnecessary work as whole files are constantly being reparsed because there is no structure to determine the relevant scope. It seems that controlling operations on the AST and knowing which branches are affected could be a lot more efficient. I’m sure there’s plenty of literature on this - I’ll have to look for it (and maybe I’m wrong about this).
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language, where you can’t use the same tools with multiple languages. And then having to reimplement the same basic functionality for every single language in separate IDEs for each, where everything works slightly differently.
I understand your concern, but this sounds exactly like the current state of affairs (other than really basic stuff like syntax highlighting maybe). There’s a separate language plugin (or plugins) for every combination of editor/IDE and language, and people keep rewriting all that stuff every time a new editor becomes popular, don’t they?
One of the nice things about git is that it stores snapshots internally rather than diffs.
Sure, we can glean a bit more information from a pair of snapshots, but still not much. It’s still impossible to track a combination of “rename + change definition”, or to treat changes in the order of definitions as a no-op, for example. Whereas if we were tracking changes in a more structured way (node renamed, sub-nodes modified etc.), it seems like we could say a lot more meaningful things about the evolution of the tree.
Thank god for graphviz, eh?
Perhaps the analogy was unclear. Being able to write a set of instructions to generate an image with a piece of software has nothing to do with having identical storage format and visual representation. If we approached images the same way we approach code, we would only have ASCII images as the output format, because that’s what is directly editable with text tools. Since you see the merits of PNG and SVG, you’re agreeing that there’s merit in separating internal/storage representation from the output representation.
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language
I might have missed something, but I didn’t see anyone proposing this.
In particular, my understanding of Luna is that the graphical and textual representations are actually isomorphic (i.e. one can be derived if given the other). This means we can think of the textual representation as being both a traditional text-based programming language and a “file format” for serialising the graphical programming language.
Likewise we can switch to a text view, use grep/sed/etc. as much as we like, then switch back to a graphical view if we want (assuming that the resulting text is syntactically valid).
Tools that improve navigation within textual source have existed for a long time. I’ve been using cscope to bounce around in C and Javascript source bases for as long as I can remember. The more static structure a language has, the easier it is to build these tools without ambiguity. The text source part isn’t really an issue – indeed it enables ad hoc tooling experiments to be built with existing text management tools; e.g., grep.
Those tools aren’t text, though. They’re other things that augment the experience beyond just using text, which becomes an incidental form of storage. Tools might also use ASTs, objects, data flows, constraints, and so on. They might use anything from direct representation to templates to synthesis.
I think the parent’s point was that text by itself is far more limited than that. Each thing I mentioned is available in some programming environment with an advantage over text-driven development.
I think it’s wrong to say that the text storage is incidental. Line-oriented text files are about the lowest common denominator way we have to store data like this.
For starters, it’s effectively human-readable – you can lift the hood up and look at what’s underneath, understanding the effect that each individual character has on the result. Any more complicated structure, as would be generally required to have a more machine-first structured approach to program storage, is not going to have that property; at least not to the same extent.
If this thread demonstrates anything, it’s that we all have (at times, starkly!) different preferences for software engineering tools. Falling back on a textual representation allows us to avoid the need to seek consensus on a standard set of tools – I can use the editor and code manipulation tools that make sense to me, and you can stick to what makes sense to you. I think a lot of the UNIX philosophy posturing ends up being revisionist bunk, but the idea that text is a pretty universal interface for data interchange isn’t completely without merit.
The under-the-hood representation is binary-structured electricity that gets turned into human-readable text by parsing and display code. If already parsing it and writing display code, then one might just as well use a different encoding or structure. Text certainly has advantages as one encoding of many to have available. Plugins or input modules can take care of any conversions.
Text does often have tooling advantages in systems like UNIX built with it in mind, though.
I think it’s a reductionist argument for the good-enough, hard earned status quo. I think it can be valid, but only within a very narrow perspective - operational and short term.
To my mind, your position is equivalent to this: we should only have ASCII images, and we don’t need any of that PNG/JPG/PSD stuff with complicated specialised tools. Instead, we can use generic text tools to make CAD drawings, diagrams, paintings - whatever. All of those things can be perfectly represented in ASCII, and the text tools will not limit us in any way!
I want to search my code like a database, e.g. “show my where this identifier is used as a parameter to a function” - the tooling for text doesn’t support this. Structured tooling would be super useful.
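For Python at least, that exact query is expressible against the AST rather than the text. A hedged sketch (the `source` string and `calls_passing` helper are hypothetical, for illustration): find every call site where a given identifier is passed as an argument, which a plain text search cannot distinguish from definitions, assignments, or attribute accesses.

```python
import ast

source = '''
def f(conn): pass
conn = connect()
f(conn)
g(1, conn)
conn.close()
'''

def calls_passing(tree, name):
    """Return (line, callee) for each call passing `name` as an argument."""
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id == name:
                    callee = getattr(node.func, "id", "<expr>")
                    hits.append((node.lineno, callee))
    return hits

tree = ast.parse(source)
print(calls_passing(tree, "conn"))  # → [(4, 'f'), (5, 'g')]
```

Note that the parameter in `def f(conn)`, the assignment, and the `conn.close()` attribute access are all correctly excluded - only genuine argument positions match.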
Many things can be “queried” with grep and regular expressions, which is also great for finding “similar occurrences” that need to be checked but are only related by some operators and function calls following one another. On the other hand, I’d definitely argue that IDEs at least keep a tiny representation of the current source file for navigation, so that you can click some token and find its uses, definitions, implementations… But it only works if I disable low power mode. And with my 8 GB RAM MacBook I sometimes have to kill the IDE before running the program to make sure I can still use it at the same time.
Maybe if it wasn’t parsing and re-parsing massive amounts of text all the time, it would be more energy efficient…
Exactly. And it could extend beyond search; code could be manipulated and organised in more powerful ways. We still have rudimentary support for refactoring in most IDEs, and so we keep going through files and manually making structurally similar changes one by one, for no reason other than the inadequate underlying representation used for code.
I could be wrong and maybe this is impossible to implement in any kind of general way beyond the few specific examples I’ve thought of, but I find it strange that most people dismiss the very possibility of anything better despite the fact that it’s obviously difficult and inconvenient to work with textual source code.
The version of cscope that I use does things of that nature. The list of queries it supports:
Find this C symbol:
Find this global definition:
Find functions called by this function:
Find functions calling this function:
Find this text string:
Change this text string:
Find this egrep pattern:
Find this file:
Find files #including this file:
Find assignments to this symbol:
I use Find functions calling this function a lot, as well as Find assignments to this symbol. You could conceivably add more query types, and I’m certain there are other tools that are less to my admittedly terminal-heavy aesthetic preference that offer more flexible code search and analysis.
The base structure of the software being textual doesn’t get in the way of this at all.
Software isn’t textual. We read the text into structures. Our tools should make these structures easier to work with. We need data structures other than text as the common format.
Can I take cscope’s output and filter down to “arguments where the identifiers are of even length”?
Compilers and interpreters use structured representations because those representations are more practical for the purposes of compiling and interpreting. It’s not a given that structured data is the most practical form for authoring. It might be. But what the compiler/interpreter does is not evidence of that.
I would also be interested in your thoughts about Lisp, where the code is already structured data. This is an interesting property of Lisp but it does not seem to make it clearly easier to use.
but it does not seem to make it clearly easier to use.
Sure it does: it makes macros easier to write than in a language not designed like that. Once macros are easy, you can extend the language to more easily express yourself. This is seen in the DSLs of Common Lisp, Rebol, and Racket. I also always mention sklogic’s tool, since he builds DSLs for about everything, with a Lisp underneath for when they don’t suffice.
Sure, but all of these tools (including IDEs) are complicated to implement, error-prone, and extremely underpowered. cscope is just a glorified grep unless I’m missing something (I haven’t used it, just looked it up). The fact that you bring it up as a good example attests to the fact that we’re still stuck somewhere near mid-twentieth century in terms of programming UI.
I bring it up as a good example because I use it all the time to great effect while working on large scale software projects. It is relatively simple to understand what it does, it’s been relatively reliable in my experience, and it helps a lot in understanding the code I work on. I’ve also tried exuberant ctags on occasion, and it’s been pretty neat as well.
I don’t feel stuck at all. In fact, I feel wary of people attempting to invalidate positive real world experiences with assertions that merely because something has been around for a long time that it’s not still a useful way to work.
Have you noticed that the Luna language has a dual representation, where each visual program has an immediate and easily editable text representation, and the same is true in the other direction as well? This is intended to keep the benefits of the text interface while adding the benefits of a visual representation! That’s actually the main idea behind Luna.
What about the power users who use things like Excel or Salesforce? These are GUIs perfectly tailored to specific tasks. A DJ working with a sound board certainly wouldn’t want a textual interface.
Textual interfaces are bad, but they are generic and easy to write. It’s a lot harder to make an intuitive GUI, let alone one that works on something as complex as a programming language. Idk if Luna is worthwhile, but text isn’t the best user interface possible imho
DJs use physical interfaces, and the GUIs emulation of those physical interfaces are basically all terrible.
I’ve never heard of anyone liking Salesforce, I think that must be Stockholm Syndrome. Excel’s primary problem in my opinion is that it has essentially no way of seeing how data is flowing around. If something had the kind of ‘reactive’ nature of Excel while being text-based I’d much prefer that.
Textual interfaces are excellent. While there are tasks that benefit from a GUI - image editing for example - in most cases GUIs are a nicer way of representing things to a new user but are bad for power users. I wouldn’t expect first year computer science students to use vim, as it’s not beginner-friendly, but it’s by far the best text editor out there in the hands of an experienced user.
I wouldn’t expect first year computer science students to use vim, as it’s not beginner-friendly, but it’s by far the best text editor out there in the hands of an experienced user.
I’d call myself an “experienced user” of vim. I’ve written extensions, given workshops, and even written a language autoindent plugin, which anyone who’s done it knows is like shoving nails through your eyeballs. About once a year I get fed up with the limitations of text-only programming and try to find a good visual IDE, only to switch back when I can’t find any. Just because vim is the best we currently have doesn’t mean it’s actually any good. We deserve better.
(For the record, vim isn’t beginner-unfriendly because it’s text only. It’s beginner-unfriendly because its UI is terrible and inconsistent and the features are all undiscoverable.)
Most people don’t bother to learn vimscript properly, treating it much like people treated Javascript for years: a bunch of disparate bits they’ve picked up over time, with no unifying core. But once you actually learn it, it becomes much easier to use and more consistent. The difference between expressions and commands becomes sensible instead of seeming like an inconsistency.
I never get fed up with the limitations of text-only programming, because I don’t think they exist. Could you elaborate on what you are saying those limitations are?
And I totally, 100% disagree with any claim that vim’s UI is bad or inconsistent. On the contrary, it’s extremely consistent. It’s not a bunch of little individual inconsistent commands, it’s motions and text objects and such. It has extensive and well-written help. Compared to any other IDE I’ve used (a lot), it’s way more consistent. Every time I use a Mac program I’m surprised at how ad-hoc the random combinations of letters for shortcuts are. And everything requires modifier keys, which are written with ridiculous indecipherable symbols instead of ‘Ctrl’ ‘Shift’ ‘Alt’ etc. Given that Mac is generally considered to be very easy to use, I don’t think typical general consensus on ease of use is very instructive.
Bret Victor explains the persistence of textual languages as resistance to change, drawing an equivalence between users of textual languages now and assembly programmers who scoffed at the first higher-level programming languages. But this thread is evidence that at least some people are interested in using a language that isn’t text-based. Not everyone is fairly characterized by Bret Victor’s generalization. So then why hasn’t that alternative emerged? There are plenty of niche languages that address a minority preference with reasonable rates of adoption. With the exception of Hypercard, I can’t think of a viable graphical programming language. Even Realtalk, the language that runs Dynamicland (Bret Victor’s current focus), is text-based, being a superset of Lua. I keep hearing about how text-based languages are old-fashioned and should die out, but I never hear anything insightful about why this hasn’t happened naturally. I’m not denying that there are opportunities for big innovation but “make a visual programming language” seems like an increasingly naive or simplistic approach.
I think it has to do with the malleability of text. There’s a basic set of symbols and one way to arrange them (sequentially.) Almost any problem can be encoded that way. Emacs’ excellent org-mode is a testament to the virtue of malleability.
Excel also has that characteristic. Many, many kinds of problems can be encoded in rectangles of text with formulas. (Though I might note that having more ways to arrange things allows new kinds of errors, as evidenced by the growing cluster of Excel features for tracing dependencies and finding errors.)
Graphical languages are way less malleable. The language creator decides what elements, relations, and constraints are allowed. None of them let me redefine what a rectangle represents, or what relations are allowed between them. I think that’s why these languages can be great at solving one class of problem, but a different class of problem seems to require a totally different graphical language.
My suspicion is that it’s because graphical languages merge functionality and aesthetics, meaning you have to think very, VERY hard about UI/UX and graphic design. You need to be doing that from the start to have a hope of it working out.
When I worked on projects using the old phase gate and waterfall approach, it was common to skip creating unit tests. In fact I don’t recall ever being on a project where we did more than a token stab at it. Of course the plan said we would, but the plan said a lot of things and by the time we were a couple months into the work, reality and “on paper” were very different things. This divergence was allowed to stay hidden till almost the end of the projects. Then it would all fall apart. Non-compliance to the plan was not an isolated thing, but was the norm for those projects - even ones that were safety-critical.
Well that’s horrifying.
I’m hosting my first TLA+ workshop this week. Beyond excited and beyond nervous.
I write about software history, formal methods, and Weird Niche Topics that catch my interest.
@hwayne Calling the Pony developers smug for that is a bit misleading with the missing context. Pony has checked exceptions: calls that can fail need to be annotated with ‘?’. If division could fail, a whole bunch of functions would need annotations, defeating the value of ‘?’ as a documentation tool.
This means 1/0 == 0 is practical in Pony to avoid typing ‘?’ everywhere.
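For readers unfamiliar with the convention, here’s a minimal sketch of what “total division” looks like, written in Python rather than Pony:

```python
# A sketch of Pony's convention: division is made "total" by defining
# x / 0 to be 0, so callers never see an exception or need a '?' annotation.
def total_div(x: int, y: int) -> int:
    """Integer division that returns 0 when the divisor is 0."""
    return 0 if y == 0 else x // y
```

The trade-off discussed below is that the caller who cares about the zero case has to check for it explicitly.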
It’s not 100% clear but I’m referring to the original tweet being smug: the person who made the “we men of industry” tweet. I cut off their username to anonymize them.
Oh ok, btw, your twitter account is a lot of fun. thanks for the laughs - everyone reading this follow him :)
This is ill-advised.
You cannot define 1/0 and still have a field. There’s no value that works. Even when you do things like the extended real numbers where x/0 = infinity, you’re really just doing a kind of shorthand and you acknowledge that the result isn’t a field.
You can of course define any other algebraic structure you want and then say that operating on the expression 1/0 is all invalid because you didn’t define anything else and no other theorem applies, but this is not very helpful. You can make bad definitions that don’t generalise, sure, definitions that aren’t fields. But to paraphrase a famous mathematician, the difficulty lies not in the proofs but in knowing what to prove. The statement “1/0 = 0 and nothing else can be deduced from this” isn’t very interesting.
I want to make an attempt to clarify the discussion here because I think there is some substance I found interesting. I don’t have a strong opinion about this.
The article actually defines an algebraic structure with three operators: (S, +, *, /) with some axioms. It happens that these axioms make it so (S, +, *) is a field (just like how the definition of a field makes (S, +) a group).
The article is right in saying that these axioms do not lead to a contradiction. And there are many non-trivial such structures.
However, the (potential) issue is that we don’t know nearly as much about these structures as we do about fields, because any theorem about fields applies only to (S, +, *) instead of (S, +, *, /). So all the work would need to be redone. It could be said that the purpose of choosing a field in the first place is to benefit from existing knowledge and familiar expectations (which are no longer guaranteed).
I guess formally adding an operator means you should call it something else? (Just like how we don’t call a field a group, even though it could be seen as a group with an added * operator.)
This has no bearing on the 1/0 = 0 question however, which still works from what’s discussed in the article.
As I understand it, you’ve only defined the expression 1/0 but you are saying that /0 isn’t shorthand for the multiplicative inverse of 0 as is normally the case for /x being x^-1, by definition. Instead, /0 is some other kind of magical non-invertible operation that maps 1 into 0 (and who knows what /0 maps everything else into). Kind of curious what it has to do with 0 at all.
So I guess you can do this, but then you haven’t defined division by zero at all, you’ve just added some notation that looks like division by zero but instead just defined some arbitrary function for some elements of your field.
If you do mean that /0 is division by zero, then 1/0 has to be, by definition, shorthand for 1*0^-1 and the arguments that you’ve already dismissed apply.
The definition of a field makes no statements about the multiplicative inverse of the additive identity (https://en.wikipedia.org/wiki/Field_(mathematics)#Classic_definition). Defining it in a sound way does not invalidate any of the axioms required by the field, and, in fact, does define division by zero (tautologically). You end up with a field and some other stuff, which is still a field, in the same way that adding a multiplication operator on a group with the appropriate properties leaves you with a group and some other stuff.
The definition of the notation “a / b => a * b^-1” assumes that b is not zero. Thus, you may define the case when b is 0 to mean whatever you want.
That people want to hold on to some algebraic “identities” like multiplying by the denominator cancels it doesn’t change this. For that to work, you need the assumption that the denominator is not zero to begin with.
In what way, whatever it is you defined /0 to be, considered to be a “division”? What is division? Kindly define it.
And when b is zero, what is division? That’s the whole point of this argument. What properties does an operation need to have in order to be worthy of being called a division?
Indeed, it is the whole point. For a field, it doesn’t have to say anything about when you divide by zero. It is undefined. That doesn’t mean that you can’t work with and define a different, but still consistent, structure where it is defined. In fact, you can add the definition such that you still have the same field, and more.
edit: Note that this doesn’t mean that you’re defining a multiplicative inverse of zero. That can’t exist and still be a field.
In what way is it consistent? Consistent with what? As I understand it, you’re still saying that the expression 1/0 is an exception to every other theorem. What use is that? You still have to write a bunch of preconditions, even in Coq, saying how the denominator isn’t zero. What’s the point of such a definition?
It seems to me that all of this nonsense is about not wanting to get an exception when you encounter division by zero, but you’re just delaying the problem by having to get an exception whenever you try to reason with the expression 1/0.
I mean that the resulting structure is consistent with the field axioms. The conditions on dividing by zero never go away, correct. And yes, this is all about avoiding exceptions in the stack unwinding, programming language sense. The article is a response to the statements that defining division by zero in this way causes the structure to not be a field, or that it makes no mathematical sense. I am also just trying to respond to your statements that you can’t define it and maintain a field.
It really doesn’t make mathematical sense. You’re just giving the /0 expression some arbitrary value so that your computer doesn’t raise an exception, but what you’re defining there isn’t division except notationally. It doesn’t behave like a division at all. Make your computer do whatever you want, but it’s not division.
Mathematical sense depends on the set of axioms you choose. If a set of axioms is consistent, then it makes mathematical sense. You can disagree with the choices as much as you would like, but that has no bearing on the meaning. Do you have a proof that the resulting system is inconsistent, or even weaker, not a field?
I don’t even know what the resulting system is. Is it, shall we say, the field axioms? In short, a set on which two abelian operations are defined, with two distinct identities for each abelian operation, such that one operation distributes over the other? And you define an additional operation on the distributing operation that to each element maps its inverse, except for the identity which instead is mapped to the identity of the distributed-over operation?
It’s a field where the definition of division is augmented to include a definition when the divisor is zero. It adds no new elements, and all of the same theorems apply.
You are correct. The field axioms are all still true, even if we extend / to be defined on 0.
The reason for this is that the axioms never “look at” any of the values x/0. They never speak of them. So they all hold regardless of what x/0 is.
That said, even though you can define x/0 without violating axioms it doesn’t mean you should. In fact it seems like a very bad idea to me.
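To make this concrete, here’s a quick exhaustive check (in Python, on the finite field GF(5)) that extending division with x/0 = 0 doesn’t falsify any field axiom; as noted, the axioms simply never inspect x/0:

```python
# Exhaustive check on GF(5) that defining x / 0 = 0 leaves every field
# axiom intact: the axioms never mention x / 0, so nothing can break.
P = 5

def add(a, b):
    return (a + b) % P

def mul(a, b):
    return (a * b) % P

def div(a, b):
    # For b != 0, b^-1 = b^(P-2) mod P by Fermat's little theorem.
    return 0 if b == 0 else mul(a, pow(b, P - 2, P))

for a in range(P):
    for b in range(P):
        assert add(a, b) == add(b, a)        # + is commutative
        assert mul(a, b) == mul(b, a)        # * is commutative
        if b != 0:
            assert mul(div(a, b), b) == a    # / inverts * when b != 0
        for c in range(P):
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity
```

The check passes with div(x, 0) defined as 0, or as anything else, because no axiom ever evaluates that case.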
That doesn’t make it not a field; you don’t have to have a division operator at all to be a field, let alone a division operator that is defined to be multiplication by the multiplicative inverse.
zeebo gave the same answer I would give: a / b is a multiplied by the multiplicative inverse of b when b is not zero. This article is all about how a / 0 is not defined and so, from an engineering perspective, you can define it to be whatever you want without losing the property that your number representation forms a field. You claimed that defining a / 0 = 0 means that your numbers aren’t a field, and all I’m saying is that the definition of the division operator is 100% completely orthogonal to whether or not your numbers form a field, because the definition of a field has nothing to say about division.
What is an engineering perspective?
Also, this whole “a field definition doesn’t talk about division” is a bit of a misunderstanding of mathematical idioms. The field definition does talk about division, since “division” is just shorthand for “multiplicative inverse”. The reason the definition is written the way it is (excluding 0 from having a multiplicative inverse) is that giving zero a multiplicative inverse results in contradictions. When you say “ha! I won’t let that stop me! I’m going to define it anyway!” well, okay, but then either (1) you’re not defining a multiplicative inverse, i.e. you’re not defining division, or (2) you are defining a multiplicative inverse and you’re creating a contradiction.
(I had a whole comment here, but zeebo is expressing themselves better than I am, and there’s no point in litigating this twice, especially when I feel like I’m just quoting TFA)
Bad idea, it should error or give NaN.
1/0 = 0 is mathematically sound
It’s not mathematically sound.
a/b = c should be equivalent to a = c*b
this fails with 1/0 = 0 because 1 is not equal to 0*0.
Edit: I was wrong, it is mathematically sound. You can define x/0 = f(x) any function of x at all. All the field axioms still hold because they all have preconditions that ensure you never look at the result of division by zero.
There is a subtlety because some people say (X) and others say (Y)
(X) a/b = c should be equivalent to a = c*b when the LHS is well defined
(Y) a/b = c should be equivalent to a = c*b when b is nonzero
If you have definition (X) in mind, it becomes unsound; if you are more formal and use definition (Y), then it stays sound.
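A quick property check over exact rationals illustrates the (X)/(Y) distinction; the `div` helper here is a hypothetical stand-in for the total division under discussion:

```python
from fractions import Fraction as F

def div(a, b):
    # Total division: a / 0 is defined (arbitrarily) as 0.
    return F(0) if b == 0 else a / b

# Check (Y): a/b = c is equivalent to a = c*b whenever b != 0.
vals = [F(n, d) for n in range(-3, 4) for d in (1, 2, 3)]
for a in vals:
    for b in vals:
        c = div(a, b)
        if b != 0:
            assert a == c * b
# With b == 0 the identity can fail -- e.g. div(F(1), F(0)) == 0 but
# 1 != 0 * 0 -- which is exactly why (X) reads as unsound and (Y) stays sound.
```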
It seems like a very bad idea to make division well defined but the expected algebra rules not apply to it. This is the whole reason we leave it undefined or make it an error. There isn’t any value you can give it that makes algebra work with it.
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.
I really appreciate your follow-up about you being wrong. It is rare to see, and I commend you for it. Thank you.
This is explicitly addressed in the post. Do you have any objections to the definition given in the post?
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values
That was my initial reaction too. But I don’t think Pony’s intended use case is numerical analysis; it’s for highly parallel low-latency systems, where there are other (bigger?) concerns to address. They wanted to have no runtime exceptions, so this is part of that design tradeoff. Anyway, nothing prevents the programmer from checking for zero denominators and handling them as needed. If you squint a little, it’s perhaps not that different from the various conventions on truthy/falsey values that exist in most languages, and we’ve managed to accommodate to those.
Those truthy/falsey values are often a source of errors.
I may be biased in my dislike of this “feature”, because I cannot recall when 1/0 = 0 would be useful in my work, but have no difficulty whatsoever thinking of cases where truthy/falsey caused problems.
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.
I wonder if someone making a linear math library for Pony already faced this. There are many operations that might divide by zero, and you will want to let the user know if they divided by zero.
It’s easy for a Pony user to create their own integer division operation that will be partial. Additionally, a “partial division for integers” operator has been in the works for a while and will land soon. It’s part of a set of operators that will also error if you have integer overflow or underflow. Those will be +?, /?, *?, -?.
https://playground.ponylang.org/?gist=834f46a58244e981473c0677643c52ff
That section still needs work. For example, I’m still not sure whether Andrew Wright’s Simple Imperative Polymorphism was the impetus for implementing mutable references in MLs, or whether earlier MLs dealt with their type safety in some different way. It also needs references to the papers in question. Any contributions to it are greatly appreciated!
Could you elaborate on why the author should use a broader term? Refinement types are an even broader category than dependent types, and do not necessarily include types which depend on a value. In this case the author is specifically referring to types which depend on a value.
In his example he’s not encoding the length of the arrays into the type, just adding a type contract that the length of the output of the function matches the lengths of the inputs. This is more in line with how refinement type systems work than dependent type systems.
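As a rough illustration of the distinction, the contract in question can even be checked at runtime with no type system at all. This Python sketch (with a hypothetical `pairwise_sum`) expresses the refinement as assertions rather than as part of the types:

```python
# A runtime sketch of the contract a refinement type would express
# statically: the output list is as long as each input list.
def same_length_contract(f):
    def wrapped(xs, ys):
        assert len(xs) == len(ys), "inputs must have equal length"
        out = f(xs, ys)
        assert len(out) == len(xs), "output must match input length"
        return out
    return wrapped

@same_length_contract
def pairwise_sum(xs, ys):
    return [a + b for a, b in zip(xs, ys)]
```

A refinement type system would discharge these checks at compile time; a dependent type system would instead carry the length inside the type itself.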
I just finished writing Practical TLA+ so it’s time for a break hahaha no I’ve got a workshop to write and a tutorial to clean up
Outside of work I want to start revving up candymaking again.
Nice ~! I came across TLA+ recently and realized I should be using it at work. Got started with TLA+ using this intro guide - https://learntla.com/introduction/ (My schedule is a chapter per week).
There’s a quote I like that I can’t remember from where:
Thirty years ago “it reduces to 3SAT” meant the problem was impossible. Now it means the problem is trivial.
I wrote something vaguely like that a few years ago, though I’m sure I wasn’t the first to observe it:
SAT, the very first problem to be proven NP-complete, is now frequently used in AI as an almost canonical example of a fast problem
We have faster hardware and better algorithms now, yes. But the real reason is that early theoretical results, which emphasized worst-case performance, had scared the entire field off even trying. These results based on complexity classes are true, but misleading: as it turns out, the “average” SAT instance for many real-world problems probably is solvable. Only when this was recognized could we make progress on efficient SAT algorithms. Beware sound theories misapplied!
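To make “SAT is now tractable in practice” concrete, here’s a toy DPLL solver in Python. It’s a naive sketch (real solvers add clause learning, watched literals, and branching heuristics), with clauses written as lists of DIMACS-style signed integers:

```python
# Minimal DPLL: unit propagation plus backtracking search.
# A clause is a list of nonzero ints; -3 means "not x3".
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                       # unit propagation to fixpoint
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                v = assignment.get(abs(lit))
                if v is None:
                    lits.append(lit)
                elif (lit > 0) == v:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return None              # empty clause: conflict
            if len(lits) == 1:           # unit clause: forced assignment
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment                # all clauses satisfied
    var = abs(clauses[0][0])             # branch on an unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

For example, `dpll([[1, 2], [-1, 2], [-2, 3]])` returns a satisfying assignment, while `dpll([[1], [-1]])` returns None.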
I saw SAT solvers as academically interesting but didn’t think that they have many practical uses outside of other academic applications. … I have to say that modern SAT solvers are fast, neat and criminally underused by the industry.
Echoing a good comment on reddit: The author didn’t list any practical applications!!! How can you then say they are criminally underused?
The only one I know of is writing a versioned dependency solver for a package manager (in the style of Debian’s apt). However, very few people need to write such code.
What are some other practical applications? I think they are all quite specialized and don’t come up in day-to-day programming. Happy to be proven wrong.
EDIT: I googled and I found some interesting use cases, but they’re indeed specialized. Not something I’ve done or seen my coworkers do:
https://www.quora.com/What-are-the-industrial-applications-of-all-SAT-solvers
http://www.carstensinz.de/talks/RISC-2005.pdf
I can think of a certain scheduling algorithm that might have used a SAT solver, but without details I can’t be sure.
They see some use in formal methods for model checking. The Alloy Analyzer converts Alloy specs to SAT problems, which is one of the reasons it’s so fast.
There’s also this talk on analyzing floor plans.
A previous submission on SAT/SMT might help you answer that.
I’ve used Z3 to verify that a certain optimized bitvector operation is equivalent to the obvious implementation of the intended calculation.
Just typed up the two variants as functions in the SMT language with the bitvector primitives and asked Z3 for the satisfiability of f(x) != g(x) and rejoiced when it said “unsatisfiable.”
I’ll just post it here. :)
(define-sort Word () (_ BitVec 64))
(define-fun zero () Word (_ bv0 64))

;; Signed addition can wrap if the signs of x and y are the same.
;; If both are positive and x + y < x, then overflow happened.
;; If both are negative and x + y > x, then underflow happened.
(define-fun add-overflow-basic ((x Word) (y Word)) Bool
  (or (and (bvslt x zero)
           (bvslt y zero)
           (bvsgt (bvadd x y) x))
      (and (bvsgt x zero)
           (bvsgt y zero)
           (bvslt (bvadd x y) x))))

;; Here is a clever way to calculate the same truth value,
;; from _Hacker's Delight_, section 2.13.
(define-fun add-overflow-clever ((x Word) (y Word)) Bool
  (bvslt (bvand (bvxor (bvadd x y) x)
                (bvxor (bvadd x y) y))
         zero))

(set-option :pp.bv-literals false)

(declare-const x Word)
(declare-const y Word)

(assert (not (= (add-overflow-basic x y)
                (add-overflow-clever x y))))
(check-sat)
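If you don’t have Z3 handy, the same equivalence can be sanity-checked exhaustively in plain Python at a smaller width; this sketch uses 8 bits so brute force over all 65,536 pairs is instant:

```python
# Exhaustive 8-bit check of the two signed-overflow predicates that the
# SMT query above compares at 64 bits.
BITS = 8
MASK = (1 << BITS) - 1

def to_signed(v):
    """Interpret an unsigned BITS-bit value as two's complement."""
    return v - (1 << BITS) if v & (1 << (BITS - 1)) else v

def overflow_basic(x, y):
    sx, sy = to_signed(x), to_signed(y)
    s = to_signed((x + y) & MASK)        # wrapped signed sum
    return (sx < 0 and sy < 0 and s > sx) or (sx > 0 and sy > 0 and s < sx)

def overflow_clever(x, y):
    # Hacker's Delight, section 2-13: sign bit of ((x+y)^x) & ((x+y)^y).
    s = (x + y) & MASK
    return to_signed((s ^ x) & (s ^ y)) < 0

assert all(overflow_basic(x, y) == overflow_clever(x, y)
           for x in range(256) for y in range(256))
```

Of course this only covers one width, which is exactly why the SMT version at 64 bits is the more convincing check.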
Here’s an example of SMT solvers used for stuff like that. I added some more stuff in comments. You might also like some examples of Why3 code, which is translated for use with multiple solvers. Why3 is the main way people in the verification community use solvers, as far as I’m aware. WhyML is a nice intermediate language.
I found this library (and its docs) interesting. They go over some practical examples.
This looks like it’s about linear programming, not SAT solving. They’re related for sure but one reason it would be nice to see some specific examples is to understand where each one is applicable!
I have personally used them when writing code to plan purchases from suppliers for industrial processes and to deal with shipping finished products out using the “best” carrier.
Great article!
I’m a bit late to the party because I wanted to try this one myself by just eyeballing and refactoring.
Java concurrency

The first thing that confused me was the concurrency model. The methods are supposed to be atomic because of synchronized, but wait obviously makes them non-atomic (the operation is broken into the part before the wait call and the part after). Then what does notify do? Does it pause the thread that called notify? Or is the woken-up thread paused but next to execute?
Looking at the first few Google results didn’t get me a clear answer to this.
I assumed the former, and this could make a lot of take threads wake each other up, block on notify, and then all continue, pushing occupied into the negatives and returning invalid values.
I think your spec assumes the latter.
Concurrency and testing

if random.randint(0, 10000) == 1234:
    cause_error
would also be hard to test, needing many runs and still having a chance of missing the error. This feels like a missing feature of the testing tool. For any non-deterministic part, the test should be allowed to pick the “random” values. In this case, the test should be allowed to pick which thread notify wakes up, and which thread runs next if there is a choice.
This is not saying anything about the difficulty in implementing such a testing tool but it also seems necessary if there’s going to be non-deterministic parts.
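One common way to get part of that control without a special tool is dependency injection of the randomness. A rough Python sketch (the names here are made up for illustration):

```python
import random

# The code under test takes its randomness as a parameter (defaulting to
# the real `random` module), so a test can force the rare path.
def maybe_fail(rng=random):
    if rng.randint(0, 10000) == 1234:
        raise RuntimeError("rare error path")
    return "ok"

class Forced:
    """A stand-in RNG that replays a scripted sequence of values."""
    def __init__(self, values):
        self._values = iter(values)
    def randint(self, lo, hi):
        return next(self._values)

# A test can now hit the 1-in-10001 branch deterministically:
# maybe_fail(Forced([1234])) raises, maybe_fail(Forced([0])) returns "ok".
```

Scheduling non-determinism (which thread notify wakes) is much harder to inject this way, which is where model checkers and deterministic-scheduler test tools come in.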
Then what does notify do? Does it pause the thread that called notify? Or is the woken up thread paused but next to execute?
I assumed the former and this could make a lot of take thread wake each other up, block on notify and then all continue pushing occupied into the negatives and return invalid values.
I think your spec assumes the latter.
To my understanding it’s neither: notify wakes up a thread but does not execute it, nor does it guarantee it’ll be executed next. It just adds the woken thread into the scheduled pool.
This isn’t just one paper, of course. Science code is much poorer quality and much less tested than engineering code. We expect it to have many, many more bugs. Any paper that relies on science code should be treated as suspect unless they share the code.