A backport like this (perhaps all backports) should come with a warning that says, “If possible, switch to version_N+1 before using this”. Allow me to explain:
This feature makes co-routines palatable in Python2. About 1.5 years ago, like @joelgrus I was stung hard by this missing feature in Python2. I guess I learned the hard way, and every project I have started since then is in Python3 (There is NO reason not to).
My point being: if this library had been available back then, I would have used it, and I might not have switched to Python 3 yet. That would have been a sad thing, as I would have been missing out on the other goodies.
Vague but exciting ..
Perhaps through annotation?
Sigh. I visited the link for the first example, an Apple backdoor.
https://truesecdev.wordpress.com/2015/04/09/hidden-backdoor-api-to-root-privileges-in-apple-os-x/
I mean, kind of interesting, but if that’s what we’re calling malware these days, we may as well just pack it up and go home.
Actually, that bug has quite a bit in common with this file permission “backdoor API”: http://www.openwall.com/lists/oss-security/2017/01/24/4
So on the bright side, conclusive proof that systemd is malware?
This is only the first link, but there are others, like http://www.telegraph.co.uk/technology/3358134/Apples-Jobs-confirms-iPhone-kill-switch.html
For me, this only means that only Android ROMs like LineageOS are usable.
There is also something to be said about these ‘backdoor APIs’/bugs being inserted behind closed doors vs. in the daylight.
I’m sure you are aware of the distinction, but not sure why you seem to condone it.
As a long-time open source person, I’m increasingly frustrated by the partisan refrain that software which isn’t open source cannot be trusted to be non-malicious. This isn’t really true: there’s an entire domain of expertise dedicated to reverse engineering what no-source-available code does, and the folks who do this for a living are really good.
There are plenty of good reasons to prefer open source, even related to trustworthiness; source availability facilitates easier reviews from people with different expertises (e.g. a cryptographer who is not an expert reverser). However, the idea that closed source is just a total black box is a political argument, not a technical one.
there’s an entire domain of expertise dedicated to reverse engineering what no-source-available code does
OK, but the cost of determining whether something is malicious is incredibly high for closed-source software compared to open source. Prohibitively so, for the vast majority of users.
Any technique you can use to audit closed source software, you can use to audit open source software, right? But you also have the source code, the commit history including who committed what, diffs between versions, code comments, etc.
Plus there’s the social factor. If somebody at Microsoft adds a backdoor to Windows, outsiders might notice the unusual network traffic, but they have no chance to see the code. All the committers to Windows are under NDA and can be pressured to be quiet. Whereas adding a backdoor to Linux would mean sneaking it past a bunch of people whose only unifying motive is to produce a good OS, and keeping them all from noticing it indefinitely. It’s a much harder task.
So open source makes it much harder to add back doors and much easier to find them. It’s not perfectly safe, but it sounds a heck of a lot safer.
You are technically right in saying that there are domains of expertise dedicated to reverse engineering closed source, and these folks tend to be really good.
But making this the argument for trusting closed source is moot. There is plenty of closed source that is heavily used but never actively reversed or audited: tax software, search engines, you name it…
As Ken Thompson showed, this argument is also wrong in that even if you can see the source, the binary that’s actually running could have been diddled in some way. And it’s not even that hard for flaws and backdoors to sit in source without people noticing, as Heartbleed and the Underhanded C Contest and so on show.
Still, you can’t really reverse engineer something as huge as Windows.
EDIT: Even if you were able to reverse engineer it, you can’t really modify it unless you break the EULA. Even then, it’s a game of cat and mouse: you hack Windows to change its behaviour, and then Microsoft patches it, so you need to find another hack.
Nor can you really audit a huge open source project: the OpenSSL fiasco showed that “since it’s open, someone would have noticed” doesn’t work.
I suggest mentioning third-party evaluations instead of RE. People or organizations you trust get the source, vet it, and sign its hash. That’s how security evaluations in government have been done since the ’90s. It can be pretty cheap if the software isn’t humongous.
Interestingly, this is still necessary for FOSS, since it gets so little review. Like with closed source, most users just trust a third party saying it’s good.
This isn’t really true: there’s an entire domain of expertise dedicated to reverse engineering what no-source-available code does, and the folks who do this for a living are really good.
This is like saying terrorism is okay because there’s an entire domain of expertise dedicated to stopping people from blowing up a schoolbus with a martyr vest, and these experts (at least some of them) are quite good at it.
Open source and free software are superior to closed source proprietary software. All else being equal, there is no reason to use a closed software over an open one.
Just because you can mitigate the awfulness does not make something good.
This is like saying terrorism is okay because there’s an entire domain of expertise dedicated to stopping people from blowing up a schoolbus
That is ridiculous. The only point of terrorism is destruction to cause some reaction. Whereas, the point of proprietary software is to solve an actual or perceived problem for users in a way that works well enough. It usually does. Problems are usually recoverable. It almost never kills someone. Shady companies may optionally do a bunch of evil like lock-in on top of that. Don’t do business with shady companies & make sure you have an exit strategy if the supplier becomes one. Meanwhile, enjoy the software.
“Open source and free software are superior to closed source proprietary software.”
Like hell. You said all else being equal, but it rarely is. In the average case, it’s easily proven false in a lot of categories where, at best, open source has knockoffs of some proprietary app that suck in a lot of ways. In many other cases, there’s no open-source app available. Which is better should be decided on a case-by-case basis. As far as security goes, it was mostly the proprietary, high-assurance sector that steadily produced highly robust solutions, because they had the money to put the necessary QA and security work into it. That’s basically unheard of in open source unless paid pros or CompSci people are doing it, with the open-sourcing being incidental. GEMSOS, VAX VMM, KeyKOS, Kestrel Institute’s stuff, OKL4, seL4, Caernarvon, MULTOS, CertiKOS, Eiffel SCOOP, SPARK Ada, CompCert, Astrée Analyzer… all cathedral model, by people who knew what they were doing, being paid for it.
The closest thing in FOSS with communal development is OpenBSD, with a mix of root-cause fixes and probabilistic mitigations whose effectiveness is unknown, since top talent don’t focus on small-market-share projects unless paid to. That’s up against competition using some of their methods plus formal proof, static analysis, exhaustive testing, covert-channel analysis, ensuring object code maintains the source’s properties, SCM security, and third-party pentesting. Open-source security is a joke on the high end in comparison. Although NSA pentesting failed to break a few of the above, they certainly have many of those FOSS apps and FOSS-using services in the Snowden leaks with full “SIGINT-enabling.” They strongly encourage many of these FOSS apps to be used, while making it illegal for companies to sell me Type 1-certified crypto or TEMPEST-certified devices. Kind of weird if FOSS quality is so good. ;)
Enough myths. Neither side is better by default. What matters are benefits for user at what cost. Sometimes it’s proprietary, sometimes not. Look for FOSS by default for many, good reasons. It’s not always the best, though. Just ask any hardware developer if we high-assurance people seem too fringe. Ask the hardware people to tell you which FOSS software is good enough to produce the chip you wrote that comment on. Which is “superior to closed-source proprietary software.”
I remember reading and watching this a few years ago. No math either, but I thought it’s worth mentioning.
Nice! I have also come across:
I’m in the market for tools that understand both Verilog and a higher-level language. As in, be able to parse Verilog and provide an API to its AST, and at the same time generate code from a higher-level language down to Verilog. I have tinkered a little bit with iverilog’s API.
There is something mind-numbing about HN that makes it difficult to be a part of that community. You almost know exactly how everyone is going to react to any given post and you have a pretty good idea of what’s going to get at the top of HN every day. It’s still useful as something similar to TechMeme, but the community just isn’t fun to be a part of.
There is something mind-numbing about HN that makes it difficult to be a part of that community.
It’s the size.
Lobste.rs feels better because it’s still small, but if we grow, this feel will dwindle. There’s nothing wrong about it, it’s just how communities evolve.
[Comment removed by author]
This is a thing I think Dan is right about: people who are concerned about capitalism see HN as a bastion of capitalism, and people who are concerned about socialism see HN as a bastion of left-wingism. It’s a cognitive availability bias thing.
At least, if there’s a bias towards “capitalism”, it’s one shared by most Internet sites, very much including this one.
(I don’t disagree that this site is often easier to engage with!)
I wouldn’t say that Lobsters is biased towards capitalism, actually–we shy away from posting the marketing materials and press releases and obvious attempts at self-promotion that plague HN. Growth hackers caught posting here tend to get mocked and downvoted.
HN has a clear bent towards capitalism, because its whole job is to attract talented people who might be thinking about startup stuff and to encourage them. It’s Valley agitprop with just enough tech lingering to not scare off the young folks people need to do engineering.
I think that the tone here tends to generally favor the developer and Labor, whereas over on HN you get an even split between some Labor, Capital, and people who want to one day be Capital. It’s neither good nor bad, but it is certainly different.
Yeah, I disagree with pretty much all of this, especially the YC-essentialism. I think HN does a pretty good job of not being a vehicle to get people into YC startups, all things considered.
Given their job listings specifically for YC startups, the HN-picks-YC-entries experiment, and the large number of threads dedicated to YC-related activities (especially around the time people would normally hear back on their applications), I would be curious what other information you’re considering when drawing that conclusion.
You’ve been on there for a while–maybe your perceptions are based on your earlier experiences?
If anything I feel it’s changed in the other direction, though this is entirely a gut feeling. A much bigger part of HN when I first joined (about 8 years ago) felt like the startup-scene fan club, both the startup scene in general, and specifically the one oriented around Y Combinator and Paul Graham personally (back when he posted a lot). It was never exclusively that, or I wouldn’t have bothered joining, but it was kind of the underlying community norm. There are still a lot of startup fans there, but it now seems more like a general tech forum than it used to.
Most of the threads on HN are about tech, law, science, current events, etc. Very little of what you describe makes the front page. I think you’ve been gone a long time and it’s changed. It hasn’t been like that in the year or two I’ve been on it.
Now, YC certainly benefits from it, it has YC-oriented posts, and so on. It is their site, with a good chunk of startup fans in the community. So, let those two groups get some benefit, too. I’m cool with it as long as it’s mostly not that stuff. The front page has just two non-YC business articles.
[Comment removed by author]
:)
But nevertheless paradoxical: as communities grow larger, they become less diverse, not more.
The solution would be to join many small communities rather than a few large ones, while still getting access to many people.
I found that Parable of the Polygons, a short essay with fun interactive visualizations, made that intuitive to me. Larger groups will default to be less diverse than smaller ones without constant work to keep that from happening.
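For anyone curious, the dynamic the article animates is essentially a Schelling segregation model, and it’s small enough to sketch. This is my own toy version, not the article’s code (the shapes, threshold, and grid size here are my assumptions): agents on a line move to a random empty cell whenever fewer than a third of their neighbors share their shape.

```python
import random

def step(grid, threshold=1/3):
    """One round of a 1-D Schelling-style model.

    Each agent counts its immediate neighbors; if fewer than
    `threshold` of them share its type, it moves to a random
    empty cell. Empty cells are represented by None.
    """
    empties = [i for i, c in enumerate(grid) if c is None]
    if not empties:
        return grid
    for i, c in enumerate(grid):
        if c is None:
            continue
        neighbors = [grid[j] for j in (i - 1, i + 1)
                     if 0 <= j < len(grid) and grid[j] is not None]
        if neighbors and sum(n == c for n in neighbors) / len(neighbors) < threshold:
            j = random.choice(empties)      # relocate the unhappy agent
            grid[j], grid[i] = c, None
            empties.remove(j)
            empties.append(i)
    return grid

random.seed(0)
grid = [random.choice(["▲", "■", None]) for _ in range(60)]
for _ in range(100):
    step(grid)
print("".join(c or "·" for c in grid))
```

Even with that mild one-third preference, repeated steps tend to pull the line into homogeneous runs, which is exactly the article’s point.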
I love that visualization.
I am not sure how you arrived at that conclusion though. If I remember it correctly, the article arrives at a conclusion that demanding diversity lowers segregation. But this doesn’t necessarily mean larger groups are less diverse.
Do larger groups actually tend to be less diverse? I really don’t know, but looking at tropical rain forests, I wouldn’t bet on that. I guess it depends on the initial condition.
Yes, your summary is accurate. I guess what I’m bringing in addition to what the article says is my knowledge that demanding diversity is easier in smaller groups because there are few enough people hostile to it that it’s practical to engage each one directly and talk through things. The article seems to mostly be thinking about where people live, which is certainly a very important topic, but it’s doing so at an abstract level that can be applied in other ways also.
There’s further discussion needed to adapt the article’s thesis - that diversity will not happen unless people actively prefer it, even if weakly, over homogeneity - to online communities. Everywhere you look at polygons “moving” in the visualizations, imagine that what they’re specifically doing is focusing more of their attention on Lobste.rs instead of on Hacker News… Then think of it from the narrow perspective of us being Lobste.rs; we perceive these movements as our community growing.
People who actively prefer diversity are a tiny fraction of the general population. It’s not something there’s consensus on here on Lobste.rs either, but the fraction is certainly larger here. So most times when the community grows, the growth brings us towards the larger world’s status quo.
In retrospect, I am still glad I cited the article because it’s very important background information, but I appreciate your questioning my reasoning, and I hope I’ve elaborated a bit.
I was thinking it would be awesome to have a reddit-esque platform that randomly creates perfectly sized communities: large enough that there’s always discussion, but small enough that you begin to recognize a lot of the posters.
That sounds exactly like how subreddits work! There are many small, active communities there if you are able to find them.
I agree with this. I’ve never seen a large community be surprising, change direction, or try to learn from its failures.
There’s still interesting stuff to be said about what makes it “large”, and how a large IRC channel is far fewer people than a large web forum. But I don’t have much in the way of insight there, so…
INT. LOBSTE.RS CONFERENCE ROOM
PERSON 1: As an OpenBSD developer, I—
PERSON 2: You’re an OpenBSD developer? That’s funny, so am I!
PERSON 3 (from under the table): Hey, I’m also an OpenBSD developer!
PERSON 4 (emerging from the ductwork): Me too!
POTTED PLANT IN THE CORNER: I, too, am an OpenBSD developer.
I don’t like the tone of the article. I understand there can be, and are, better practices than using assert.
I don’t like double inversion, this is confusing:
assert(!coins->vout[nPos].IsNull());
Assert is disabled with NDEBUG. The Debug vs Release issue the author is talking about seems more like an IDE thing.
Assert isn’t meant for error recovery (if it crashes the program, so be it. It’s better than running the program any further).
“Bad Code” and “Bad Programmers” just seem like random insults thrown about.
The author of the article is pointing out genuine misuse of assert() in the program. Asserts are used explicitly to catch “impossible” program states, thus helping the debugging and development process. They are meant to be redundant. If an assertion fails, it shows that you have a bug in your code. The fact that the original authors of the code are using these as a way to check user input, as well as check if something that has side effects has succeeded, shows that they have a fundamental misunderstanding of what asserts are actually used for.
Assert is disabled with NDEBUG. The Debug vs Release issue the author is talking about seems more like an IDE thing.
This is more than an “IDE thing”. Programs depend on these compile-time definitions to include or exclude special debug behavior, like asserts or extra warnings. This source was abusing asserts, which, as a side effect, made the DEBUG/NDEBUG symbols effectively useless, since the program’s behavior now depends on it only ever being built in “debug” mode.
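To make the failure mode concrete, here is a sketch of the anti-pattern in Python, where running with `python -O` strips assert statements much like compiling C/C++ with `-DNDEBUG` does. This is an illustrative toy of my own, not the actual Bitcoin code:

```python
# Python's analogue of NDEBUG: `python -O` removes every assert
# statement entirely, so any side effect inside one silently vanishes
# in "release" mode.

balance = {"amount": 100}

def withdraw(account, amount):
    """Mutates state and reports success: a side-effecting call."""
    if amount > account["amount"]:
        return False
    account["amount"] -= amount
    return True

# BAD: under `python -O` (or NDEBUG in C/C++), the withdrawal
# simply never happens, changing the program's behavior.
assert withdraw(balance, 30)

# GOOD: do the work unconditionally; assert only the redundant check.
ok = withdraw(balance, 30)
assert ok

print(balance["amount"])  # 40 normally; 70 under `python -O`
```

The second form keeps the side effect in both build modes; only the redundant sanity check disappears when asserts are stripped.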
shows that they have a fundamental misunderstanding of what asserts are actually used for
This is a microcosm of why I don’t like the style of the OP. Why do we need to make inferences about the programmer’s mental state? Why can’t we just talk about the code rather than leaping to conclusions?
bad programmers tend to use them badly
most programmers don’t really understand
That’s why they
the typically wrong ways bad programmers use asserts
More generally, though, it shows why there’s a difference between 1x and 10x programmers. 1x programmers, like those writing Bitcoin code, make the typical mistake of treating assert() as error checking. The nuance of assert is lost on them.
The technical content of this post is good. I agree with it. But there’s a lot of distracting material around it. Presumably, the goal of writing this post and sharing it with others is to educate others. But the post, while containing educational content, is also insulting the very people it’s trying to educate!
Now, I don’t claim to be an expert in pedagogy, but I can’t recall ever having success teaching folks while simultaneously insulting them. Maybe it’s some sort of “tough love” strategy? I don’t know, but I don’t like it.
Now, if the point of this post was not to educate, but rather, to snub one’s nose, then sure, this post is a home run. But I can’t stand superciliousness either, so posts like this are a lose-lose in my book.
Presumably, the goal of writing this post and sharing it with others is to educate others. But the post, while containing educational content, is also insulting the very people it’s trying to educate!
I agree with this, but this is coming from some Bitcoin source code (I’m still not sure I understand which source tree this is from) and it’s code that deals with financial transactions. The developers have the obligation to their users to ensure that their code is correct for that reason alone. If you can’t trust these devs to use assertions (which help in checking the correctness of your program), would you trust them with sending your money through their system? I certainly wouldn’t.
Is the tone of the article harsh? Absolutely. But when you’re writing code that deals with other people’s money especially, you need to scrutinize every last detail of the code. The quality and practices the authors employ are not exempt from that.
you need to scrutinize every last detail of the code
Please note that I didn’t contest this and very strongly agree with scrutinizing code. I’d be happy to rephrase my point if you like, but I’m not sure where the misunderstanding is.
Sure, I can understand it’s more than an IDE thing. But programs depending on it? That seems more like a convention (an abused one?). I’d be happy to learn more about that.
Is there a strict code for assert adherence somewhere? When was it universally decided that asserts are bad in release code? I believe assert has its value even in release builds.
I’m probably in the minority here, but I regularly check the all-comments link. It’s much faster than trying to scan the front page and discern which thread counts have changed, and it’s additionally helpful for spotting otherwise-unnoticed comments.
I do this as well. The comments are like half the value of lobste.rs for me, so I appreciate being able to follow them.
Interesting, I am going to give it a try.
I am curious to know the comment rate. ~15-20 comments per hour (based on the last 3 hours), not so high for the previous hours.
Is lobste.rs site/traffic data accessible to users? I bet we could come up with a few interesting ways to curate our information based on some trends. And a few memes of course.
It’s usually fairly slow unless there’s a hot thread going. So Mondays, with the “What are you working on” thread, are usually much higher traffic than otherwise.
You have to first install a program on the computer, then you have to have a detector with line of sight to the HDD LED. I suppose this would make a good plot for a TV episode, but at this level of intrusion, why not monitor the user’s screen?
Years ago there were a lot of modems and network devices (routers, etc.) where the Tx and Rx LEDs (or equivalent) had sufficiently fast response times that you could actually read the data going through the device [cite]–no special software required.
That’s one of the neatest side-channel attacks I’ve seen. Intercepting data just by looking at the device.
The novelty is in leaking the information covertly with some sort of control. Observing a user’s screen isn’t exactly the same.
Can someone clarify what exactly this implies? Previously, one could compile Signal without GCM support, but phone calls wouldn’t work, and there would be battery issues as well… are those issues fixed?
Previously nothing would work in standard Signal without GCM, it required patches to get messaging working and even with the patches voice wouldn’t work. Standard Signal should now work without GCM and voice works too (via the new WebRTC code). Battery life will be worse as it can’t use a shared GCM connection for the push notifications, but how much worse depends.
I use Copperhead OS. There is a port of Signal called Noise. It works without Play Services and is available on F-Droid. It’s packaged and maintained by Copperhead.
Can’t promise it will work for you, but here is the link: Noise (Signal-compatible encrypted messaging app) - https://f-droid.org/app/co.copperhead.noise
This is a repost. https://lobste.rs/s/ibnwxa/essence_linear_algebra
The series is great. To top it off, he programmatically generates the animations [Python + AggDraw]
https://earth.nullschool.net Looks very similar to the above. I remember looking at the javascript there. It was based off of d3 I believe.
Technically UDP is 11 months older than TCP. :)
What you may mean is, TCP is more specific than UDP.
Quoting a colleague of mine:
You can potentially implement TCP over UDP, but not the other way round.
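That direction is easy to sketch. The following is a toy stop-and-wait layer over UDP, entirely my own illustration: it recovers one TCP-like property (ordered, acknowledged delivery), while real TCP additionally provides flow control, congestion control, connection management, and more.

```python
import socket

# Toy stop-and-wait reliability over UDP. Each packet carries a
# 4-byte sequence number; the sender retransmits until it sees a
# matching ACK, and the receiver delivers packets in order.

def send_reliable(sock, addr, messages, timeout=0.5, retries=10):
    sock.settimeout(timeout)
    for seq, msg in enumerate(messages):
        packet = seq.to_bytes(4, "big") + msg
        for _ in range(retries):
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(16)
                if int.from_bytes(ack, "big") == seq:
                    break          # acknowledged; move on
            except socket.timeout:
                continue           # lost packet or ACK: retransmit

def recv_reliable(sock, count):
    expected, out = 0, []
    while expected < count:
        packet, addr = sock.recvfrom(2048)
        seq = int.from_bytes(packet[:4], "big")
        if seq == expected:
            out.append(packet[4:])
            expected += 1
        # ACK whatever arrived (duplicates get re-ACKed too)
        sock.sendto(packet[:4], addr)
    return out
```

The reverse direction is impossible because TCP’s byte-stream abstraction has already discarded datagram boundaries and will never deliberately drop or reorder data for you.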
This seems like the start of an interesting puzzle game. How do you make the fewest moves without making any of the polygons more upset? Or maybe: what’s the smallest total distance polygons have to move to leave the fewest upset?
As a social commentary it seems over-simplified and racist, though. What about polygons who don’t identify as their shape? Or polygons who move for career opportunities and don’t care about the shape of their neighbors?
It’s been a while since I’ve seen this, but looking over it now, it is talking about shapes and a social phenomenon. Obviously this is a metaphor for the influence of race on neighborhood selection, and some of those points might be racially uncomfortable, but it certainly isn’t encouraging anything problematic (in fact, I think it’s doing the opposite).
What about polygons who don’t identify as their shape?
If you’re being genuine, then the answer is this: you can’t really choose your racial identity, at least not in a social sense (have people see you/treat you as that identity). I’m white, I cannot identify as Black. Like, I can say the words “I identify as Black”, but I’m not really doing anything meaningful of the sort. Similarly, a Black woman can’t identify as non-black and suddenly have the world open up to her.
If you are making a “but why can’t I identify as <gender-i-perceive-as-made-up-or-inconsistent>” “joke”, then that’s transphobic. Trust me, I’d know.
Or polygons who move for career opportunities and don’t care about the shape of their neighbors?
As social commentary, you’re missing the point. Look up all of the different forms of housing discrimination that Black communities have had to face throughout previous generations, especially how many polarized neighborhoods there are (see The Case for Reparations for an overview with solid research and personal experiences). So really the article is saying that even small amounts of racial bias can have a large impact on how communities self-regulate to become racially homogeneous. And the fact that some people in otherwise segregated areas (i.e., communities where more racial bias is present) would choose to live in a neighborhood otherwise entirely of another race doesn’t really affect that point.
Is it oversimplified? Maybe, but that’s only because of the way we are assigning it to a social stance. It is a powerful visualization of how far even small racial biases can go. You shouldn’t be so dismissive.
If you are making a “but why can’t I identify as <gender-i-perceive-as-made-up-or-inconsistent>” “joke”, then that’s transphobic.
Perhaps, but it is also a valid question if asked in earnest.
Is it oversimplified? Maybe, but that’s only because of the way we are assigning it to a social stance. It is a powerful visualization of how far even small racial biases can go. You shouldn’t be so dismissive.
I didn’t read the GP as being dismissive so much as critical, for what it’s worth.
For example, the simulation doesn’t really account for “no preference”. It also doesn’t explore, perhaps quite rightly, the influence of moving costs or property or jobs.
If taken more abstractly, it is a wonderful demonstration piece, but as-is it runs counter to the experience of anybody who has voluntarily lived outside their own group for purely economic reasons.
If you are making a “but why can’t I identify as <gender-i-perceive-as-made-up-or-inconsistent>” “joke”, then that’s transphobic.
Perhaps, but it is also a valid question if asked in earnest.
I’ve never heard it asked in earnest. I’m fond of this response; I feel that’s roughly the level of engagement that the question deserves.
but as-is it runs counter to the experience of anybody who has voluntarily lived outside their own group for purely economic reasons.
I am curious to know what is the experience of somebody who lives outside their own group, as seen by you? From my experience there couldn’t be one that fits all.
There is very much not a one-size-fits-all experience!
That said, in my observation (and without getting too far off-topic), there are a lot of people for whom access to jobs is what matters, and if they have sufficient means they will move wherever they need to in order to get access to jobs. If they have kids, to schools. If they’re unattached, nightlife. Living with other folks of the same racial background is purely correlative and not causative at that point.
I have a friend who moved their family into the 3rd ward in Houston and lived there for a few years because it let them cycle to work. Only after having their bike stolen twice, car once, and shootings in the area have they considered moving. I’ve had another friend live for over a year in an apartment complex quite outside of their racial/socio-economic/sexual groupings simply because that’s where they ended up when January rolled around and they needed a cheap place that let them have pets.
I’ve had other friends move from Houston to less diverse areas and, despite getting to live with folks who are closer to them in group alignment, express annoyance at not having access to the food and culture that they left behind–so, they’re at that other end of the spectrum, where diversity was their big concern.
All this is to say…we all pay a great deal of lip-service to diversity and whatnot, but when the chips are down people are gonna do what is most economically effective–if that means moving as a straight Catholic into the gayborhood, or building your townhouses in the middle of a Hispanic neighborhood, or eating at the Vietnamese place because it’s consistently half the price of any of the other food options (your own nominal heritage’s included), so be it.
Of course, it’s a lot harder to make convincing simulations of economic differences and a call-to-action because of a whole host of historical baggage.
Thanks for sharing your thoughts. I appreciate what you said.
I don’t really foresee a call-to-action (based on diversity statistics). It has been and will be something of a process where people settle into different behaviors based on their situation (in the article, say a behavior is a neighborhood). If your situation is dire, diversity is hardly the first thing on your mind. Basically, if you have an exam tomorrow, you may not be looking into alternate ways of solving the problem at hand after you’ve found one that is in line with what the professor has taught you.
The way I interpret it: the article is begging you to keep an open mind to diversity whenever possible; by saying that if you are not aware of it, you are going to end up being that statistic that is causing division. (With something close to a proof from statistics)
If you are not pushing against that boundary in your head every so often (especially when you can), you are passively doing a disservice to humanity. :-p
What about polygons who don’t identify as their shape?
I’m gonna bite, despite agreeing with Irene’s comment that this is almost never asked in earnest, and I’m not convinced it is here.
The reality is, society at large treats you as they perceive you. If they perceive you as a square, you’ll get treated like a square. If they perceive you as a triangle, likewise. In all the respects that “identify as a shape” can actually map onto reality — this doesn’t apply to race, but does apply to gender — people usually make efforts to change the way they’re perceived. They’re often successful at this, and as a result (going back to the analogy), the shape that identifies as a circle (regardless of what shape they started as) is perceived as a circle and treated as such.
For shapes who identify as neither square nor triangle, it again comes down to how they’re perceived. Shapes tend to want to fit other shapes into either the ‘square’ or ‘triangle’ pigeonholes, even in spite of strong evidence that the pigeonholes don’t completely describe the shapes available.
Some shapes identify as the binary opposite to that which they started out at, but for many reasons — often class related — they can’t do everything others can to present as their identified shape. This doesn’t just mean the square identifying as a triangle gets treated as a square — they get perceived as a square “trying to be” a triangle, and the treatment Shape Society™ offers them is far, far worse.
In short, the social commentary is entirely adequate for what it’s describing, and your tangent is pretty tangential, but I hope maybe this helps elucidate something.
This shouldn’t be tagged javascript, because it doesn’t talk about javascript.
This shouldn’t be tagged math, because it doesn’t present any math.
This should be tagged visualization, because it’s a visualization.
I was really impressed with the javascript, and went for it, missed the visualization tag (probably lost focus by the time I got down to v). But fair point.
I am not going to argue over the math tag, but if people here think it doesn’t deserve a math tag, I will remove it. I thought it was somehow deserved.
Perhaps culture.
I definitely thought so.
So simple, yet profound.
Be aware of the local minima you are settling into.
The killer features for Python 3, IMHO, are the clean Unicode/bytes distinction and the normalization of the object model.
The problem with Python 3 is, of course, that Python 2 is “good enough”.
I have spent the last couple of days wrangling with generators and co-routines in Python2. I wish the project I’m working on had been started in Python3 (considering it’s only 2 years old). Python3 is light years ahead in terms of async programming features baked in.
You can start calculating by looking at Python 3.0’s release date (in 2008) and then wonder why it hasn’t been adopted within 8 years. But I think of this differently.
The first viable candidate for a Python 3 migration is Python 3.4. It’s the release you pick when you want to maintain a 2 and 3 compatible code base, which is what many libraries need to do. And it is kind of known in the community that 3.0 had a few regressions in terms of stability and speed over 2.7 (I never used 3.0, so this is hearsay; nevertheless, even that hearsay would keep me from migrating 2.7->3). Now Python 3.4 was released in 2012, which is basically 4 years ago. 4 years is a long time, but it definitely isn’t 10 years.
The problem with Python 3 is, of course, that Python 2 is “good enough”.
Indeed, and during Python 3’s development and release cycle, Python 2.7 also saw improvements, patches, etc., which means it didn’t immediately start to decay like any other unmaintained software would.
Now, if you run Python on a desktop, it was definitely possible to use Python 3.4 as soon as it was released. But if you run Python on servers, you are bound by the release schedule of your distribution. So until you migrated from Debian wheezy to Debian jessie, you were essentially bound [$] to Python 2. Coincidentally, 2015 was the year when I started to hear many success stories at Python conferences on the hallway tracks: “We did it”, or “I was surprised how easy it was”. Now is the time when many passionate Python devs realize: ah, I can finally use Python 3, let’s see if my code will work on it. And surprise, surprise, it works.
[$] And since Python is often used in Linux distributions, you are not as likely to manually roll out an official release, but rather stick to the distribution-provided one. If you ran Haskell, for example, you would probably have independent update cycles for your Linux distribution and for Haskell.
clean Unicode/bytes distinction
The author argues it isn’t clean, and it’s actually obnoxious, counterintuitive, and frustrating.
In an attempt to make their strings more “international” they turned them into difficult to use types with poor error messages. Every time you attempt to deal with characters in your programs you’ll have to understand the difference between byte sequences and Unicode strings.
If we have a string and a bytes type then when we [concatenate them] we get an error, and [using .format()] we get a repr formatted byte string.
I agree with the author on this, even if the rest is classic Zed Shaw angry rambling. It’s a huge pain in the ass. Go has a clean unicode / bytes distinction, python3 does not. Or maybe it does, and I’m just a foolish peasant for wanting to write scripts in python without screwing around with unicode.
It’s a strange argument. I own a codebase that works in both 2 and 3, and by far the most annoying errors I have to chase around are Python 2’s unicode/bytes confusion. Python 3’s hard distinction is a much better experience for me, and leads to cleaner code with fewer “just in case” calls to .decode() and .encode().
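To make that concrete, here is a minimal sketch (the strings are made up for illustration) of what the hard distinction buys you: the error surfaces at the point of mixing, not three function calls later.

```python
# Python 3 refuses to silently mix text and bytes.
data = "café".encode("utf-8")      # bytes: b'caf\xc3\xa9'

try:
    message = "prefix: " + data    # str + bytes
except TypeError as exc:
    print("caught early:", exc)

# The fix is one explicit decode at the boundary:
message = "prefix: " + data.decode("utf-8")
print(message)                     # prefix: café
```

In Python 2 the same concatenation would silently coerce, or raise a UnicodeDecodeError far from the real bug, which is exactly the class of error being chased around above.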
Or maybe it does, and I’m just a foolish peasant for wanting to write scripts in python without screwing around with unicode.
I have been writing scripts in Python 3 for years now and haven’t experienced any issues like this. Can you show an example?
The first script I tried to write in python3 was working with hex, which appears to be a fairly pathological case. As such, I remember it vividly. This is approximately what happened:
$ python3
>>> 'text'.encode('hex')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
LookupError: 'hex' is not a text encoding; use codecs.encode() to handle arbitrary codecs
>>> import codecs
>>> codecs.encode('hex', 'text')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
LookupError: unknown encoding: test
>>> codecs.encode('text', 'hex')
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/encodings/hex_codec.py", line 15, in hex_encode
return (binascii.b2a_hex(input), len(input))
TypeError: a bytes-like object is required, not 'str'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: encoding with 'hex' codec failed (TypeError: a bytes-like object is required, not 'str')
>>> codecs.encode(b'test', 'hex')
b'74657374'
>>> h = codecs.encode(b'test', 'hex')
>>> [h[i:i+2] for i in range(0,len(h),2)]
[b'74', b'65', b'73', b'74']
>>> ' '.join([h[i:i+2] for i in range(0,len(h),2)])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sequence item 0: expected str instance, bytes found
>>> ^D
$ python2
>>> import codecs
>>> codecs.encode('text'.encode(), 'hex')
b'74657874'
End-to-end it’s not all that bad. What would it be like in any other language that actually distinguishes text and bytes?
$ python3
>>> 'text'.encode('hex')
It’s usual for encoding names to be character encodings: UTF-8, BIG5, ASCII. Does hex really fit there?
>>> codecs.encode('hex', 'text')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
LookupError: unknown encoding: test
Seems like test and text got a little mixed up here in the transcript.
>>> h = codecs.encode(b'test', 'hex')
>>> ' '.join([h[i:i+2] for i in range(0,len(h),2)])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sequence item 0: expected str instance, bytes found
>>> h = codecs.encode(b'test', 'hex')
>>> b' '.join([h[i:i+2] for i in range(0,len(h),2)])
b'74 65 73 74'
Beware the FUD.
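For completeness: on newer Python 3 releases the codecs detour isn’t even needed, since bytes grew a built-in hex() method (in 3.5, if I remember right; the separator argument came later, in 3.8):

```python
data = b"test"
print(data.hex())                  # '74657374' (Python 3.5+)
print(data.hex(" "))               # '74 65 73 74' (separator argument, Python 3.8+)
print(bytes.fromhex("74657374"))   # b'test', for the round trip
```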
It’s usual for encoding names to be character encodings: UTF-8, BIG5, ASCII. Does hex really fit there?
No, it does not. Python 2 also has a rot13 codec (for the ‘this’ module, I suppose), which was probably also axed.
Same here, I’ve spent a few days tracking down unicode related bugs in Python 2, so far Python 3 has been much better when it comes to unicode. (or maybe I just didn’t hit the edge cases yet)
I would divide Python unicode users into three groups:
IMO, Python 3 has been a huge failure, and while I’m sure it’s painful for the core development team, it should be dropped sooner rather than later.
As a Korean, I have a problem with this:
For non-English speaking beginners, Python 3 definitely wins, but probably not by all that much as most non-English speakers are used to painful internationalization issues (I know, this isn’t fair, but it’s true).
This is not true, at least not here, and I really can’t imagine how it would be true elsewhere. If someone is “used to” painful internationalization issues, how can they be a beginner? To state it the other way: if there are such “know internationalization” beginners, there must be even more “don’t know internationalization” beginners.
A beginner who’s used computers is used to having to use an ascii-fied version of their name in popular computer programs. Eevee’s response post gives the example:
Hi, my name is Łukasz Langa. .agnaL zsaku�� si eman ym ,iH
In my experience any beginning-programming Łukasz who has used a computer for a while is used to Łukasz randomly turning into ��ukasz, and so won’t find it particularly surprising or puzzling that their program does that.
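The mangling in that example is easy to reproduce. Python 3 reverses code points correctly, while reversing the underlying UTF-8 bytes (which is all a Python 2 str slice does) splits the two-byte sequence for ‘Ł’:

```python
name = "Łukasz"
print(name[::-1])        # 'zsakuŁ' — code points reversed, accent intact

raw = name.encode("utf-8")   # b'\xc5\x81ukasz'; 'Ł' is two bytes
backwards = raw[::-1]        # the 'Ł' bytes are now split and out of order
print(backwards.decode("utf-8", errors="replace"))   # 'zsaku��'
```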
I agree, but I still don’t see how that makes Python 3’s definite advantage compared to Python 2 “not all that much”.
so won’t find it particularly surprising or puzzling that their program does that
Your argument is “it sucks everywhere therefore we shouldn’t even try to do better”? I think that is sad. And international names might also happen when you’re a native speaker: your form application might be used by a foreigner and then it breaks in surprising ways (kinda like Amazon fails to deliver diacritics). I don’t think that having a ticking time bomb bug in your program is a good state to be in and getting a Python 2 program to be all-Unicode is rather hard.
My argument is “it sucks everywhere so doing it better is clearly not something people are that bothered about, in a revealed-preferences sense”. In an abstract sense we should all make our programs better in all respects. Concretely, correct handling of non-ascii should probably be a low priority for language designers, since the available evidence suggests it’s a low priority for language users.
- For English speaking beginners and those writing simple scripts that don’t need unicode, Python 2 wins by a mile.
That is an argument, until you run into some bug because that piece of pure-English text contained some stray unicode symbols. Also, I don’t see how beginners would have a very hard time in Python 3 because of the bytes vs strings distinction.
A definite change for the worse for beginners was making many functions lazy by default. It really makes it harder for them to experiment at the Python prompt.
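For reference, assuming “lazy by default” means the builtins that now return iterators (map, filter, zip, dict.keys(), etc.), the prompt-unfriendliness looks like this:

```python
squares = map(lambda x: x * x, [1, 2, 3])
print(squares)          # <map object at 0x...> — not the list a Python 2 user expects
print(list(squares))    # [1, 4, 9]
print(list(squares))    # [] — the iterator is exhausted after one pass
```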
To be honest, handling emojis is not a beginner task. Running into bugs is not a beginner task. Teaching resources should be carefully prepared so that learners do not run into bugs.
With controlled “input data”, I don’t see how Python 3’s unicode handling would be more hostile to beginners.
But what do you think Python3 is killing?
It’s clearly killing python2, but programmers, instead of redeveloping their python2 knowledge inside python3 are considering other, more durable languages.
I’ve still got C and perl programs more than twenty years old that are running in production (and on the Internet).
I would actually wait for GDPR to kick in before deleting Facebook, or any other online account for that matter, so that keeping user information even after a user has requested deletion is simply against the law.
I don’t think the fines for violating GDPR are large enough to make Facebook think twice about ignoring it. Short of dissolving Facebook and seizing its assets under civil forfeiture, no civil or criminal penalty seems severe enough to force it to consider the public good.
Actually, they are very large: up to 4% of annual global turnover [0].
Based on 2017 revenue [1] of $40B, that’s $1.6 billion.
But it’s not just the fines. The blowback from the stock hit and shareholder loss, as well as cascading PR impact, is a high motivator too.
[0] https://www.gdpreu.org/compliance/fines-and-penalties/ [1] https://www.statista.com/statistics/277229/facebooks-annual-revenue-and-net-income/
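Sanity-checking the arithmetic, assuming the 4%-of-global-turnover cap:

```python
revenue_2017 = 40e9                       # USD, from [1]
fine_cap = 0.04 * revenue_2017            # 4% cap, from [0]
print(f"{fine_cap / 1e9:.1f} billion")    # 1.6 billion
print(fine_cap / revenue_2017)            # 0.04
```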
0.04 << 1 until you can quantify the cascading PR impact. It will not affect their day-to-day operations from an economic standpoint.
I would be curious to know how many people have actually taken action on their FB usage based on the recent CA news outbreak. I am willing to bet it’s minuscule.
1.6 billion dollars vs deleting the data of one user who wants to leave?
The fines are per distinct issue (not per number of people affected). If Facebook breaches GDPR with multiple issues, then it could get hit for a large percentage of its annual revenue.