Called it.
https://lobste.rs/s/w6d7t0/police_searches_homes_zwiebelfreunde#c_pct87d
It’s not like I’m clever here; this is a really frequent, standard pattern in Germany, as illegal searches have essentially no repercussions for the police and “fruit of the poisonous tree” doesn’t exist.
It doesn’t address the core problem. Most OSS companies have a business model that revolves around support. If a large hosting provider like Amazon comes in and provides an “as a service” version, that cuts off a primary revenue stream. If said hosting provider doesn’t produce improvements to the codebase then AGPL doesn’t matter.
I thought the AGPL was specifically designed to prevent that. Or do you mean that Amazon recreated their own version from scratch?
AGPL says you have to release improvements. It doesn’t make you contribute to the community.
If a community is getting a lot of financial support from a company like Redis Labs paying for core open source work, a company like Amazon can come along, offer an “as a service” version, and contribute nothing. AGPL does nothing about that.
The issue is many projects are pushed forward by commercial offerings that rely on support/services as a means to provide financial support. Our open source licenses provide no protection for that model.
Perhaps the model is flawed and we need something better. But there is no protection from parasitic behavior in that case.
It extends further, though: in general, there’s no worry-free way for an open source community to financially support itself without relying on free labor. But that’s another topic.
The world has changed around free and open source. The licenses haven’t adjusted to that change, beyond the AGPL being created to address some issues.
I personally don’t think that commons clause is the right solution but I understand the problem they are looking to solve.
Apologies for any typos. I answered this from my phone.
Companies dual-license under both GPL and AGPL. So, it could be done AGPL with cloud vendors paying a license. There’s a lot of FOSS developers that oppose the AGPL, though.
It absolutely is. Read the FAQ section on the AGPL, it’s very unclear. ‘Many features of the AGPL…’ kind of language. What features? It’s not the Linux kernel, it’s a license, it’s pretty small, just say what these supposed features are.
Of course the reason they don’t is that it’s a smokescreen: the AGPL is of course fine, but their goal isn’t to make the software free, it’s to profiteer off it.
Yes, Redis Labs is in the business of paying people to work on Redis and the Redis ecosystem and needs to make money to do that. The business model for companies such as that is based on support. If someone cuts off that revenue stream, the money falls apart. We can as a community accept that such companies will need to build protections for themselves (licenses like the Commons Clause, or having some closed source components), or accept a world in which there are no companies that exist to support specific products that could be turned into an “as a service” offering by a large player.
The AGPL does nothing to stop someone like AWS from taking what Redis Labs does and making money off of it and wrecking the Redis Labs business model (which is shared by a number of companies). I commend them for trying an approach that leaves the module source available and even “open” for some segment of the user base. The alternatives are “new business model”, “go out of business”, or starting to make more and more of their offerings closed source.
The AGPL does nothing to stop someone like AWS from taking what Redis Labs does and making money off of it and wrecking the Redis Labs business model (which is shared by a number of companies).
Nonsense. AWS wouldn’t touch an AGPL redis with a ten foot barge pole.
AGPL/commercial dual licensing is actually open source.
I commend them for trying an approach that leaves the module source available and even “open” for some segment of the user base. The alternatives are “new business model”, “go out of business”, or starting to make more and more of their offerings closed source.
Calling this open is literally telling a lie.
I haven’t had ads on my blog in over a decade. I’ve been meaning to remove the Facebook Page/Twitter widgets too when I get around to my redesign, since I’m pretty much giving both companies free information with them.
There are a lot of implementations that load the widget only once the user wants to use it. They are pretty common in Germany and pretty much work by having the button “primed” with one click, which loads and activates the JS and the widget.
You might consider using these or similar social sharing buttons without javascript or tracking.
Well, so one of my Berlin Rust Hack & Learn regulars is porting rustc to GNU Hurd. I can switch soon; year of the desktop is 2109.
I thought that BeOS was microkernel-based, given what so many said. waddlesplash of Haiku countered me, saying it wasn’t. That discussion is here.
QNX, Minix 3, or Genode get you more mileage. At least two have desktop environments, too. I’m not sure about Minix 3 but did find this picture.
They’re what’s called hybrid kernels. They have too much running in kernel space to really qualify as microkernels. Using Mach was probably a mistake. It’s the microkernel whose inefficient design created the misconceptions we’ve been countering for a long time. Plus, if you have that much in the kernel, you might as well just use a well-organized, monolithic design.
That’s what I thought for a long time. CompSci work on both hardware and software has created many new methods that might have implications for hybrid designs. Micro vs something in between vs monolithic is worth rethinking hard these days.
That narrative makes it sound like they took Mach and added BSD back in until it was ready, when in fact Mach started as an object-oriented kernel with an in-kernel BSD personality, and that was the kernel NeXT took, along with CMU developer and Mach lead Avie Tevanian.
That was Mach 2.5. Mach 3.0 was the first microkernel version of Mach, and that’s the one GNU Mach is based on. Some code changes were backported to the XNU and OSFMK kernels from Mach 3.0, but they were always designed and implemented as full BSD kernels with object-oriented IPC, virtual memory management and multithreading.
Yeah, I didn’t study the development of Mach. Thanks for filling in those details. That they tried to trim a bigger OS into a microkernel makes its failure even more likely.
I don’t follow the reasoning; what failed? They didn’t fail to make a microkernel BSD, as Mach 3 is that. They didn’t fail to get adoption, and indeed it’s easier when you’re compatible with an existing system.
They failed in many ways:
Little adoption. XNU is not Mach but incorporates it, whereas the Windows, Linux, and BSD kernels are used directly by large install bases.
So slow as a microkernel that people wanting microkernels went with other designs.
Less reliable than some alternatives under fault conditions.
Less maintainable, such as easy swaps of modules, than L4 and KeyKOS-based systems.
Due to its complexity, every attempt to secure it failed. Reading about Trusted Mach, DTMach, DTOS, etc. is where I first saw this. All they did was talk trash about the problems they had analyzing and verifying it vs other systems of the time, like STOP, GEMSOS, and LOCK.
So, it was objectively worse than competing designs then and later in many attributes. It was too complex, too slow, and not as reliable as competitors like QNX. It couldn’t be secured to high assurance either ever or for a long time. So, it was a failure compared to them. It was a success if the goal was to generate research papers/funding, give people ideas, and make code someone might randomly mix with other code to create a commercial product.
All depends on viewpoint of or requirements for OS you’re selecting. It failed mine. Microkernels + isolated applications + user-mode Linux are currently best fit for my combined requirements. OKL4, INTEGRITY-178B, LynxSecure, and GenodeOS are examples implementing that model.
Yes, but with most of a BSD kernel stuck on and running in the same address space. https://en.wikipedia.org/wiki/XNU
This is nice! It’s sometimes a bit weird, though.
https://repodig.com/repositories/89
Rust-lang certainly has >60 contributors.
I’m a bit surprised by the color-coding. 2 days mean and 3 days median time sounds like a good time for such a large project? Same goes for the open requests numbers… what would be needed to become “green”?
I’ll try to debug the contributors thing, this seems to be a problem for some (but not all) repos.
Color coding: agreed, it’s pretty dumb now. Basically, open=yellow, closed=green, regardless of the numbers. I’ll fix this asap.
Thanks!
Despite not having any need for Rust (just as I have no need for C++ in the things I work on - performance at that level just doesn’t matter for my projects), I am continually surprised at just how great Rust is as a language and as a community. It just seems to be filled with helpful friendly people. It’s certainly not the only community that’s friendly, I’ve found Python and Lisp to both be pretty friendly communities too, but reading stuff like:
There’s been quite a bit of noise recently about the amount of unsafe code in the actix-web framework. I won’t discuss the merits of grabbing the pitchforks as soon as someone writes code of a buggy and unidiomatic nature…
And then I click on the link and it’s a bunch of people calmly and rationally discussing some code. For the Rust community, that is grabbing the pitchforks: people saying that some code is concerning in its use of unsafe.
Compare to some communities (cough Javascript) where it’s not unusual to see pull requests and issues on GitHub and other platforms flooded with hundreds of comments absolutely dogpiling someone that wrote some silly code when they were a new programmer that, totally unbeknownst to them, was picked up and used by someone in a large company (with no oversight, clearly) and is now a dependency of a major library.
Also, cool article.
EDIT:
The code is a bit convoluted, because of the reason described in the comment. drop can panic, so the function must decrement the length before dropping an element. In case of a panic, the last element of the Vec will be a valid one.
I was under the impression a panic in Rust was unrecoverable. Does it matter if the data structure is left in an inconsistent state if drop panics? I thought that not having to reason about exception safety was considered a selling point of not having exceptions.
I was under the impression a panic in Rust was unrecoverable. Does it matter if the data structure is left in an inconsistent state if drop panics? I thought that not having to reason about exception safety was considered a selling point of not having exceptions.
Panics in Rust are unrecoverable within a task. You can catch them (for example, before unwinding into a piece of C code, which would be undefined behavior) using std::panic::catch_unwind. Still, there are guarantees around panics, namely that all affected owned values are properly dropped and, especially, that memory safety is not violated in the process. Now, imagine the panicking code has the data structure borrowed and triggers the panic. The original value continues to exist and must be in a consistent state.
Mutexes are a good example of a structure that has to deal with that problem. As Mutexes cannot reason about the operations on the data they contain, they become poisoned in such a situation.
Safe Rust lets you avoid reasoning about exception safety; unsafe Rust, on the other hand… is unsafe ;).
Panics in drops are rare (and generally recommended against), but they may happen…
Another observation: in this case, a panic may lead to a memory leak, but leaking is safe in Rust. Dropping twice would be undefined behavior.
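The decrement-before-drop dance from the quoted comment can be sketched roughly like this (a hypothetical drain_all helper, not the actual code under discussion):

```rust
use std::ptr;

/// Hypothetical sketch: pop elements off the end of a Vec, shrinking
/// the length *before* running each destructor. If a Drop impl panics,
/// the Vec no longer claims ownership of the element being dropped, so
/// it is merely leaked (safe) rather than ever dropped twice (UB).
fn drain_all<T>(v: &mut Vec<T>) {
    while let Some(last) = v.len().checked_sub(1) {
        unsafe {
            // Give up ownership of slot `last` first...
            v.set_len(last);
            // ...then run the destructor in place.
            ptr::drop_in_place(v.as_mut_ptr().add(last));
        }
    }
}
```

If drop_in_place unwinds here, every remaining element of the Vec is still valid and owned, which is exactly the invariant the comment describes.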
While you’re right about Rust community being friendly (even by force at times), the actix discussion wasn’t all happiness. The unsafety was first sort of ignored by the devs, which made some people claim that the developers of actix are completely untrustworthy, and shouldn’t be trusted in any future project either. That understandably made the actix devs a bit unhappy, probably on the verge of just dropping everything and walking away, but fortunately they decided to fix things instead.
So it was pretty good in the end, but the road there was a bit worse than you described, IMHO.
Kinda surprised that reddit - a site which hosts rougher parts of the internet - didn’t have a Head of Security until 2.5 months ago.
Their headcount has always been kinda small I think? You need to hit a certain size before carving out a specific position.
“Kinda small” is ~250 people. They have data on 330 million users.
I wouldn’t tie the position directly to headcount; the question is how much of a security need you have.
Headcount is probably not a good way to measure it; maybe the number of posts like this one is? But that’s true.
My point is, they could have been regularly infiltrated for years and they only noticed now thanks to new talent in house. There’s only so much a jack-of-all-trades team can do while firefighting all the needs.
I’ll add to mulander’s hypothetical that this happened in all kinds of big companies with significant investments in security. They were breached for years without knowing they were compromised. They started calling these “APTs” as a PR move to reduce humiliation. It was often vanilla attacks, or combos of those with some methods to bypass monitoring that companies either didn’t have or really under-invested in. Reddit could be one if they had little invested in security, especially intrusion detection/response.
Because reddit is not hosting financial data or (for the most part) deeply personal data that is not already out in the open, I would assume that they are not that interesting a target for hackers looking for financial gain, but more interesting for script kiddies who are looking to dox or harass other users.
Many subreddits host content and discussions that people don’t want to be attached to. The post even appreciates that and recommends deletion of those posts.
I find it telling that you go out of your way to push people interested in gaining personal data into the script kiddie corner. Yes, SMS-based attacks are in the range of “a script kiddie could do that”, which makes it even worse.
Criminals are using this type of information for targeted extortion and other activities. The general view that this is mostly the realm of “script kiddies” detracts from the seriousness and provides good cover for their activities.
I made an assumption, but reading your reply and that of @skade you are right that there are lots of uses for the data from a criminal perspective, especially for a site the size of reddit.
Thanks for the writeup. However, I’m not sure I share your excitement. I can boil it down to two points:
This reminds me of all the hoopla around the JVM as it was coming out: one language to rule them all! Distributing bytecode instead of compiled binaries! Write once, run anywhere! I see how WebAssembly improves on some of this story, but it still has a sameness to it, except it’s dubbed ‘open.’ As a user, why do I care? As a developer, why do I care? HTML 5 displacing Flash has meant that ads are more invasive and harder to block, honestly.
The importance placed on not diverging from the Web platform doesn’t win any points from me. I get that people dislike crappy implementations of subplatforms that just get in the way, but I also want to see serious competition to HTML/CSS when it comes to laying out UIs. As someone who likes what the web stands for so much and yet loathes the actual workings of it, it feels like things like this subtly reinforce the “everything is web development” hegemony by continuing to prop these things up in the name of compatibility.
Mind you, I’m very happy that JS can finally be displaced as the lingua franca of webapps!
This reminds me of all the hoopla around the JVM as it was coming out: one language to rule them all! Distributing bytecode instead of compiled binaries! Write once, run anywhere! I see how WebAssembly improves on some of this story, but it still has a sameness to it, except it’s dubbed ‘open.’ As a user, why do I care? As a developer, why do I care? HTML 5 displacing Flash has meant that ads are more invasive and harder to block, honestly.
There’s a big difference between Java and WebAssembly. While you could totally imagine a bare JVM that just runs JVM bytecode, it always came with its standard lib, the TLS implementation, and everything around it. And even then, the JVM needs to come with a garbage collector, which wants to take control of all memory. The interesting thing about WASM is that it only specifies portable code, not a portable API. It’s quite different from the JVM in that it isn’t a platform in itself.
I’m not sure if that’s a good thing. Or a bad thing. It feels like it might affect download sizes, but other than that it doesn’t really matter.
@algernon, thanks for your careful reply to my blog post. You’re right that I overlooked the Github API. I just haven’t yet used tools that hook into it. What are some of your favorite tools?
I think it’s still worth comparing the pros/cons of their proprietary API and the number and maturity of its tools with the ecosystem around email, but I would like to get better educated about the Github tools.
I live in Emacs, so https://github.com/vermiculus/magithub, https://github.com/sigma/magit-gh-pulls and https://magit.vc/ itself are my primary tools. They are the most powerful git tools I had the pleasure to work with.
I just haven’t yet used tools that hook into it. What are some of your favorite tools?
I can highly recommend GitHub’s own hub: https://github.com/github/hub, https://hub.github.com/hub.1.html
Git via email sounds like hell to me. I’ve tried to find some articles that evangelize the practice of doing software development tasks through email, but to no avail. What is the allure of approaches like this? What does it add to just using git by itself?
I tried to collect the pros and cons in this article: https://begriffs.com/posts/2018-06-05-mailing-list-vs-github.html
I also spoke about this at length in a previous article:
While my general experience with git email is bad (it’s annoying to set up, especially in older versions, and I don’t like its interface too much), my experience interacting with projects that do this was generally good. You send a patch, you get review, you send a new, self-contained patch attached to the same thread… etc., in parallel to the rest of the project discussion. It’s a different flavour, but with a project that is used to the flow, it can really be quite pleasing.
What does it add to just using git by itself?
I think the selling point is precisely that it doesn’t add anything else. Creating a PR involves more steps and context changes than git format-patch + git send-email.
I have little experience using the mailing list flow, but when I had to do so (because the project required it) I found it very easy to use and better for code reviews.
Creating a PR involves more steps and context changes than
git format-patch + git send-email.
I’m not sure I understand. What steps are removed that would otherwise be required?
Simply, it’s “create a fork and push your changes to it”. But also consider that it’s…
In this workflow, you switched between your terminal, browser, mail client, browser, terminal, and browser before the pull request was sent.
With git send-email, it’s literally just git send-email HEAD^ to send the last commit, then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README. You can skip the second step next time by doing git config sendemail.to someone@example.org. Bonus: no proprietary software involved in the send-email workflow.
Also github pull requests involve more git machinery than is necessary. Most people, when they open a PR, choose to make a feature branch in their fork from which to send the PR, rather than sending from master. The PR exposes the sender’s local branching choices unnecessarily. Then, for each PR, github creates more refs on the remote, so you end up having lots of stuff lying around (try running git ls-remote | grep pull).
Compare that with the idea that if you want to send a code change, just mail the project a description (diff) of the change. We all must be slightly brainwashed when that doesn’t seem like the most obvious thing to do.
In fact the sender wouldn’t even have to use git at all, they could download a recent code tarball (no need to clone the whole project history), make changes and run the diff command… Might not be a great way to do things for ongoing contributions, but works for a quick fix.
Of course opening the PR is just the start of the future stymied github interactions.
In my case I tend to also perform steps:
- man git-remote to see how to point my local clone (with the changes) to my GitHub fork
- git remote commands
- man git-push to see how to send my changes to the fork rather than the original repo

To send email, you also have to have an email address. If we are doing a fair comparison, that should be noted as well. Granted, it is much more likely that someone has an email address than a GitHub account, but the wonderful thing about both is that you only have to set them up once. So for this reason, it would be a bit more fair if the list above started from step four.
Now, if I have GitHub integration in my IDE (which is not an unreasonable thing to assume), then I do not need to leave the IDE at all, and I can fork, push, and open a PR (case in point, Emacs and Magithub can do this). I can also do all of this on GitHub, never leaving my browser. I don’t have to figure out where to send an email, because it automatically sends the PR to the repo I forked from. I don’t even need to open a shell and deal with the commandline. I can do everything with shortcuts and a little bit of mousing around, in both the IDE and the browser case.
Even as someone who is familiar with the commandline, and is sufficiently savvy with e-mail (at one point I was subscribed to debian-bugs-dist AND LKML, among other things, and had no problem filtering out the few bits I needed), I’d rather work without having to send patches, using Magit + magithub instead. It’s better integrated and hides uninteresting details from me, so I can get my work done faster. It works out of the box; git send-email does not, it requires a whole lot of setup per repo.
Furthermore, with e-mail, you have to handle replies, have a firm grip on your inbox. That’s an art on its own. No such issue with GitHub.
With this in mind, the remaining benefit of git send-email is that it does not involve a proprietary platform. For a whole lot of people, that’s not an interesting property.
To send email, you also have to have an email address. If we are doing a fair comparison, that should be noted as well.
I did note this:
then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README
Magit + magithub […] works out of the box
Only if you have a GitHub account and authorize it. Which is a similar amount of setup, if not more, compared to setting up git send-email with your SMTP info.
git send-email does not, it requires a whole lot of set up per repo
You only have to put your SMTP creds in once. Then all you have to do per-repo is decide where to send the email to. How is this more work than making a GitHub fork? All of this works without installing extra software to boot.
then you’re prompted for an email address, which you can obtain from git blame or the mailing list mentioned in the README
With GitHub, I do not need to obtain any email address, or dig it out of a README. It sets things up automatically for me so I can just open a PR, and have everything filled out.
Only if you have a GitHub account and authorize it. Which is a similar amount of setup, if not more, compared to setting up git send-email with your SMTP info.
Let’s compare:
e-mail:
magithub:
The first two steps are pretty much the same, both are easily assisted by my IDE. The difference starts from step 3, because my IDE can’t figure out for me where to send the email. That’s a manual step. I can create a helper that makes it easier for me to do step 4 once I have the address, but that’s about it. For the magithub case, step 3 is SPC g h f; step 4 SPC g s p u RET; step 5 SPC g h p, then edit the cover letter, and , c (or C-c) to finish it up and send it. You can use whatever shortcuts you set up, these are mine. Nothing to figure out manually, all automated. All I have to do is invoke a shortcut, edit the cover letter (the PR’s body), and I’m done.
I can even automate the clone + fork part, and combine push changes + open PR, so it becomes:
Can’t do such automation with e-mailed patches.
I’m not counting GitHub account authorization, because that’s about the same complexity as configuring auth for my SMTP, and both have to be done only once. I’m also not counting registering a GitHub account, because that only needs to be done once, you can use it forever, for any GitHub-hosted repo, and it takes about a minute, a minuscule amount compared to doing actual development.
Again, the main difference is that for the e-mail workflow, I have to figure out the e-mail address, a process that’s longer than forking the repo and pushing my changes, and a process that can’t be automated to the point of requiring a single shortcut.
Then all you have to do per-repo is decide where to send the email to. How is this more work than making a GitHub fork?
Creating a GitHub fork is literally one shortcut, or one click in the browser. If you can’t see how that is considerably easier than digging out email addresses from free-form text, then I have nothing more to say.
And we haven’t talked about receiving comments on the email yet, or accepting patches. Oh boy.
With GitHub, I do not need to obtain any email address, or dig it out of a README. It sets things up automatically for me so I can just open a PR, and have everything filled out.
You already had to read the README to figure out how to compile it, and check if there was a style guide, and review guidelines for contribution…
Let’s compare
Note that your magithub process is the same number of steps, but none of them come with a “so I won’t have to figure it out ever again”, which in the email process actually eliminates two of your steps.
Your magithub workflow looks much more complicated, and you could use keybindings to plug into send-email as well.
Can’t do such automation with e-mailed patches
You can do this and even more!
You already had to read the README to figure out how to compile it, and check if there was a style guide, and review guidelines for contribution…
I might have read the README, or skimmed it. But not to figure out how to compile - most languages have a reasonably standardised way of doing things. If a particular project does not follow that, I will most likely just stop caring unless I really, really need to compile it for one reason or another. For style, I hope they have tooling to enforce it, or at least check it, so I don’t have to read long documents and keep it in my head. I have more important things to store there than things that should be automated.
I would likely read the contributing guidelines, but I won’t memorize it, and I certainly won’t try to remember an e-mail address. I might remember where to find it, but it will still be a manual process. Not a terribly long process, but noticeably longer than not having to do it at all.
Note that your magithub process is the same number of steps but none of them have “so I won’t have to figure it out ever again”, which on the email process actually eliminates two of your steps.
Because there’s nothing for me to figure out at all, ever (apart from what repo to clone & fork, but that’s a common step between the two workflows).
Your magithub workflow looks much more complicated
How is it more complicated? Clone, work, fork, push, open PR (or clone+fork, work, push+PR), of which all but “work” is heavily assisted. None of it requires me to look anything up, anywhere.
and you could use keybindings to plug into send-email as well.
And I do, when I’m dealing with projects that use an e-mail workflow. It’s not about shortcuts, but what can be automated, what the IDE can do instead of requiring me to do it.
You can do this and even more!
You can, if you can extract the address to send patches to automatically. You can build something that does that, but then the automation is tied to that platform, just like the PRs are tied to GitHub/GitLab/whatever.
And again, this is just about sending a patch/opening a PR. There’s so much more PRs provide than that. Some of that, you can do with e-mail. Most of it, you can build on top of e-mail. But once you build something on top of e-mail, you no longer have an e-mail workflow, you have a different platform with which you can interact via e-mail. Think issues, labels for them, reviews (with approvals, rejection, etc - all of which must be discoverable by programs reliably), new commits, rebases and whatnot… yeah, you can build all of this on top of e-mail, and provide a web UI or an API or tools or whatever to present the current state (or any prior state). But then you built a platform which requires special tooling to use to its full potential, and you’re not much better than GitHub. You might build free software, but then there’s GitLab, Gitea, Gogs and a whole lot of others which do many of these things already, and are almost as easy to use as GitHub.
I’ve worked with patches sent via e-mail quite a bit in the past. One can make it work, but it requires a lot of careful thought and setup to make it convenient. I’ll give a few examples!
With GitHub and the like, it is reasonably easy to have an overview of open pull requests without subscribing to a mailing list or browsing archives. An open PR list is much easier to glance at and get a rough idea from than a mailing list. PRs can have labels to help in figuring out what part of the repo they touch, or what state they are in. They can have CI states attached. At a glance, you get a whole lot of information. With a mailing list, you don’t have that. You can build something on top of e-mail that gives you a similar overview, but then you are not using e-mail only, and will need special tooling to process the information further (e.g., to limit open PRs to those that need a review).
With GitHub and the like, you can subscribe to issues and pull requests, and you’ll get notifications about those and those alone. With a mailing list, you rarely have that option, and must do filtering on your own, and hope that there’s a reasonable convention that allows you to do so reliably.
There’s a whole lot of other things that these tools provide over plain patches over email. Like I said before, most - if not all - of that can be built on top of e-mail, but to achieve the same level of convenience, you will end up with an API that isn’t e-mail. And then you have Yet Another Platform.
How is it more complicated? Clone, work, fork, push, open PR (or clone+fork, work, push+PR)
Because the work for the send-email approach is: clone, work, git send-email. This is fewer steps and is therefore less complicated. Not to mention that as projects become more decentralized as they move away from GitHub, the registration process doesn’t go away and starts recurring for every new forge or instance of a forge you work with.
But once you build something on top of e-mail, you no longer have an e-mail workflow, you have a different platform with which you can interact via e-mail. Think issues, labels for them, reviews (with approvals, rejection, etc - all of which must be discoverable by programs reliably), new commits, rebases and whatnot…
Yes, that’s what I’m advocating for.
But then you built a platform which requires special tooling to use to its full potential, and you’re not much better than GitHub
No, I’m proposing all of this can be done with a very similar UX on the web and be driven by email underneath.
PRs can have labels to help in figuring out what part of the repo they touch, or what state they are in. They can have CI states attached.
So let’s add that to mailing list software. I explicitly acknowledge the shortcomings of mail today and posit that we should invest in these areas rather than rebuilding from scratch without an email-based foundation. But none of the problems you bring up are problems that can’t be solved with email. They’re just problems which haven’t been solved with emails. Problems I am solving with emails. Read my article!
but then you are not using e-mail only, and will need special tooling to process the information further (e.g., to limit open PRs to those that need a review).
So what? Why is this even a little bit of a problem? What the hell?
With GitHub and the like, you can subscribe to issues and pull requests, and you’ll get notifications about those and those alone.
You can’t subscribe to issues or pull requests, you have to subscribe to both, plus new releases. Mailing lists are more flexible in this respect. There are often separate thing-announce, thing-discuss (or thing-users), and thing-dev mailing lists which you can subscribe to separately depending on what you want to hear about.
Like I said before, most - if not all - of that can be built on top of e-mail, but to achieve the same level of convenience, you will end up with an API that isn’t e-mail.
No, you won’t. That’s simply not how this works.
Look, we’re just not on the same wavelength here. I’m not going to continue diving into this ditch of meaningless argument. You keep using whatever you’re comfortable with.
Your magithub workflow looks much more complicated, and you could use keybindings to plug into send-email as well.
I just remembered a good illustration that might explain my stance a bit better. My wife, a garden engineer, was able to contribute to a few projects during Hacktoberfest (three years in a row now), with only a browser and GitHub for Windows at hand. She couldn’t have done it via e-mail, because the only way she can use her email is via her smart phone, or GMail’s web interface. She knows nothing else, and is not interested in learning anything else either, because these perfectly suit her needs. Yet, she was able to discover projects (by looking at what I contributed to, or have starred), search for TODOs or look at existing issues, fork a repo, write some documentation, and submit a PR. She could have done it all from a web browser, but I set up GitHub for Windows for her - in hindsight, I should have let her just use the browser. We’ll do that this year.
She doesn’t know how to use the command-line, has no desire, and no need to learn it. Her email handling is… something that makes me want to scream (no filters, no labels, no folders - one big, unorganized inbox), but it suits her, and as such, she has no desire to change it in any way.
She doesn’t know Emacs, or any IDE for that matter, and has no real need for them, either.
Yet, her contributions were well received, they were useful, and some are still in place today, unchanged. Why? Because GitHub made it easy for newcomers to contribute. They made it so that contributing does not require them to use anything else but GitHub. This is a pretty strong selling point for many people, that using GitHub (and similar solutions) does not affect any other tool or service they use. It’s distinct, and separate.
Not all projects have work for unskilled contributors. Why should we cater to them (who on the whole do <1% of the work) at the expense of the skilled contributors? Particularly the most senior contributors, who in practice do 90% of the work. We don’t build houses with toy hammers so that your grandma can contribute.
I’m not saying we shouldn’t make tools which accommodate everyone. I’m saying we should make tools that accommodate skilled engineers and build simpler tools on top of that. Thus, the skilled engineers are not slowed down and the greener contributors can still get work done. Then, there’s a path for newer users to become more exposed to more powerful tools and more smoothly grow into senior contributors themselves.
You need to get this point down if you want me to keep entertaining a discussion with you: you can build the same easy-to-use UX and drive it with email.
I’m not saying we shouldn’t make tools which accommodate everyone. I’m saying we should make tools that accommodate skilled engineers and build simpler tools on top of that.
I was under the impression that git + GitHub are exactly these. Git and git send-email for those who prefer that style, GitHub for those who prefer the other. The skilled engineers can use the powerful tools they have, while those with a different skillset can use GitHub. All you need is willingness to work with both.
you can build the same easy-to-use UX and drive it with email.
I’m not questioning you can build something very similar, but as long as e-mail is the only driving power behind it, there will be plenty of people who will turn to some other tool. Because filtering email is something you and I can easily do, but many can’t, or aren’t willing to. Not when there are alternatives that don’t require them to do extra work.
Mind you, I consider myself a skilled engineer, and I mainly use GitHub/GitLab APIs, because I don’t have to filter e-mail, nor parse the info in them, the API serves me data I can use in an easier manner. From an integrator point of view, this is golden. If, say, an Emacs integration starts with “Set up your email so mail with these properties are routed here”, that’s not a good user experience. And no, I don’t want to use my MUA to work with git, because magit is a much better, much more powerful tool for that, and I value my productivity.
I’m not questioning you can build something very similar, but as long as e-mail is the only driving power behind it, there will be plenty of people who will turn to some other tool.
I’m pretty sure the whole point would be that the “shiny UI” tool would not expose email to the user at all – so the “plenty of people” wouldn’t leave because they wouldn’t know the difference.
So…. pretty much GitHub/GitLab/Gitea 2.0, but with the added ability to open PRs by email (to cater to that workflow), and a much less reliable foundation?
Sure. What could possibly go wrong.
I don’t think you can count signing up for GitHub if you’re not counting signing up for email.
If you’re using hub, it’s just hub pull-request. No context switching
If you’re counting signing up for email you have to count that for GitHub, too, since they require an email address to sign up with.
Using GitHub requires pushing to a different repository and then opening the PR in the GitHub interface, which is a context change. Running git send-email would be the equivalent of opening the PR.
git-send-email is only one step, akin to opening the PR, no need to push to a remote repository. And from the comfort of your development environment. (Emacs in my case)
As I read this I thought about my experiences with Diaspora and Mastodon. Pages like this one or this one (click “Get Started”, I couldn’t do a deep link because JavaScript) are, IMHO, a big part of the reason these services don’t take off. How can an average user be expected to choose from a basically random list of nodes? How can I, a reasonably “technical” person, even be expected to do so?
So then why not host my own node? First, I don’t have time and most people I know don’t either. If I was 15 again I totally would because I had nothing better to do. I also don’t want to play tech support for a good chunk of my social network, and providing a service to someone has a tendency to make them view you as the tech support.
Second, if I do that I’m now in charge of security for my data. As terrible as Twitter and Facebook are, they’re probably still a lot better at securing my data than I am (at the very least they probably patch their systems more often than I would). Even worse, if some non-technical person decides to bite the bullet and create a node for his/her friends, how secure do you think that’s going to be?
Further, what are the odds that I, or whoever is maintaining the node, basically gets bored of it one day and kills the whole thing? Pretty damn high (maybe I and all my friends are assholes, though, so whatever).
Anyway, this post really spoke to me because I’ve been trying to escape Evil companies for awhile now and “federated” just doesn’t seem to be the answer. I now believe that centralized is here to stay, but that we should start looking at the organizations that control the data instead of the technology. For example, if Facebook were an open non-profit with a charter that legally prevented certain kinds of data “sharing” and “harvesting” maybe I wouldn’t have any problem with it.
How can an average user be expected to choose from a basically random list of nodes?
How did they choose their email provider? Not by carefully weighing the technical options, surely. They chose whatever their friends or parents used, because with working federation it doesn’t matter.
what are the odds that I, or whoever is maintaining the node, basically gets bored of it one day and kills the whole thing?
Same as what happened with many early email providers: when they died, people switched to different ones and told their friends their new addresses.
Really, all this argument of “what if federation isn’t a holy grail” is pointless because we all already use a federated system — email — and we know for a fact that it works for humans, despite all its flaws.
How did they choose their email provider? Not by carefully weighing the technical options, surely. They chose whatever their friends or parents used, because with working federation it doesn’t matter.
In contrast to mastodon instances - which are very alike - email providers have differentiated on the interface and guarantees they provide and market that. People react to that.
In contrast to mastodon instances
While this was largely true in the beginning, many Fediverse nodes now do market themselves based on default interface, additional features (e.g. running the GlitchSoc fork or something like it), or even using non-Mastodon software like Pleroma. I suspect this will only increase as additional implementations (Rustodon) and forks (#ForkTogether) take off and proliferate.
How did they choose their email provider?
I think federated apps like Mastodon are fundamentally different from email providers. Most email providers are sustainable businesses: they earn money with ads or paid plans or whatever, and have their own email servers and clients with specific features. Self-hosted email servers are a minority. Please tell me if I’m wrong, but I don’t think one can easily earn money with a Mastodon instance.
However I agree that both are federated.
You’re certainly not wrong, though I would argue that email, particularly as it was 20+ years ago when it went “mainstream”, is much simpler (for instance, it doesn’t require any long-term persistence or complicated access control) and therefore easier to federate successfully (in a way that humans can handle) than social networking.
AP style social network federation also doesn’t require long-term persistence or complicated access control.
Yeah, I listed them in my comment… “long-term persistence or complicated access control”. Admittedly I didn’t go into much detail. Email is a very simple social network, there isn’t much “meat” to it, particularly as it existed when it became popular.
email has very long term persistence, much longer than something like facebook because it’s much easier to make backups of your emails than to make backups of your facebook interactions.
i guess i don’t know what you mean by “complicated access control.”
Email is basically fire and forget. You download it to your computer and then you’ve got it forever (modern email does more, but also includes more of the privacy / data issues that come with other social networks). But most users can’t easily give other people on-demand access to their emails, which is the case with Facebook, Twitter, etc. Email is really meant for private communication (possibly with a large group, but still private), Facebook and company are for private, semi-private, and even public communication, and they require a user to be able to easily retroactively grant or retract permissions. Email doesn’t handle these other use-cases (this isn’t a fault of email, it doesn’t try to).
The ability for interested parties to interact without reply all. I can post a picture of a beautiful burrito, and people can comment or ignore at their leisure, and then reply to each other. I guess there’s some preposterous email solution where I mail out a link to an ad hoc mailing list with every update and various parties subscribe, but… meh.
something that handles a feature like that need not be email per se, but it could have a very similar design, or be built on top of email. something like what you suggested wouldn’t seem preposterous if the clients were set up to facilitate that kind of use.
In the case of Mastodon, which instance you pick does matter. Users can make posts that are only visible to others in the same instance. If you pick the “wrong” home instance, you’ll have to make another account in another instance to see the instance-private posts there. If you’re a new Mastodon user, you might not know that one instance is good for artists and another good for musicians, etc. In any case, this is a problem easily solved by adding descriptions and user-provided reviews to each instance.
Second, if I do that I’m now in charge of security for my data. As terrible as Twitter and Facebook are, they’re probably still a lot better at securing my data than I am
Putting a price tag on your data doesn’t secure it. There are enough scams and hoaxes on Facebook sharing your information with other companies that I have to disagree with you. And since those social networks collect more data than necessary, there is more data to lose.
Facebook and Twitter also present single valuable targets and are thus more likely to be targeted. A hundred mastodon instances may be individually less secure due to the operators having fewer resources or less experience, but compromising a single server won’t get you as much.
That’s a good point, although Wordpress vulnerabilities are still a big deal even though there are tons of small servers. The server might not be a monolith, but if the software is then it’s only slightly more work to attack N instances.
True, although it depends whether the vulnerabilities are in the application being served or in the web server or OS serving it.
About and contributing
This tutorial is maintained by DjangoGirls.
Following the link gets us this (emphasis mine):
Django Girls is a non-profit organization and a community that empowers and helps women to organize free, one-day programming workshops by providing tools, resources and support. We are a volunteer run organization with hundreds of people contributing to bring more amazing women into the world of technology. We are making technology more approachable by creating resources designed with empathy.
To me, it looks like they created high quality approachable documentation as material for their workshops, to further their goal of bringing women into tech. “Designed with empathy” probably means it avoids using exclusive language, anecdotes, analogies, and so on. So less “designed for female programmers” and more “doesn’t assume programmers are typically male.”
Otherwise, nothing. I don’t reckon a bunch of women were about to write the “Django Bros Tutorial.”
As others have said, the tutorial name just comes from the organisation.
It’s notable in that it is a tutorial meant for people with zero prior knowledge, possibly including people uneasy around computers.
Also, FWIW, the “Girls” name for those organisations (coming from “Rails Girls”) is widely regarded as a mistake now. The chapter I help out with (Rails Girls Berlin) has recently been renamed to “Code Curious”. Turns out that grown women don’t feel spoken to by “Girls”.
Turns out that grown women don’t feel spoken to by “Girls”.
This is a cultural thing, by which I mean there are women in the US, at least, who are older than I am (34) who wouldn’t be troubled by being called “girls” or would actively appreciate it as a sign of informality.
It’s not that people saw it as insulting or anything; we had a lot of people who just didn’t feel addressed at first contact! The number of people we found passing on the project at first contact because they thought it was for people under 18 was notable.
Interestingly, the US version (and precursor) of Rails Girls is called RailsBridge for reasons of not typecasting.
It’s a thing to write books about :D. I’m quite interested how Code Curious turns out. RG is quite a successful brand, which is lost in the process.
The argument against succinctness seems odd to me. Yes, regular expressions (the example given) are notoriously succinct, but is
a(b{3,10}|c*)d
really harder to read than
("a" (or (repeat "b" 3 10) (any "c")) "d")
or some other notation? The verbose one might be easier to read for someone unfamiliar with regex notation, but once you’ve learned regexes, it becomes a lot easier to see “the whole picture” with a succinct notation (IMHO).
It’s like saying we should write arithmetic like (to use an example that totally isn’t a real programming language ahem):
ADD 1 TO X GIVING Y
instead of
Y = 1 + X
Mathematical notation is notoriously succinct, and it has succeeded because it makes communicating mathematics much easier (yes, I’m intentionally echoing “Notation as a Tool of Thought” here). Standard mathematical notation is the world’s most common DSL, so common as to be ubiquitous.
Many of the arguments in TFA against regex notation seem to be at least partially answered by extended regex notation, which allows whitespace. To wit:
a
(
b{3,10}
|
c*
)
d
is just as readable as the s-expr above, if not more so (again IMHO), and still lets me see the trees and the forest.
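This layout isn’t hypothetical, either; Python, for one, supports it directly via the re.VERBOSE flag, which makes whitespace in the pattern insignificant:

```python
import re

# The a(b{3,10}|c*)d pattern from above, laid out with insignificant
# whitespace using re.VERBOSE.
pat = re.compile(r"""
    a
    (
        b{3,10}
      | c*
    )
    d
""", re.VERBOSE)

print(bool(pat.fullmatch("abbbd")))  # → True  (a + three b's + d)
print(bool(pat.fullmatch("ad")))     # → True  (a + zero c's + d)
print(bool(pat.fullmatch("abbd")))   # → False (only two b's, and c* can't match them)
```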
I suppose the argument comes down to ease-of-use for beginners versus ease-of-use for experienced users. Experienced users want brevity and conciseness, and beginners want code that is self-explanatory.
I think that there should be tools for converting from a DSL to the unsugared powerful syntax. As much as I love regexes, not everyone knows them, and there are lots of subtleties, complexities, and variations (is that Perl, vim, shell?).
The tricky parts of regex don’t go away with verbalising the operators.
For example: is the statement above evaluated in a greedy or non-greedy fashion?
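Indeed, greediness is orthogonal to how the operators are spelled; in Python’s terse notation, for instance, it is controlled by a separate ? suffix, and a verbose spelling would need the same distinction:

```python
import re

# {3,10} is greedy by default; {3,10}? is the lazy (non-greedy) variant.
# The question has to be answered regardless of notation.
s = "abbbbbd"
print(re.search(r"b{3,10}", s).group())   # → 'bbbbb' (takes as many as it can)
print(re.search(r"b{3,10}?", s).group())  # → 'bbb'   (stops at the minimum)
```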
The problem with regular expressions isn’t so much that they are overly succinct but that the sub-expressions typically go unnamed. E.g. we might have a regular expression for IPv4 addresses (from https://stackoverflow.com/a/5284410):
re = /\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.|$)){4}\b/
but this would be much easier to read if we wrote:
octet = /25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?/
re = /\b(#{octet})\.(#{octet})\.(#{octet})\.(#{octet})\b/
and as a side benefit it does stricter validation and correctly captures all 4 octets.
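The #{...} above is Ruby string interpolation, but the same composition works anywhere you can build the pattern from named strings; a Python sketch:

```python
import re

# Building the IPv4 matcher from a named sub-pattern, as suggested above.
octet = r"25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?"
ipv4 = re.compile(rf"\b({octet})\.({octet})\.({octet})\.({octet})\b")

m = ipv4.search("host 192.168.0.1 is up")
print(m.groups())                 # → ('192', '168', '0', '1')
print(ipv4.search("999.1.1.1"))   # → None (stricter than the original)
```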
The most important facility any language can provide IMO is the ability to give names to constructs we create.
Excellent point. Giving names and recursive ability to regexes gets you Parsing Expression Grammars, though there’s no standardized notation for them.
Once you can name subexpressions you have the question of recursion. If you support recursion this is essentially PEG. That’s why I call PEG “regex++”.
This is so important. We’re always telling that we are into making things more secure, but that Rust doesn’t make software (or memory) magically bullet-proof. unsafe makes things auditable, but it is still hard.
Love how the post also gives a good guideline for auditing.
This problem would have been avoided if Unreal Engine had fail-fast behavior in class remappings. Some discussion on Reddit. Converting errors to warnings seems popular in gamedev culture (and was popular on the web too, in the PHP era). Almost every game emits lots of creepy warnings.
Judging by the screencasts, this game reminded me of E.T. for the Atari, and even a bug-free AI would not have saved it. Hollywood movie franchise games are almost always of extremely poor quality in all aspects, not only code, but gameplay, assets and art style too.
Hollywood movie franchise games are almost always of extremely poor quality in all aspects, not only code, but gameplay, assets and art style too.
While I agree with that in general, the Alien franchise is an exception, having multiple high-quality games like Alien Trilogy, Aliens vs. Predator (not related to the film, with all races playable and an incredibly great Alien movement system), AvP 2 and Alien: Isolation (with its extremely faithful level design modelled after the first movie).
Colonial Marines was an extreme let-down, especially with such an experienced shooter studio as Gearbox behind it.
As one insignificant user of this language, please stop adding these tiny edge case syntax variations and do something about performance. But I am one small insignificant user …
This is exactly the attitude that leads to maintainer burnout.
Do realize this:
(None of this is aimed at you personally, I don’t know who you are. I’m dissecting an attitude that you’ve voiced, it’s just all too common.)
Python is not a product, and you’re not a paying customer. You don’t get to say “do this instead of that”, because none of the volunteer maintainers owes it to you to produce a language for you. Just walking by and telling people what to do with their project is at the very least impolite.
I agree with the general direction of your post, but Python is a product and it is marketed to people, through the foundation and advocacy. It’s not a commercial product (though, given the widespread industry usage, you could argue it somewhat is). It’s reasonable of users to form expectations.
Where it goes wrong is when individual users claim that this also means that they need to be consulted or their consultation will steer the project to the better. http://www.ftrain.com/wwic.html has an interesting investigation of that.
Where it goes wrong is when users claim that this also means that they need to be consulted or their consultation will steer the project to the better.
Wait, who is the product being built for, if not the user? You can say I am not a significant user, so my opinion is not important, as opposed to say Google which drove Python development for a while before they focused on other things, but as a collective, users’ opinions should matter. Otherwise, it’s just a hobby.
Sorry, I clarified the post: “individual users”. There must be a consultation process and some way of participation. RFCs or PEPs provide that.
Yet, what we regularly see is people claiming how the product would be better if we listened to them (that one person we never met). Or, alternatively, people who just don’t want to accept a loss in a long-running debate.
I don’t know if that helps clarify things; it’s a topic for huge articles.
I often find that what people end up focusing on - like this PEP - is bikeshedding. It’s what folks can have an opinion on after not enough sleep, with a zillion other things to do and not enough in-depth knowledge. Heck, I could have an opinion on it. As opposed to hard problems like performance, where I would not know where to start, much less contribute any code, but which would actually help me and, I suspect, many other folks, who are, with some sighing, migrating their code to Julia, or, like me, gnashing their teeth at the ugliness of Cython.
Yeah, it’s that kind of thing. I’ll take a harsh but well-structured opinion any time, and those people are extremely important. What annoys me is people following a tweet-sized mantra to the end, very much showing along the way that they have not looked at what is involved or who would benefit, or not knowing when to let go of a debate.
Adding syntax variations is not done at the expense of performance, different volunteers are working on what’s more interesting to them.
Regrettably, a lot of languages and ecosystems suffer greatly from the incoherence that this sort of permissive attitude creates.
Software is just as much about what gets left out as what gets put in, and just because Jane Smith and John Doe have a pet feature they are excited about doesn’t mean they should automatically be embraced when there are more important things on fire.
the incoherence that this sort of permissive attitude creates
The Haskell community would’ve just thrown PEP 572 behind {-# LANGUAGE Colonoscopy #-} and been done with it.
Sure, this doesn’t get us out of jail free with regard to incoherence, but it kicks down the problem from the language to the projects that choose to opt-in.
I find it hard to see this as a good thing. For me, it mostly highlights why Haskell is a one-implementation language… er, 2 ^ 227 languages, if ghc --supported-extensions | wc -l is to be taken literally. Of course, some of those extensions are much more popular than others, but it really slows down someone trying to learn “real world” Haskell by reading library code.
Of course, some of those extensions are much more popular than others
Yeah, this is a pretty interesting question! I threw some plots together that might help explore it, but it’s not super conclusive. As with most things here, I think a lot of this boils down to personal preference. Have a look:
https://gist.github.com/atondwal/ee869b951b5cf9b6653f7deda0b7dbd8
Yes. Exactly this. One of the things I value about Python is its syntactic clarity. It is the most decidedly un-clever programming language I’ve yet to encounter.
It is that way at the expense of performance, syntactic compactness, and probably some powerful features that could make me levitate and fly through the air unaided if I learned them, but I build infrastructure and day in, day out, Python gets me there secure in the knowledge that I can pick up anyone’s code and at the VERY LEAST understand what the language is doing 99% of the time.
I find that “people working on what interests them” as opposed to taking a systematic survey of what use cases are most needed and prioritizing those is a hard problem in software projects, and I find it curious that people think this is not a problem to be solved for open source projects that are not single writer/single user hobby projects.
Python is interesting because it forms core infrastructure for many companies, so presumably they would be working on issues related to real use cases. Projects like numpy and Cython are examples of how people see an important need (performance) and go outside the official language to get something done.
“If you want something to happen in an open source project, volunteer to do it.” is also one of those hostile attitudes that I find curious. In a company with a paid product of course that attitude won’t fly, but I suspect that if an open source project had that attitude as a default, it would gradually lose users to a more responsive one.
As an example, I want to use this response from a library author as an example of a positive response that I value. This is a library I use often for a hobby. I raised an issue and the author put it in the backlog after understanding the use case. They may not get to it immediately. They may not get to it ever based on prioritization, but they listened and put it on the list.
Oddly enough, I see this kind of decent behavior more in the smaller projects (where I would not expect it) than in the larger ones. I think the larger ones with multiple vendors contributing turn into a “pay to play” situation. I don’t know if this is the ideal of open source, but it is an understandable outcome. I do wish the hostility would decrease though.
Performance has never been a priority for Python, and this probably won’t change because, as you said, there are alternatives if you want Python’s syntax with performance. Also, its interoperability with C is okay-ish, which means the small niche of Python users doing performance-critical work not already covered by NumPy, Numba, and so on will always be free to go that extra mile to optimize their code without much trouble, compared to something like JNI.
If you want raw performance, stick to C/C++ or Rust.
I also observe the same tendency of smaller projects being more responsive, but I think the issue is scale, not “pay to play”. Big projects get many more issue reports, but their “customer service” is not proportionally larger, so I think big projects actually have fewer resources per issue.
please stop adding these tiny edge case syntax variations and do something about performance.
There’s a better forum, and approach, to raise this point.
I guess you are saying my grass roots campaign to displace “Should Python have :=” with “gradual typing leading to improved performance” as a higher priority in the Python world is failing here. I guess you are right :)
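(For context, the “:=” in question is PEP 572’s assignment expression, which landed in Python 3.8. It binds a name inside an expression, replacing the separate assign-then-test pattern:)

```python
import re

# PEP 572 assignment expression: bind and test in one step,
# instead of a separate m = ...; if m: pair.
line = "error: code 42"
if (m := re.search(r"code (\d+)", line)):
    print(m.group(1))  # → 42
```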
Have you tried Pypy? Have you tried running your code through Cython?
Have you read any of the zillion and one articles on improving your Python’s performance?
If the answer to any of these is “no” then IMO you lose the right to kvetch about Python’s performance.
And if Python really isn’t performant enough for you, why not use a language that’s closer to the metal like Rust or Go or C/C++?
Yes to all of the above. But I don’t understand where all the personal hostility is coming from. Apparently, holding the opinion that “should := be part of Python” matters much less than “let’s put our energies toward getting rid of the GIL and creating a kickass implementation that rivals C++” raises hackles. I am amused, entertained, but still puzzled at all the energy.
There was annoyance in my tone, and that’s because I’m a Python fan, and listening to people kvetch endlessly about how Python should be something it isn’t gets Ooooold when you’ve been listening to it for year upon year.
I’d argue that in order to achieve perf that rivals C++ Python would need to become something it’s not. I’d argue that if you need C++ perf you should use C++ or better Rust. Python operates at a very high level of abstraction which incurs some performance penalties. Full stop.
This is an interesting, and puzzling, attitude.
One of the fun things about Cython was watching how the generated C++ code approaches “bare metal” as you add more and more type hints. Not clear at all to me why Python can not become something like Typed Racket, or LISP with types (I forget what that is called) that elegantly sheds dynamism and gets closer to the metal the more type information it gets.
Haskell is a high level language that compiles down to very efficient code (barring laziness and thunks and so on).
Yes, I find this defense of the slowness of Python (not just you but by all commentators here) and the articulation that I, as one simple, humble user, should just shut up and go away kind of interesting.
I suspect that it is a biased sample, based on who visits this post after seeing the words “Guido van Rossum”
My hypothesis is that people who want performance are a minority among Python users. I contributed to both PyPy and Pyston. Most Python users don’t seem interested in either.
For me that has been the most insightful comment here. I guess the vast majority of users employ it as glue code for fast components, or many other things that don’t need performance. Thanks for working on pypy. Pyston I never checked out.
Not clear at all to me why Python cannot become something like Typed Racket, or LISP with types (I forget what that is called), that elegantly sheds dynamism and gets closer to the metal the more type information it gets.
Isn’t that what mypy is attempting to do? I’ve not been following Python for years now, so I really have no horse in this race. However, I will say that the number of people and domains represented in the Python community is staggering. Evolving the language while keeping everyone happy enough to continue investing in it is a pretty amazing endeavor.
I’ll also point out that Python has a process for suggesting improvements, and many of the core contributors are approachable. You might be better off expressing your (valid as far as I can see) concerns to them, but you might also approach this (if you care deeply about it) by taking on some of the work to improve performance yourself. There’s no better way to convince people that an idea is good or valid than to show them results.
Not really. Mypy’s goal is to promote type safety as a way to increase program correctness and reduce complexity in large systems.
It doesn’t benefit performance at all, as near as I can tell, at least not in its current incarnation.
Cython DOES in fact do this, but the types you hint with there are C types.
Ah, I thought maybe mypy could actually do some transformation of the code based on its understanding, but it appears to describe itself as a “linter on steroids,” implying that it only looks at your code in a separate phase before you run it.
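A minimal illustration of that point (the function name here is my own invention): CPython stores annotations as plain metadata on the function object and never enforces them at runtime, which is why mypy has to run as a separate static phase.

```python
# Type hints are metadata only: a checker like mypy would flag the
# bad call below, but the interpreter runs it without complaint.

def double(x: int) -> int:
    return x * 2

# Annotations are simply stored on the function object...
print(double.__annotations__)   # {'x': <class 'int'>, 'return': <class 'int'>}

# ...and never checked at runtime: passing a str "works", because
# str * int repeats the string instead of raising an error.
print(double("ab"))             # abab   (mypy would report an error here)
```

This is exactly the separation the comment describes: the hints carry information, but nothing in the default interpreter uses them for checking, let alone for optimization.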
Typed Racket has some ability to optimize code, but it’s not nearly as sophisticated as other statically typed languages.
Be aware that even Typed Racket still has performance and usability issues in certain use cases. The larger your codebase, the larger the chance you will run into them. The ultimate viability of gradual typing is still an open question.
In no way did I imply that you should “shut up and go away”.
What I want is for people who make comments about Python’s speed to be aware of the alternatives, understand the trade-offs, and generally be mindful of what they’re asking for.
I may have made some false assumptions in your case, and for that I apologize. I should have known that this community generally attracts people who have more going on than is the norm (and the norm is unthinking end users posting WHY MY CODE SO SLOW?).
Hey, no problem! I’m just amused at the whole tone of this set of threads, set by the original response (not yours) to my comment, lecturing me on a variety of things. I had no idea that (and can’t fathom why) my brief comment regarding the prioritization decisions of a project would be taken so personally and raise so much bile. What I’m saying is also not so controversial: big public projects have a tendency to veer into big arguments over little details while huge gaps in use cases remain. I saw this particular PEP argument as a hilarious illustration of this phenomenon in how Python is being run.
Thinking about this a little more - sometimes, when languages ‘evolve’ I feel like they forget themselves. What makes this language compelling for vast numbers of programmers? What’s the appeal?
In Python’s case, there are several, but two for sure are a super shallow learning curve, and its tendency towards ‘un-clever’ syntax.
I worry that if Python morphs into something else that’s more to your liking for performance reasons, those first two tenets will get lost in the shuffle, and Python will lose its appeal for the vast majority of us who are just fine with Python’s speed as is.
Yes, though we must also remember that as users of Python, invested in it as a user interface for our code ideas, we are resistant to any change. Languages may lose themselves, but changes are sometimes hugely for the better. And it can be hard to predict.
In Python’s 2.x period, what we now consider key features of the language, like list comprehensions, generator expressions, and generators, were “evolved” over a base language that lacked those features altogether, and conservatives in the community were doubtful they’d get much use or have much positive impact on code. Likewise for the class/type system “unification” before that. Python has taken a remarkable evolutionary approach over its long three-decade life, and will continue to do so even post-GvR. That may be his true legacy.
Heh. I think this is an example of the Lobste.rs rating system working as it should :) I posted an immoderate comment borne of an emotional response to a perfectly reasonable reply, and end up with a +1: +4 -2 troll, -1 incorrect :)
Next level: port Electron to Windows 95, run Slack on that.
Ironically, the maker of the Win95-in-Electron hack works at… Slack. https://github.com/felixrieseberg
That would be quite a hack. I doubt Electron could even be made to run on Windows 95. Once Windows 98 came out, Win95 was all but forgotten by 99% of the computing world in short order. I would guess that most programs of the pre-Win7 era that are still actually useful have roughly this level of support:
Pre-Win7 would have been Windows Vista. Nearly all programs that were being developed on Vista should have run on Windows XP. Typically you’re going to want to target the current release and at least the last major release. I think you’re correct about 98 and 95, though. Even today, compiling C++ with Visual Studio 2017, I can target Windows 7, although I think by default you only get to target Windows 10 and Windows 8.x.