I used to use go get for grabbing even non-Go repos but had to stop because of similar workflow breaks over the past few years. I finally wrote a small tool to replace it, called grab (https://github.com/jmhodges/grab), to take over my “fetch the code and organize it into a tidy directory system” need. It actually uses the nice VCS library guts that go get does!
This is not meant to be a gotcha, but should you update your own install instructions? I.e., tell people who are using Go 1.16 or later to run go install github.com/jmhodges/grab@latest rather than go get github.com/jmhodges/grab. I ask—and this is why it’s not meant to be a gotcha—because I’m new to Go, and I want to make sure I understand the new recommendations.
This is so great! A huge win. Here’s Schneier being stoked about it https://www.schneier.com/blog/archives/2020/02/firefox_enables.html
Kenneth Reitz has responded: http://journal.kennethreitz.org/entry/conspiracy
This response isn’t a denial. I think folks should notice that. He goes out of his way to diminish the author without actually denying it. He’s mad about the post, not that he’s falsely accused, because the post is true.
This response is also filled with red flags:
Saying Tom didn’t want money and then saying all future donations will be split with Tom is a weird contradiction. The quote “work on the advancement of requests” seems like a way to differentiate between maintenance (which he wasn’t really doing while others were) and “advancement” (which is whatever he’s doing). Including the news that the library will be changing its backend sounds like one of those sudden, made-up decisions people make to try to make their accuser seem unqualified. How interesting the timing on that! Talking about the small set of “real collaborators” in a way that excludes someone he explicitly says he was collaborating with is gaslight-y. And “just don’t fucking work with me” has such a long history of being said by people who really did awful things and don’t want to admit it.
> Saying Tom didn’t want money and then saying all future donations will be split with Tom is a weird contradiction
But Tom is not njs.
> Including the news that the library will be changing its backend sounds like one of those sudden, made-up decisions people make to try to make their accuser seem unqualified.
You make some good points. In terms of timing, I feel like this was mentioned ahead of PyCon on an episode of Talk Python, but I was only half-listening to that the first time.
> All that being said, I’m not sure why this person feels the need to attack my character, including curating a list of quotes (what?) from “collaborators”.
I’d just like to point out that Kenneth has lists of quotes…about himself…on his website.
> Kenneth has lists of quotes…about himself…on his website.
While I’m in no way defending Kenneth or his actions, mocking someone for stating their opinions (in quote or any form) on their own website is not in the spirit of engineering or science. If you feel the need to be petty, please find another place to dunk on people.
I was highlighting the irony of the journal entry expressing incredulity about nj’s inclusion of a list of quotes from collaborators, as KR knows all about including quotes from “collaborators” (or sycophants, everyone can make up their own mind).
As far as “dunking on people”, I’d suggest that you are the one who is attempting to do so, with your virtue signalling and calling me petty.
It always amazes me (and scares me) how different people perceive reality (if that is even an achievable thing) and how the same situation can be read completely differently by two different brains. It is super scary to me. In this case I believe neither of them had anything malicious going on, and still, both of them have a completely different grasp of the situation.
Judging by the reception on reddit’s /r/python, it’s not that balanced: https://www.reddit.com/r/Python/comments/bklroc/why_im_not_collaborating_with_kenneth_reitz/ https://www.reddit.com/r/Python/comments/bku40b/njss_blog_post_kenneth_reitzs_journal/
This is why I think the original article was bad form. I know neither of the people involved. I’ve never even heard of them. I wouldn’t know who to believe even if I knew them.
Tag this one as “call out culture.” If there’s something to be done, it should probably be done within that community and with discretion, precisely because there are two sides to every story and people are biased toward the first/best expositor regardless of whatever actually happened.
I think it would be great if the mods banned personal call out articles on this basis. And, again, I know neither of these people. I’m not in the Python community.
> it should probably be done within that community and with discretion, precisely because there are two sides to every story and people are biased toward the first, best expositor.
I don’t disagree in general, but how do you do that in the context of an open source community? There is no real central authority, and people can essentially just do what they want.
I actually can contextualize Nathaniel’s post with my own interactions with Reitz (which were a lot less involved), and they verify my impression.
So I am glad Nathaniel posted this. It helps me steer clear of unproductive conflicts in the future.
A few helpful and engaged members of the python community have signaled that it matches some of their observations.
If you keep such things private and secretive, it is hard to go through with community actions (like removing someone from boards, etc.). If you make it public discourse, people complain about character assassination or whatever. At the end of the day I believe in a victim’s right to discuss their case publicly if they want to.
I do think that’s true, but I think that one of the author’s central points, and part of the reason I posted this, is that it’s important to be aware that when money is involved there is a whole different level of accountability that comes into play.
This is why the legal system exists. This is why scrupulously detailed contracts arbitrated by lawyers exist.
Moreover, this is why foundations like the PSF exist - they handle the ‘dirty’ work of distributing money in a way that’s free of legal entanglement and less likely to engender this kind of misunderstanding.
This is a system and choice of jargon designed to hide that it is controlled by central servers. It’s also run by fascists. (You can see the design goals coming in line with their politics, yeah?)
I don’t think we need this on lobsters.
The central servers bit I haven’t really seen before…how do you figure?
Also, there are plenty of technical reasons to pan this without getting into name calling. Let’s try to do better than outrage mobs when criticizing things on the site.
Can you please expound, or link to a blog post that does so to (something resembling) your satisfaction?
This is great news and a much needed improvement. We’ve lost too many women, recently even, from the Go community because of the prior versions.
Just envision how complex it would be to deploy to mesos/kubernetes without the container abstraction…
Nice and complete article!
I was at Twitter when we did the first production deployment of mesos. It did not use “containers” as they are understood in the docker sense. We shipped binaries, there was a shared filesystem you could see if you logged into one of the machines your service was running on, and mesos set up service discovery to connect the local ports you were randomly assigned at boot time in an (iirc) env variable.
For mesos, and, I believe, borg before it, containers came after to make things easier.
> containers came after to make things easier.

That’s the whole point of it! Obviously you can do without, but like I said, imagine how complex it would be!
I’m not speaking about the configuration details but the whole concept. Can someone point me to a technology that correctly and easily describes the runtime conventions such as port binding and volume mounting, but at the same time enables extensibility and security?
Sure, many tools exist to do things here and there, but none of them offers a coherent solution to build on top of.
Sure, containers enabled this paradigm. But it’s not as if networking, data storage/access, scaling, and security didn’t exist before containers. They were just handled differently with different technologies.
Does kubernetes make this easier/better? Probably depends who you ask. And for my take, you still have to run kubernetes on something. That something still has the same needs/requirements as always. (This is where you introduce the cloud layer, or someone else’s computer.)
> Does kubernetes make this easier/better?
Similarly to how containers apply the benefits you can easily get from the Maven model to any runtime, I’ve always thought that kubernetes was just Greenspun’s Tenth Rule, but about the benefits of Erlang.
Honestly, I think it wouldn’t be too bad.
Artifacts would be shipped around as e.g. .zip files downloaded from HTTP servers, instead of containers pulled from container registries. Those artifacts would be tagged with metadata to indicate which runtimes they require, and nodes in the cluster would be tagged to indicate which runtimes they provide; the scheduler would schedule jobs based on those constraints. Kubernetes, at least, already supports this. On balance, I think dropping containers for this aspect of things would actually make the overall system simpler.
Resource limits and namespaces would need to leverage underlying OS primitives directly, rather than going through the container abstraction. Probably a proto-container spec (like OCI?) would arise from these requirements naturally. This is where things would get more complex.
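As a toy sketch of the tag-matching idea above: jobs declare the runtimes they require, nodes declare the runtimes they provide, and the scheduler matches on those constraints. All names and labels here are invented for illustration; Kubernetes does the real version with node labels and nodeSelector.

```python
# Toy sketch of runtime-tag scheduling without containers: jobs declare
# which runtimes they need, nodes declare which they provide, and the
# scheduler matches on those constraints.

def schedule(required, nodes):
    """Return the first node whose provided runtimes cover the job's needs."""
    for name, provided in nodes.items():
        if required <= provided:  # set inclusion: every requirement is provided
            return name
    return None

nodes = {
    "node-a": {"python3.11"},
    "node-b": {"python3.11", "jre17"},
}

# The job ships as a plain artifact (say, a .zip over HTTP) plus metadata.
job = {"artifact": "https://artifacts.example/app.zip", "requires": {"jre17"}}
print(schedule(job["requires"], nodes))  # node-b
```

A real scheduler would also weigh resource limits, but the constraint-matching core really is this small.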
In terms of a framework that would allow you to programmatically deploy (virtual) machines, get applications installed, then configure and start them, it’s basically what I built at my last job. I didn’t get onto any advanced features like automatic load distribution or automatic machine setup when the environment needs more resources, but those were completely possible on the back of the basic features, along with the fact that the system did set up distributed (peer-to-peer and mastered) services together with their own encrypted virtual networks.
I don’t know what other advanced features are possible in Kubernetes/Mesos that wouldn’t work without containers but AFAIK the security isolation is still better for VMs than containers and the networking should be easier since it’s simplified by not having a machine effectively be a router for the containers.
It would be very neat if the author included links to libraries that implemented this model of regex.
The author’s library RE2 uses this technique.
Rust’s implementation is also based on RE2: https://doc.rust-lang.org/regex/regex/index.html
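For a feel of the technique, here’s a minimal sketch of the state-set NFA simulation that the RE2 lineage is built on, restricted to a tiny invented subset (literal characters, “.”, and “*”, full anchored match). Real engines compile a proper NFA; this just tracks the set of live pattern positions, which is what makes matching linear in the input with no backtracking.

```python
# Toy state-set simulation of Thompson-style NFA matching: track every
# live pattern position at once, so matching is linear in the input.
# Supports only literals, '.', and '*'; matches the whole string.

def tokenize(pat):
    toks, i = [], 0
    while i < len(pat):
        star = i + 1 < len(pat) and pat[i + 1] == "*"
        toks.append((pat[i], star))
        i += 2 if star else 1
    return toks

def closure(states, toks):
    # A starred token may match zero times, so its state can skip ahead.
    out, stack = set(states), list(states)
    while stack:
        s = stack.pop()
        if s < len(toks) and toks[s][1] and s + 1 not in out:
            out.add(s + 1)
            stack.append(s + 1)
    return out

def match(pat, text):
    toks = tokenize(pat)
    states = closure({0}, toks)
    for ch in text:
        nxt = set()
        for s in states:
            if s < len(toks) and toks[s][0] in (ch, "."):
                nxt.add(s if toks[s][1] else s + 1)  # '*' can keep consuming
        states = closure(nxt, toks)
        if not states:
            return False
    return len(toks) in states

print(match("a*b", "aaab"))      # True
print(match("a*a*a*b", "aaaa"))  # False, and still linear time
```

The second example is the kind of pattern that sends a backtracking engine into exponential time; the state-set approach handles it in one left-to-right pass.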
JWZ’s angry dickhead schtick is pretty funny when he’s right. I recognized those as buttons immediately. I bet lots of people did, honestly. But what I didn’t recognize is what clicking them would do.
Do they add just the corresponding tickets in the row you click? Then why do you have all the possible ticket options prepopulated with a 1?
Do they add all the tickets to your cart? Why not just have one “order” button?
I had a guess but I had to pull up the ordering page to see if I was right. I will say it’s not nearly as confusing if you pull up a show that doesn’t have multiple ordering tiers - but this looks like another JWZ “I’m right and everyone else is a moron” special.
Honestly, I stared at that screenshot for a good 20 seconds before I realized that those were buttons at all…
I couldn’t see it from the screenshot either so I clicked on the DNA Lounge shop page and figured it out, but still I think the UI is pretty awful.
Making fun of people for not being able to use your shitty UI is a rather dickish thing, in my opinion.
So that’s the part I find most interesting. jwz types <button> into his html and the user agent renders something which apparently is not recognizable as a button. Is jwz responsible for that?
If I ran across that form, and given the overall styling and color scheme, I could see myself thinking that the person who made that page didn’t know what they were doing, and either styled labels to look like buttons, used button-looking images as kinda-labels, or had somehow confused buttons with labels… because why would there be so many buttons?
Regardless, I would still have tried to click on them anyway, just to see what happened.
From the captioning it is not obvious what these buttons do, and that’s something jwz could certainly do something about. But instead of improving it, better to make fun of your potential customers, I guess?
If it’s not jwz’s fault, but the user agent’s, is the user responsible?
I would love the story if jwz would have just stated his amazement without insulting the client.
Does it really matter if you don’t know what the button does? It’s obvious you can’t buy tickets by not clicking a button. I think the Sherlock Holmes principle applies. When all other options are eliminated, the one that’s left is correct.
So there’s some confusion over which button to click. But why would anyone click the button for any tier other than the one they want? Again, you’ve got five buttons, four of which cannot possibly be the right one, so what does that suggest about the remaining button?
It’s not exactly how I’d design the form (I’d at least add the word “buy” to the label), but it’s actually pretty good at reducing friction. Lots of people probably only want one ticket, so it’s all filled in and ready for them to click a single button.
> Does it really matter if you don’t know what the button does? It’s obvious you can’t buy tickets by not clicking a button. I think the Sherlock Holmes principle applies. When all other options are eliminated, the one that’s left is correct.
I think this kind of reasoning is why people hate programmers.
You need to appreciate that people still see this when they look at a computer. Reading takes some time, building a mental model takes time, and all this effort builds some fatigue that if too great will cause the user to fall back on panic. They know it won’t delete all their files, but puzzling things is hard work for someone who doesn’t do it every day. They might abort and leave. Sometimes they might ask for help and you can write a funny blog post about how stupid your users are.
Honestly, programmers are so mad sometimes. You want “good clean specifications” that cover every detail unambiguously, but you are also quick to defend interfaces like this that are an 11 on the autism scale.
Here’s a good trick: Just add a comment. Try to document the process, the whole process. If writing your comment is tortured or difficult, it is probably a bad interface:
This is pretty hard, so we know it is a bad interface. We might be able to fix it by inventing stories. This requires empathy, so sometimes spitballing with a friend can help: “How do I order two of each type?” Why would anyone want that? Certainly people will want to be with their group! “Maybe I’m ordering for a friend?” Hmm…
Or just look at Amazon. Amazon sells tonnes of stuff, so copying their interface is probably the smarter thing to do. Especially if you don’t want to think about it much. Also: People know how to use Amazon.
> Lots of people probably only want one ticket, so it’s all filled in and ready for them to click a single button.
You don’t know what lots of people will do unless you watch them: You can’t even ask them, because users lie. I think Knuth had something to say about optimising for some behaviour when you don’t even know what’s going on yet.
As somebody that’s read a bunch of password-hashing threads on HN, here’s my understanding:
PBKDF2 is acceptable, but bcrypt is slightly better designed, so prefer bcrypt if it’s available. scrypt is better still, Argon2 might be better again but perhaps wait a while to see how it goes.
The article says “Scrypt requires about 1000 times the memory as bcrypt for the same security against GPU attacks” which makes it sound like bcrypt is secure against GPU attacks while scrypt has to eat a lot of memory to catch up. As I understand it, bcrypt lets you crank up the computation cost of cracking a password, but in this modern age when GPUs contain thousands of independent computation units, computation isn’t the scarce resource it once was. On the other hand, scrypt lets you crank up the computation cost and the memory cost, and memory is still scarce even on GPUs and custom-made password-cracking chips (like bitcoin miners), so scrypt lets you trade off space and time to match your use-case.
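To make the memory knob concrete: with Python’s standard-library hashlib.scrypt (available when Python is built against OpenSSL 1.1+), memory use is roughly 128 * n * r bytes per guess, so raising n directly raises the attacker’s per-guess memory bill. The parameter values below are illustrative, not a tuning recommendation.

```python
import hashlib, os

# scrypt's memory use is roughly 128 * n * r bytes per hash, so the cost
# parameters directly set how much RAM each cracking attempt needs.
salt = os.urandom(16)
password = b"correct horse battery staple"

# n=2**14, r=8 -> about 16 MiB of memory per guess.
# maxmem is raised explicitly because OpenSSL's default cap is low.
key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, maxmem=2**26)
print(len(key))  # derived key is 64 bytes by default
```

Doubling n doubles both the time and the memory per guess, which is exactly the space/time trade-off the parent comment describes.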
One thing that these articles rarely mention: Let’s say your website is configured to use a password hasher configured to spend 250ms hashing. A user registers, you hash their password, and store it in your database. Eighteen months later, attackers can now get computers twice as fast, so it only takes them 125ms to hash the user’s password. Your password hashing is half as strong as it was, even though you haven’t changed anything. The answer: Whenever anybody logs in successfully, you (by definition) have a plain-text copy of the correct password, so rehash it with the current security settings. That way, whenever anybody logs in, their password storage is automatically upgraded.
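A minimal sketch of that upgrade-on-login idea, using stdlib PBKDF2 since bcrypt isn’t in the standard library; the storage format and iteration counts here are made up for illustration, not a production scheme.

```python
import hashlib, hmac, os

CURRENT_ITERATIONS = 600_000  # bump this as hardware gets faster

def hash_password(password, iterations=None):
    """Hash with a fresh random salt; record format is invented for this sketch."""
    iterations = iterations or CURRENT_ITERATIONS
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${dk.hex()}"

def verify_and_upgrade(password, stored):
    """Return (ok, new_record). new_record is a rehash at the current
    work factor when the stored record is stale, else None."""
    _, iters, salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), int(iters))
    ok = hmac.compare_digest(dk, bytes.fromhex(dk_hex))
    needs_upgrade = ok and int(iters) < CURRENT_ITERATIONS
    return ok, hash_password(password) if needs_upgrade else None
```

On each successful login the caller checks the second return value and, if it isn’t None, writes the fresh record back to the database, so password storage strengthens itself over time with no migration step.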
Eh, scrypt is way over-hyped, bcrypt is better:
Also, bcrypt is much better than PBKDF2. You have to crank up PBKDF2 to zillions of iterations to even get close. PBKDF2 and scrypt are both more work to manage. In my opinion, crypto should be as simple and foolproof as possible.
I wouldn’t say it’s bad or wrong to use scrypt, but you have to be more careful and think more about that decision. People should have a certain amount of knowledge to use scrypt, and be able to vet the implementation they use, at least trivially so. For Joe Shmoe web programmer who just wants to authenticate users, I always recommend bcrypt, since it’s so hard to screw up.
TL;DR: use bcrypt, it Just Works.
> Whenever anybody logs in successfully, you (by definition) have a plain-text copy of the correct password, so rehash it with the current security settings
I have never done authentication before, but I thought part of the trick was you did some Diffie-Hellman thing with the JS so you only sent the hashed version of the data, so the password never went plaintext anywhere. Is that not true? I guess that screws over everyone that disables JS…
You send the plaintext password over a secure channel (like TLS), which is fine when you’ve authenticated that the remote server is the one you care about. If it isn’t, then even sending the hashed password wouldn’t be a significant win, since that would essentially become the new credential.
Generally, if you send a salted, hashed version of the password over the wire (say, with HTTP’s Digest Authentication, or some JS-based moral equivalent), you need a plaintext copy of the password on the server to compare it to, which is worse than sending the user’s password over an HTTPS connection.
If you sniff the hashed password as it travels over the wire, you can use it to log in as that user in future, so you’re still sending an authentication token in the clear, even if it doesn’t happen to be exactly what the user typed in.
As @kivikakk suggested, this isn’t generally done, though I have seen banks and other high profile entities attempt it. There’s the fundamental problem that you can’t really trust your web browser, though.
> Whenever anybody logs in successfully, you (by definition) have a plain-text copy of the correct password, so rehash it with the current security settings
Changing the auth mechanism every 2-3 years is not a menial task, and can leave users unable to log in if you hit a bug. Would you actually recommend this or is it just a theoretical measure?
Changing the auth mechanism entirely would be a big change, yes.
The idea is that in your application’s config file you just have a single “work factor” setting, and you use it when hashing new users’ passwords and re-hashing existing users’ passwords. Since there’s absolutely no code change involved, it should be a very menial task to update.
Also, grandparent poster, note that bcrypt hashes include the work factor (“cost”, in bcrypt parlance) in the hash itself, so changing that global setting doesn’t break comparisons against already-hashed passwords.
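To make that concrete: a bcrypt hash string looks like $2b$12$&lt;salt+digest&gt;, with the cost factor right in the string. This sketch just parses that field so you can tell whether a stored hash needs rehashing at the current work factor; the sample value below is a placeholder in the right shape, not a real digest.

```python
# bcrypt's modular-crypt format embeds the cost: "$2b$12$<22-char salt><31-char digest>".
# Parsing it lets you detect stale hashes without any extra database column.

def bcrypt_cost(stored_hash):
    parts = stored_hash.split("$")  # ['', '2b', '12', '<salt+digest>']
    return int(parts[2])

def needs_rehash(stored_hash, current_cost):
    return bcrypt_cost(stored_hash) < current_cost

sample = "$2b$12$" + "x" * 53  # placeholder, not a real bcrypt digest
print(bcrypt_cost(sample))       # 12
print(needs_rehash(sample, 13))  # True
```

Real bcrypt libraries typically expose this directly (e.g. a checkpw plus a cost accessor), but the point stands: the comparison keeps working across cost changes because each hash carries its own cost.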
This was a really good response from Eric Mill: https://cabforum.org/pipermail/public/2015-December/006435.html
This comment section makes me sad about the state of lobste.rs. Lots of terrible anti-empathy hot takes, and I’m so not here for that.
This thread is filled with the exact same privileged nonsense that ruins communities for anyone who isn’t from its background or confirming its myopic worldview. Congrats on your general inability to see why hundreds of years of unjust treatment might leave folks with a distaste for certain words. I’m disappointed because I really wanted this place to be better than that. There are plenty of other places folks can find to have those terrible opinions and I believe we need better moderation.
Downvoting this comment of mine because of that is totally on-brand for that type of myopia. I do wish we had a place for meta discussion that was not in the main threads, as MetaTalk is for MetaFilter.
Speaking only for myself, the reason you were downvoted was because you failed to elaborate on why this made you sad. You’ve since tried to add elaboration (which, if done in an edit, would’ve been enough to at least remove my downvote), so thank you for that.
As for that elaboration, you’re conflating the positions of “why would anybody take offense at this word choice” and “this word choice is correct and it doesn’t matter if it offends some”. The former position is both ignorant and (more crucially) not represented so far in the comments here, the latter is technically accurate at the expense of feelings.
And again, please stop characterizing lobsters in such broad strokes–there are any number of users who would gladly take a different position, and would do so with gusto, and with whom you’d probably agree. That they haven’t done so yet is probably because they have better things to do on a Sunday than argue on the Internet.
These aren’t “terrible opinions”, they don’t need to be swept away and moderated (censored) out: they’re just different value judgements. As an exercise, try to restate my position.
Here, if I may restate your position on this topic (which you haven’t explicitly listed out, so I am at somewhat of a disadvantage):
“I, jmhodges, believe that history has shown slavery to be distasteful, and furthermore that references to the same can cause distress or at least distract from the code at hand for current and potential community members. I believe that this pull request was The Right Thing To Do, because it cleared up some language that at least one person found bothersome. I additionally believe that any claims that this language is valid are superseded by the requirement that software source should be as inclusive as possible. I believe inclusivity is important because I believe that it is important to have diverse viewpoints when designing and implementing software, and because I believe that software projects who do not put effort towards this sort of inclusivity are bound to eventually stagnate in talent and to lose members until they dwindle away.”
Would you say that the above is a fair wording of your position?
“hundreds of years of unjust treatment” that was made illegal in the US 160 years ago and was never legal in many other countries, e.g. UK
You are being obtuse–there is a great deal of documented quasi-legal or outright legal mistreatment of minorities in the United States. Unfair and unreasonable discrimination absolutely exists, everywhere, and claiming “but we made it illegal so how can it?” shows a lack of imagination in thinking how to trivially circumvent such rules.
If the existence of a few people with whom you disagree is enough to cause you to paint all of the community the same way, you may be falling into the same problematic thinking patterns as a lot of the folks I think you may disagree with politically.
Just a point of reflection:
“the rhetorical style is incorrect” is something that can be addressed in a reply to the comment, and presumably over time the poster fixes their style. If they continue regardless, they’re a troll, and we’ve already got a flag for that.
I’m a bit skeptical about having a “this is mean” downvote button, because it tends to conflate “this post is incorrect but fine” with “this post is correct but hurts somebody’s feelings” with “this post is crafted malicious garbage”.
I think it might be nice to have a way of flagging mean without causing the actual number of votes to decrease. Thus, we can decouple the idea of “this post is poor in rhetorical style” from “this post is poor in quality”.
As proof: a mean post by, say, Torvalds or Drepper or de Raadt may be downvoted to oblivion because it’s “mean”, but still have the best technical content in the thread!
There is an unstated assumption here: that it’s more important to preserve 100% of posts with high-quality technical content than to maintain a friendly environment. I disagree with this assumption. I am happy to miss the occasional informative-but-jerky comment if it means that the lobste.rs community remains an inviting place for people to talk in good faith. That seems a very small price to pay to me. I already have a reading list a mile long. I will never be able to read everything interesting that I want to read before I die. So who cares if I miss a lobste.rs comment that might have had some value hidden beneath the bile?
Agreed 100%, and this is a point that could stand to be added to every single discussion of discussion in a technical context.
We’ll have to agree to disagree. :)
I tend to be concerned with what I’ve observed to be the endgame of these noble intentions.
I agree – “mean” can be objectively deserved in some cases; I’m thinking more about the case of “unreasonably mean”, or “excessively mean”. All of that’s in the eye of the beholder obviously. What I was thinking about was a kind of safety valve – giving people the ability to downvote a jerkish comment without starting a flamewar and clogging threads with commentary about acceptability of tone every time it happens.
I’m a fan of adding “excessively mean” as a downvote option. It captures the concept of “unsportsmanlike conduct” while being appropriate for a website.
I think the safety valve idea has merit. People WANT to give feedback that they felt someone was mean – but currently don’t have a place to do so. That said, I agree with angersock that it doesn’t feel like something that should go into the score. Leave the scores as is, I honestly WANT to see brilliant mean comments. Mean is sometimes an appropriate and reasonable way to communicate.
It almost feels like this would be a new feature if implemented: tone feedback. Something that the author of the comment (even if highly upvoted) could always see, and it could go well beyond “mean” – because different people dislike different things: “arrogant”, “bullying”, “condescending”, “demeaning”, … I can’t think of one with E, so I will stop there. Not sure I want such a feature, but it is interesting to think about whether it would change poster (and therefore community) behavior at all – if on an upvoted post (positive feedback) they got a bunch of “arrogant” tone feedback, would that possibly cause a behavior change?
> I honestly WANT to see brilliant mean comments. Mean is sometimes an appropriate and reasonable way to communicate.
I definitely don’t want to see such comments, and I don’t think it’s an appropriate way to communicate, at least on lobste.rs. I’m happy if someone corrects me if I posted something incorrect. Why should they yell at me, call me names, tell me I’m so stupid I should commit suicide, or otherwise try to do their best imitation of a 1980s Usenet discussion? How does that improve anything? I don’t want those kinds of comments here, and I think they should be marked “flame” and the score downranked accordingly.
Do you have examples of comments on lobste.rs that were egregiously mean to the person they were replying to, and yet “brilliant” and a good example of the kinds of comment we want in the community?
While certainly one can write blatantly uncivil comments (“you should commit suicide”), there are a lot of ways of phrasing something civilly and “meanly” that are both entertaining and informative to read…and crucially, which can contain the additional emotional content that would be lost in a more formal expression.
The sort of practical issue that often occurs is that the threshold for what’s mean or not invariably gets stricter and stricter. Humor is usually the first casualty, because most human humor is rooted in the misfortune of others.
This can still be a tolerable state of affairs, but as we loosen our standards and let in non-pure-tech articles (say, news or politics or whatever), the conversations start to become less and less fact-based and more and more rooted in feelings. Eventually, there is neither the technical content nor the entertainment value, and the site loses its initial members.
I agree there’s a risk of broadening the focus so much that it becomes “geeks chat about random stuff”, which tends to not be always very high-quality discussion (arguably a popular web forum run by a certain startup incubator fills that role). I don’t see that as that strongly tied to the civility thing, though. In principle, it could be related: non-mainstream community norms can help keep a community from being diluted, by serving as a kind of in-group mechanism. But it’s really tricky to get right, and you can easily drive away a lot of interesting technical people and end up with less good discussions, because they also don’t feel part of the in-group (or just don’t like it). It can become really tedious when you have a number of people who are very abrasive and always flaming, and people can lose patience for it and go elsewhere. The old comp.lang.lisp was in that category for me, bad enough that I think it served as an active negative for the overall CL community.
I have never seen anything approaching what you just mentioned on lobste.rs – but maybe I just missed it or it was already downvoted to oblivion for being troll (telling people to commit suicide, WTF, is this League of Legends?). Are people seriously telling each other to commit suicide on here?
I have seen people be “mean”, which is generally saying stuff like “I find you arrogant and condescending” – which is a bit “mean” – but is also completely sincere. There often is a nice way to say something, but sometimes there isn’t, and “mean” is appropriate. When I think of “mean” in terms of lobste.rs – I think of Linus posting biting comments on LKML.
Ah sorry, I’m getting current-situation and hypothetical-norms parts of the discussion kind of muddled in my comment. You’re right, lobste.rs doesn’t have anything like that, which is also why I don’t feel the need to use the “troll” downvote much here. The “commit suicide” example was a kind of oblique reference to some Linus Torvalds mailing-list posts, which some people seem to think is an ok style of discussion, but which I would rather not see here.
If a Linus post got downvoted because of its antisocial style, that would probably be valuable feedback to the author on how to be taken more seriously in the future.
If they’re truly a Linus, we have empirical proof that their methods work and work well enough. Unless you have some genuine concern that Linus Torvalds isn’t being taken seriously?
We also have empirical proof that it drives people away.
Discussing the symptoms is not very interesting; “what would Linux be with a more approachable maintainer” is a more interesting question.
It might be more interesting, but it’s also quite hypothetical.
We can ask the question “Can a massively influential and beneficial software project be run by a cranky git and with a culture of meanness and still be successful?”, and see (because history) that the answer is yes.
We can’t ask the stronger question “Is Linux successful because of its abrasive leadership?” because of numerous confounding factors…but we similarly can’t assume that the nicer approach is automatically valid.
We can’t claim that the project is worse off (or better off) for the people it drove away.
It might be more interesting, but it’s also quite hypothetical.
Sure, it is. And that makes it the more interesting one.
There’s an ample number of projects that do well with very nice leadership, so we do have material to talk about.
Eh, that seems like way too many distinctions to me, especially in a smallish community that I’d like to enjoy participating in. A good post participates in the community constructively, a bad one doesn’t. There are a number of ways of being bad, and being a huge jerk who’s needlessly insulting your fellow community members ranks pretty highly there for me, maybe at the very top.
“Never attribute to malice that which can be adequately explained by a relaxed consistency model.” – https://twitter.com/cdaylward/status/655959816335069184
The word “malice”, I think, is inappropriate here. Few people censor with malice, and I doubt Twitter censors with malice.
We do know:
So, there’s plenty of reasons to think that some non-malicious (“for the sake of national security”-type fallacious reasoning) censorship is going on.
I agree that we shouldn’t assume we know everything about the situation, but it’s enough to raise serious questions.
Appelbaum is certainly a plausible target, though we need to be aware of the sample bias because of course people are going to notice issues with his tweets that they wouldn’t with somebody who isn’t politically at odds with the US.
I do have to say that I find it unlikely Twitter’s consistency model could delay a tweet by four hours (which is how long it had been when I checked), given that their entire platform is about global communication, and that it manages to get millions of tweets across the Atlantic within a couple minutes at the outside. And given that the rest of Appelbaum’s timeline had propagated fine.
I worked at Twitter, and, knowing what I know about how timelines are stored, it’s totally plausible. The consistency guarantees between the Redis replicas are not strong. Could it be something else? Sure, but the plausibility is in favor of the replica missing a tweet.
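As a toy illustration (purely hypothetical Python, nothing to do with Twitter’s actual timeline code), here’s how a weak fan-out to a replica can silently drop one write while earlier and later writes propagate fine:

```python
import random

class Primary:
    """A primary store that fans writes out to replicas without acks."""

    def __init__(self):
        self.tweets = []
        self.replicas = []

    def post(self, tweet, drop_chance=0.0):
        self.tweets.append(tweet)
        # Fire-and-forget replication: with weak consistency, an
        # individual replication message can be lost or delayed.
        for replica in self.replicas:
            if random.random() >= drop_chance:
                replica.append(tweet)

primary = Primary()
replica = []
primary.replicas.append(replica)

primary.post("tweet 1")
primary.post("tweet 2", drop_chance=1.0)  # this replication message is lost
primary.post("tweet 3")

print(primary.tweets)  # ['tweet 1', 'tweet 2', 'tweet 3']
print(replica)         # ['tweet 1', 'tweet 3'] -- one tweet silently missing
```

Anyone reading the replica has no way to tell a tweet is missing; the timeline simply looks complete.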
Please don’t violate any NDAs or anything, I don’t think the situation warrants that risk - but it’s useful to hear that.
The consistency guarantees between the Redis replicas are not strong. Could it be something else? Sure, but the plausibility is in favor of the replica missing a tweet.
Here’s what I know:
These events fit the model of “list-based censorship” (as opposed to geo-based censorship), where a tweet is visible when it’s initially posted, someone notices and for whatever reason it gets shadow-banned (perhaps a bunch of bots click “Report”). People who are on a list of some sort are not able to see the tweet. The additional censoring of seemingly irrelevant tweets along with important tweets makes the case for “bug” look stronger than it is.
How would those facts fit the Redis replica notion?
Very interesting. I also backed the typed clojure project back in the day. We made a similar judgment call a couple of years ago when figuring out where to go from Clojure, which was giving us a bunch of headaches, despite being a very fun language to program in. We were already successfully using Schema at the time, and core.typed didn’t feel sufficient for our needs. A year and a half later we’re all pretty happy with our decision to fully transition to the Haskell master race.
Yo, the use of “master race” here is extremely offensive. Please consider your words more carefully.
Using “master race” like this is deliberately tongue-in-cheek. It may have started with the PC master race.
Yes, that’s the problem. It is not okay to use a term meant to justify mass murder in a way that makes it seem trivial and unimportant.
Interesting to see some history on Wikipedia. I’d only heard it on Reddit, which hosts a big white supremacy community, so I didn’t realize it was supposed to be tongue-in-cheek. I guess that’s the problem with “ironic racism”: if the listener doesn’t already expect otherwise from you, you just look racist.
a big white supremacy community,
For more context on this, https://www.splcenter.org/hatewatch/2015/03/11/most-violently-racist-internet-content-isnt-stormfront-or-vnn-anymore
Reddit has since banned some of these subreddits, but the users still stick around, and regularly ‘market’ themselves on the defaults, coordinated over IRC.
When you say ‘here’ are you meaning where you are, or on this particular website?
I have checked with a few people nearby at my University’s CS department (who know about the “PC Master Race” thing - which I had never heard of), and they do not see it as anything warranting offense, because it is clear it has nothing to really do with white supremacy.
I read it as “here in your post”.
Also, this thread is where I learned anyone thinks “PC Master Race” is not a racist thing. Given where it was popularized and the fact that, uh, I don’t know how better to say it, that it’s a callback to awful super-racist evil shit, it’s not at all clear to me that it has nothing to do with white supremacy. Seriously. It’s just fucked up.
I don’t like the phrase, and find it pretty weird, but I hadn’t considered its main use overtly racist. Closer to a tone-deaf quasi-political reference, playing on the perception that PC gamers believe their platform to be “superior”, with an absurd political analogy. Maybe along the lines of the Stalin scheme compiler (tagline: “Stalin brutally optimizes”), which I don’t think was written by an actual Stalinist.
I agree that it can easily “callback to awful super-racist evil shit”, but isn’t that like saying that all uses of the swastika do the same? I realize the comparison is not perfect, the swastika has had a long history of use and continued use - despite its very evil use in “the Western world” - while the ‘master race’ thing has always been racist, just not always genocidal.
I personally don’t find it offensive, but I also wouldn’t use it either.
I happened to notice this Reddit thread from last year get recirculated recently. The author, a highschool student, talks about suddenly being disabused of the notion that everyone interprets this phrase as innocent.
In general, when someone doesn’t see why particular language is offensive, that is a pretty clear indication they aren’t in the group it affects.
The scenario you reference is of a symbol of good luck and auspicious things that’s been tainted by evil abuse. I don’t think that’s the history of the term “master race” at all, no.
I was going to write more, but at this point I think I have not only killed the horse, but shot it a few more times and am obligated to apologise to the horse.
I appreciate the civil discussion here, and further apologise for dragging this discussion further than was really warranted.
Mozilla is being great by giving folks 12-18 months for a totally crucial change. Plenty of time for developers to experiment with the new API and ask Mozilla for the things you need when you run into them.
The browser is the most used app with the most secret and personal data flowing through it on any system. Stoked.
What the author doesn’t acknowledge is these massive repos have a lot of tooling behind them to make them work. Google and Facebook have whole teams devoted to these tasks, afaik, and it’s non-trivial to get everything to work smoothly. If you cannot afford to get all the tooling needed, monolithic repos might be more problematic than separate repos.
Well, I guess I have to point out something.
Any org of those sizes needs build tool teams. That’s a cost of doing business for any org of more than a few handfuls of engineers, and something most companies underspend on by a great deal.
So, you might respond, they are being made more necessary by building inside of and for a monorepo. Okay, well, let’s talk about that some.
Building common tools for code review, dependency building, distributed testing, and so on is important. It provides a base in which all of your engineers can communicate and move freely from project to project without having to constantly grapple with whatever strange build is broken and so on.
If anything, small (or large!) companies using a monorepo will have an easier time creating good build tools inside of a monorepo. Those build tools only have to be created, distributed, and maintained in one repo with one common way of doing things, instead of having to integrate across many repos with varying levels of support for the common code review, dependency building, and generation tools. That means less overhead on communication, on integration, on convincing folks to please migrate to a new, better version of a tool. Instead, you add it to the repo, and can easily, slowly move folks to it because the tooling is consistent. Communication gets hard so much faster than most any software scaling problem, and the tools we build have to reflect that.
Building tools for a monorepo instead of a graph of repos is also easier. It’s easier to build stuff for a monorepo because you cut down the state space you have to operate in. Instead of a distributed systems problem of repos in varying states of their histories, you get one repo with one totally ordered history. A lot of the folks who build tools to operate on many repos (dependency awareness, cross-repo checkins, subtrees) are usually just re-building a monorepo but with even more work required to traverse the graph of, not just files on disk, but of repos at various states.
Now, of course, I and the author both agree that the current state of distributed version control systems make it really hard to scale out a big monorepo but that’s not where most of a build tool team’s time is spent! In fact, at Google, they just used perforce (a non-DVCS) for a heck of a long time. For small orgs, sticking with git or mercurial works pretty well for a really long dang time because, like lots of problems at small scale, you can do pretty much whatever isn’t the dumbest thing and it works. But it is a total bummer to have to think about at all and I look forward to narrow clones being more of a thing!
So, yeah, orgs have build tool teams, and that’s because you need them no matter what! And building tools in and for a monorepo means a huge reduction in problem space. That reduction in the problem space simplifies the actual creation of the tools and their integration into the org. Which means you can spend less time, proportionally, on those build tool teams by setting them up for success and spend more time making great products.
For the most part I agree with what you said, but you are responding to a point I didn’t make. The author’s point is that monorepos are good. The end. My point is that they can be good but you will have to invest in them. The author is missing an important part.
It’s easier to build stuff for a monorepo because you cut down the state space you have to operate in. Instead of a distributed systems problem of repos in varying states of their histories
I disagree with this. With multirepos you can have a repository that pins to versions of other repos, which makes the state space only its own history. In a monorepo things can change out from under your feet. It takes a fair amount of tooling to give guarantees. I’ve worked at places with monorepos that didn’t have that tooling and it was painful. My team moved to a multirepo solution with pinned dependencies and no extra tooling and it made our issues tractable. It wasn’t solved, but you at least could trust the world wasn’t going to fall out from under you at any moment.
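To make the pinning idea concrete, here’s a minimal sketch (the lockfile format and fetcher are hypothetical, not any real tool’s): each dependency repo is frozen at an exact commit, so your repo’s state space reduces to its own history:

```python
# Hypothetical lockfile: each dependency repo is pinned to an exact
# commit, so nothing can change out from under your feet.
PINNED_DEPS = {
    "libfoo": "9fceb02d0ae598e95dc970b74767f19372d61af8",
    "libbar": "3ac7d50fba54ef4bba362542a9e26f7a821b2123",
}

def checkout_deps(pins, fetch_commit):
    """Materialize each dependency at exactly its pinned commit.

    `fetch_commit` stands in for `git checkout <sha>` in a real tool;
    the same pins always produce the same source tree.
    """
    return {name: fetch_commit(name, sha) for name, sha in pins.items()}

# A fake fetcher for illustration; a real one would shell out to git.
tree = checkout_deps(PINNED_DEPS, lambda name, sha: f"{name}@{sha[:7]}")
print(tree)  # {'libfoo': 'libfoo@9fceb02', 'libbar': 'libbar@3ac7d50'}
```

Updating a dependency then becomes an explicit, reviewable change to the lockfile rather than something that happens to you.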
I agree it’s quite complex, and I believe the author would also agree: he now works on Mozilla’s Developer Services team, which includes scaling Mercurial, improving code review, etc.
Other posts on his blog describe some of the work being done, much of which is not really specific to Mozilla, so hopefully it could be reused by other organizations.
I think it’s due to the absurdly low (and increasingly lower) barrier to entry to be a “developer” (i.e. You can be a developer in two weeks, here’s how!)
People who spend a dozen years in medical school and residential programs do not send emails telling people they jerked off to their conference talks.
Well, hm, these terrible folks get in because we support them by looking away, by ignoring our women colleagues’ concerns or outright disbelieving them, and by ignoring our systemic gender biases in hiring.
There is nothing in a long professional filter that handles those concerns and removes men who are bad to women. Many of those filters, in fact, are “boys' clubs” with terrible amounts of attrition w.r.t. women. Google “sexual harassment doctor” or “lawyers” or whichever professional field and you’ll see plenty of evidence that filters are not the fix.
While this doesn’t solely exist in our industry, it is ours to fix! And it’s going to be fixed by listening to and amplifying women’s voices, addressing our systemic biases in hiring, and taking on the emotional labor we tend to put on women.
And a lot of those men probably didn’t have positive interactions with women in their formative years, and probably were pandered to by media and games who took advantage of that fact, and luckily for them they found a career field that promised “meritocracy” and that it would turn a blind-eye to their social issues as long as they got shit done.
How many of them were called creeps or left alone when just a little bit of compassion could’ve changed things? How many of them were hit with harassment charges or teasing when a patient “Now, man, that’s not a polite thing to say, we don’t talk about women like that” would have done?
Any discussion about these things needs to acknowledge the entire pipeline.
Thanks for the troll flag. I’m suggesting a bit of compassion here, same as you. It’s a lot easier to wring our hands about the evil men dominating the work force and literally oppressing women by their mere existence than it is to realize that hey, men are people too, and that if we don’t want to just write off an entire generation of people whose views and actions we deplore, then we need to try to engage with them and show them the right way to behave. And part of that is showing the empathy that we ourselves claim to require so much.
Why would the barrier to entry to being a developer change anything? Let’s say we start doing credentials and certificates and licensing like other fields. So? People can still send garbage over the internet.
I’m surprised that no one points out the obvious difference between software engineering and medicine. Doctors have to face each other in person, whereas programmers often do not. It’s a lot easier on the internet to forget that these are real people, and it’s a lot harder to see the damage you’ve inflicted when you can’t see in their face the sorrow & pain caused by your words. Unfortunately, I think this is going to get worse with our latest push for remote working.
Have you ever seen the backstabbing and infighting bigwig or self-important doctors get into?
Many of them are pleasant in person, but that’s just because they’re quite practiced at being two-faced.
For what it’s worth, my remote jobs have had healthier cultures than my on-site jobs. Arguably the flexibility of remote work makes it especially appealing to programmers with family responsibilities, and it certainly makes it easier for those of us who can’t or don’t want to live in the Silicon Valley echo chamber, both of which I think help to diversify the industry beyond the SF-unattached-twentysomething-male monoculture which has been so toxic.
More broadly, I don’t think there is much similarity between the dynamics of a remote or distributed team of coworkers and the dynamics of a message board of anonymous strangers, and I don’t think it is valid to draw conclusions about one based on the other.
If only because those programs taught them the importance of at least not expressing misogyny in ways they could face public backlash for.
Regarding the points in “Why serial IDs can be a problem”: Wouldn’t having proper authentication prevent users from cycling through ids to get records? What the author proposes is simply security by obscurity.
No, it’s not security by obscurity. Understanding how the app generates IDs will not allow anyone to sequence them.
And most of the time “proper authentication” looks like “knows HTTP”.
If you have a public facing API, then automated scrapes are something you have to handle, regardless of your underlying schema design. To have failures in the design and operation of the API drive the underlying schema into unperformant and tricky workarounds is, in my personal opinion, not optimal.
There are other concerns. Like competitors guessing how many users you have from your ids, bad guys using how many of certain objects exist to tune an attack or using ids associated with objects to guess when they were created. It’s pretty rich.
Or knowing that a row has been deleted, because there’s a gap in the numbering. There are very few scenarios where that’s a big deal, but…
oh for sure. On the other hand, it’s pretty easy to have two indices, and use one for one purpose, and another for another purpose. Making your primary key a UUID has subtle rich tradeoffs as well.
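A quick sketch of the information leak (the numbers are made up): with serial IDs, two observations are enough to estimate a table’s growth rate, while random UUIDs reveal nothing of the sort:

```python
import uuid

# Serial IDs: a competitor who signs up twice can estimate your
# signup rate from the difference between the two IDs they receive.
id_on_monday = 100_000
id_on_friday = 104_200
signups_that_week = id_on_friday - id_on_monday
print(signups_that_week)  # 4200 new rows in four days

# Random UUIDs leak none of that: two values share no ordering, so an
# observer can't infer creation order or how many rows exist.
a, b = uuid.uuid4(), uuid.uuid4()
print(a, b)
```

This is the so-called German tank problem in miniature, and it’s exactly the kind of inference serial IDs hand out for free.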
Disclaimer: I can make relational databases work fast but I often try a few designs before I settle on one which performs as required.
Why not leave it as one table and put an index across those four fields?
Maybe the index would be ‘too large’ and that itself would cause a performance problem?
The context here is that MySQL and the hardware available in 2002 were much less capable. I could see this normalization, with particular cardinalities and string sizes for certain columns, performing better than the indexes available at the time. (Two possible reasons: the max width for varchars in indexes was much smaller, and machines still topped out around a few hundred MB of RAM.)
Of course, that’s not really the point of the post
If the index b-tree used prefix compression, the index might not even be that big.
This. I would imagine the typical query still has to do a string comparison for the email address. Also, not sure how MySQL fares now but it used to be pretty slow when joining tables.
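For what it’s worth, a composite index over all four columns can even be a covering index, so a query like this never has to touch the table at all. A small demo using SQLite (not MySQL, so take the planner output as illustrative only; the schema is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE subscriptions (
        email      TEXT,
        list_name  TEXT,
        status     TEXT,
        source     TEXT
    )
""")
# One composite index across all four columns. A query that filters on
# a leading prefix (email, list_name) and only selects columns that are
# in the index can be answered from the index alone.
conn.execute("""
    CREATE INDEX idx_sub_all
    ON subscriptions (email, list_name, status, source)
""")
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT status FROM subscriptions
    WHERE email = 'a@example.com' AND list_name = 'news'
""").fetchall()
print(plan)  # detail should mention idx_sub_all as a covering index
```

Whether this beats the normalized layout on 2002-era MySQL is a different question, but on a modern engine the single wide index is a perfectly reasonable first design to benchmark.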