Here’s a question for the ages: are there any actually-existing good hosted CI providers out there?
Not if you need speed: http://bitemyapp.com/posts/2016-03-28-speeding-up-builds.html
I would honestly pay good money for reliable, tested deployment automation that stood things like CI up.
Approximately this with NVMe RAID: https://www.ovh.com/us/dedicated-servers/infra/173eg1.xml
tbqh, most of the time we saved on compilation was lost to the GHCJS build later on. I was very sad.
We use buildkite at my company. One nice aspect is that we get an agent to run on /our/ “hardware” (we just use large vm instances). It works pretty well.
Another vote for buildkite here - their security posture is markedly better and you have much more control over performance.
It’s probably worth mentioning here that GitLab offers similar functionality with their GitLab CI offering. You can use their infrastructure or install runners (their equivalent of agents) on as many machines as you like. Disclaimer: I haven’t used either yet but attended a meetup event where somebody praised them highly and ditched their Atlassian stack for that single reason.
Their website looks intriguing could you elaborate on their security posture? Is it just an artifact of the on-premise build agent, or is there more to it than that?
If you happen to run on Heroku, Heroku CI works quite well. You don’t wait in a queue; we just launch a new dyno for every CI run, which happens while you blink. It’s definitely not as full-featured as Circle, or even Travis, but it’s typically good enough.
At $WORK we run some things on Heroku but we can’t or don’t want to for most things — it’s either too expensive or the workload isn’t really well-suited for it.
What do you need? I like Travis, they also get vastly better when you actually use the paid offering and they offer on-premise should you actually need it.
I need builds to not take 25-30 minutes.
Bloodhound averages 25 minutes right now on TravisCI and that’s after I did a lot of aggressive caching: https://travis-ci.org/bitemyapp/bloodhound/builds/286053172?utm_source=github_status&utm_medium=notification
Gross.
I was asking cmhamill.
But, just to be clear: your builds take 8-14 minutes. What takes time for you is the low concurrency settings on travis public/free infrastructure. It’s a shared resource, you only get so many parallel builds. That’s precisely why I referred to their paid offering: travis is a vastly different beast when using the commercial infrastructure.
I also recommend not running the full matrix for every pull request, but just the stuff that frequently catches errors.
I was asking cmhamill.
You were asking in a public forum. I didn’t ask you to rebut or debate my experiences with TravisCI. https://github.com/cmhamill their email is on their GitHub profile if you’d like to speak with them without anyone else chiming in. I’m relating an objection that is tied to real time lost on my part and that of other maintainers. It is a persistent complaint of other people I work with in OSS. I’m glad TravisCI’s free offering exists but I am not under the illusion that the value they’re providing was brought into existence ex nihilo with zero value derived from OSS.
It’s a shared resource, you only get so many parallel builds. That’s precisely why I referred to their paid offering: travis is a vastly different beast when using the commercial infrastructure.
We use commercial TravisCI at work. It’s better than CircleCI or Travis’ public offering but still not close to running a CI service on a dedi (or several).
I had to aggressively cache (multiple gigabytes) the build for Bloodhound before it stopped timing out. I’m glad their caching layer can tolerate something that fat but I wish it wasn’t necessary just to keep my builds working period.
That combined with how unresponsive TravisCI has been in general leaves a sour taste. If there was a better open source CI option than something like DroneCI I’d probably have rented a dedi for the projects I work on already.
You were asking in a public forum. I didn’t ask you to rebut or debate my experiences with TravisCI.
You posted in a public forum and received some valid feedback based on the little context of your post ;)
https://mail.haskell.org/pipermail/ghc-devs/2017-May/014200.html
That’s just build, doesn’t include test suite, but the tests are a couple more minutes.
Hm, that’s roughly the time your travis needs, too?
https://travis-ci.org/bitemyapp/bloodhound/jobs/286053181#L539 -> 120.87 seconds
Nope, the mailing list numbers do not include --fast and that makes a huge difference.
You are off your rocker if you think the EC2 machines Travis uses are going to get close to what my workstation can do.
Would you rather pay for a licensed software distribution that you drop in a fast dedicated computer you’ve bought and it turns that computer into a node in a CI cluster that can be used like Travis?
Would you rather pay for a service just like Travis but more expensive and running on latest-and-greatest CPUs and such?
Would you rather pay for a licensed software distribution that you drop in a fast dedicated computer you’ve bought and it turns that computer into a node in a CI cluster that can be used like Travis?
If it actually worked well and I could test it before committing to a purchase, probably yes I would prefer that to losing control of my hardware or committing to a SAAS treadmill but businesses loooooooove recurring revenue and I can’t blame them.
Would you rather pay for a service just like Travis but more expensive and running on latest-and-greatest CPUs and such?
That seems like a more likely stop-gap as nobody seems to want to sell software OTS anymore. Note: it’s not really just CPUs, it’s tenancy. I’d rather pay SAAS service premium + actual-cost-of-leasing-hardware and get fast builds than the “maybe pay us extra, maybe get faster builds” games that most CI services play. Tell me what hardware I’m actually running on and with what tenancy so I don’t waste my time.
Has anyone done this kind of dependency scan on Travis that this guy did on CircleCI? I suspect you will see much the same.
Travis does have one clear advantage here in that it’s OSS so you can SEE its dependencies and make your own decisions. See my note about CircleCI needing to be better about communication above.
Well… “scan”. They posted a screenshot of their network debugger tab :).
Travis (.org) uses Pusher, but not Pusher’s tracking scripts. It integrates Google Analytics and, as such, communicates with it; ga.js is loaded from Google.
The page connects to:
All in all, it is considerably less messy than CircleCI’s frontend.
Also, Travis does not have your tokens or code in their web frontend, code is on Github, tokens should be encrypted using the encrypted environment: https://docs.travis-ci.com/user/environment-variables#Defining-encrypted-variables-in-.travis.yml
You have proven my point perfectly.
CircleCI’s only sin here is one of a lack of communication. There is nothing actually wrong with any of the callouts the article mentions; they just need to be VERY sure that their users are aware of exactly who is seeing the source code they upload. This should be an object lesson for anyone running a SaaS company, ESPECIALLY if said SaaS company caters to developers.
This is not an apples-to-apples comparison: in my post I cited JavaScript only (which can make AJAX requests and extract source code), while @skade cites that Travis loads fonts, images, and CSS from third-party domains, which don’t have those properties; a compromise in CSS might change the appearance of a page but generally can’t result in your source code/API tokens being leaked to a third party.
As far as I follow the only external Javascript run by Travis CI is Pusher. So, no, it has not proven your point perfectly, in fact it demonstrates the opposite.
I’ve taken to setting $XDG_CONFIG_HOME to ~/etc, which just seems obviously correct, in retrospect.
Why not $XDG_CONFIG_HOME=$XDG_DATA_HOME/../etc/?
I think that would fit nicely with the pattern of having $XDG_DATA_HOME/../bin/ as an additional $PATH entry.
I’ve done that before, too, but I find that I reach for things in $XDG_CONFIG_HOME often enough that it got annoying. At this point, I feel slightly more likely to set $XDG_DATA_HOME to ~/share. I already have ~/bin, but I tend to make a Debian package out of most tools I write once they stabilize, anyway. I just need to find a good place for $XDG_CACHE_HOME, which I currently have set to ~/.local/cache. It’s the one XDG directory whose contents I never care to get at easily, but which I also need to be on reliable storage. I might stick it somewhere in /var/tmp/$(uid), or wherever it is the FHS says should persist across reboots. I could just commit to my bullshit and use ~/var/cache, though. Or just ~/var, since it will be the only directory in there. ~/cache feels the most “correct” but also really aesthetically terrible. Oh well.
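For concreteness, here’s a minimal sketch of that kind of layout as shell-profile exports. The paths here are personal preference, not spec defaults (the XDG Base Directory spec falls back to ~/.config, ~/.local/share, and ~/.cache when these variables are unset):

```shell
# Personal XDG layout along the lines discussed above; goes in
# ~/.profile or similar. These paths are a preference, not defaults.
export XDG_CONFIG_HOME="$HOME/etc"          # instead of ~/.config
export XDG_DATA_HOME="$HOME/share"          # instead of ~/.local/share
export XDG_CACHE_HOME="$HOME/.local/cache"  # instead of ~/.cache
```

Well-behaved XDG-aware programs will pick these up automatically; anything that hardcodes ~/.config won’t, of course.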
How is that correct? Everyone who logs in gets the same configs. The whole point of dotfiles in ~/ was that they would only affect you.
I like this comment because you could mean it three ways:
What is wrong with kids these days and their XDG crap? Dotfiles are a perfectly good solution and aren’t visible in ls output by default, anyway.
What has happened to Unix, that so many programs need to specify configuration data in $HOME? Well-designed Unix programs should be simple enough to need no configuration in the vast majority of cases.
Hiding files beginning with a . was a terrible idea totally out of whack with the rest of the Unix ethos. It’s up there with filesystem links and BSD sockets for worst additions to the system.
I vote for option three, but which one were you going for?
1 and 2. The dot prefix is a hack, but filesystem links were and are a great idea, and BSD sockets are why you are reading this.
I mean, on the one hand, fair enough.
On the other hand, symlinks (in comparison to hardlinks; I should have been more specific) remove the guarantee that the filesystem can be modeled as an acyclic graph. In addition, every program has to handle symlinks: decide whether to treat them like the files they point to or like stub files containing a path name; deal with symlinks to non-existent files; and properly follow them to resolve files when needed. Because of this, it’s not even possible to correctly display the current directory in Unix 100% of the time. Bash tracks where you last were before you ran cd so it acts like you expect, but there’s logic in Bash just for this. The Fish shell, for example, does not do so and always resolves symlinks in the displayed path.
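To make that concrete, a quick sketch using a throwaway directory (the paths are arbitrary): the shell’s logical view of the current directory and the kernel’s resolved view disagree as soon as a symlink is involved.

```shell
# Create a real directory and a symlink to it, then cd through the link.
mkdir -p /tmp/symlink-demo/real
ln -sfn /tmp/symlink-demo/real /tmp/symlink-demo/link
cd /tmp/symlink-demo/link
pwd -L   # logical path, as tracked by the shell in $PWD (ends in .../link)
pwd -P   # physical path, with all symlinks resolved (ends in .../real)
```

The shell has to carry extra state ($PWD) just to make `pwd -L` possible; the kernel itself only knows the resolved path.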
Sockets are truly terrible, despite being an enabling technology for the Internet. You need a special set of APIs just for sockets, instead of following the Unix precedent and exposing a device file with a standard file API. This argument is more controversial, and others have made it better than I, but BSD sockets siloed network programming off from other kinds of I/O programming in a way that was unnecessary and, I think, left a lot of gains on the table compared to a (virtual) filesystem approach.
Real Unix® operating systems distribute configuration across ~/.app/, ~/.config/app/ and ~/Library/Preferences/org.foo.bar.App.plist, so I’m quite happy with Linux converging on the XDG spec.
“Guns, Germs, and Steel”. Only read the prologue and intro so far. Will get some good reading time tomorrow night!
Just a friendly public service announcement from someone trained in anthropology: Diamond is great at suggesting stimulating solutions to grand historical questions, but he’s less great at being correct.
I’d recommended keeping your skeptical hat firmly on, and to follow up with some of the material from here after you finish.
Don’t want to spoil a good read, just feel obligated as a once-anthropologist.
Thanks for the heads up and the recommendation! I’ve read these kinds of books before and have learned to be cautious…most notoriously from the book “1421”, which has largely been debunked. I try to not let it spoil my experience of a good read though :)
I think it is a great book, and easy to read. If you’re not aware of it, there is a documentary that covers the same subjects, shares the name, and involved the author, Jared Diamond.
Link, if you’re curious: http://www.imdb.com/title/tt0475043/
Finally getting around to reading The Parable of the Sower by Octavia Butler, after it’s been sitting on my desk for a few months. It’s very good so far.
I read that a year or two ago and really enjoyed it. Thanks for the reminder that I’ve been meaning to read more of her.
We’ve actually found this to be very inconvenient. We have enough shared memory that every Irene generally does want to use the same cookie jar. We do know larger systems who have separate social-media accounts for individual headmates or subsystems, but they generally still use one cookie jar for all of them.
… Oh, wait, it’s not actually a feature intended for people with multiple personalities (which is dated terminology that doesn’t reflect contemporary understanding). It’s a shitty metaphor that makes for a snappy headline as long as you forget it’s about real people, whose existence is erased so that we can keep being a punchline. Right. Why didn’t I realize that immediately?
</bitter>
Anyway, this does seem useful for having a personal and a work container that share the same Mozilla account. It also seems useful for social media managers, but I haven’t done that kind of work so I’m guessing.
It’s interesting to compare it to how Chrome profiles work - you can have any number of them, and each one can sync to a different Google account. I do sometimes need to create empty profiles which don’t have their own Google accounts to test things, and it would be convenient to be able to sync them to multiple devices. So maybe the feature would help with that case, too.
Update: after some back-channel conversation (lobste.rs wasn’t mentioned), the blog post changed its title. I’ve pulled in the change here as well. I would do that for any other post that was changed upstream, although as noted I have personal feelings about this one.
There are other use cases:
Big corporate and state propaganda campaigns have been using tools like this for ages. I think the official term of art was “persona management”. We usually call it “sockpuppeting”. https://en.wikipedia.org/wiki/Operation_Earnest_Voice for example.
I’d love to hear more details. I also use nmh, but I don’t have a filtering system I’m happy with.
By our very own @whbboyd, I found this a fun exercise.
This is getting to be a dumpster fire of a thread, but I’d like to add something I haven’t seen brought up, with regards to the sentiment expressed by several crustaceans that the author inserted gender or identity politics everywhere while at GitHub.
Here’s the thing about identity: it’s not a thing. It’s an is. Identity already is, and it already is everywhere. There’s no inserting it, because it just is.
The only groups who don’t know this intuitively are those who are in the sociologically “default” groups. In the U.S., that’s straight white males.
I’m going to try my best to gingerly step around this and if I manage to make an oaf of it, I’m sorry.
Continuing along this line of thought: when trying to debug a technical issue, there are a lot of things that are, yet still remain irrelevant to developing a fix or a better way forward. For instance: if some machine in AWS fails during traffic failover, the most relevant facts will tend to involve the machine experiencing more traffic than expected. Total RAM, disk capacity, systems handling traffic, etc. are all pretty safe bets to check out to make progress during either debugging or a post-mortem, while physical facts that can be identified about the machine are much lower on the scale of probable issues. However, it’s completely true that we cannot ever get away from facts about the machine. It will have some physical location, it will have some class of CPU with some memory capacity. You can enumerate any number of facts about the machine that form its identity; these things just are.
In the same way, to quote the author:
In the midst of my discussions with the editorial team, trying to reach a compromise, a (male) engineer from another team completely rewrote the blog post and published it without talking to me.
The core injury, as I can see it, is that another engineer rewrote the author’s blog post without consulting her. That sucks. In solving the core injury, I do not think it important that the engineer is male; one individual’s actions removed agency from another individual here. There may or may not have been good reasons for it, but either way it’s not something I’d like to happen to me or anyone else. To that end, it feels more important to both understand why and prevent that sort of thing happening in the future. That the other engineer can be identified as male certainly adds insult to this injury, but this would still be bad if they were female. Or transgender. Or an agender markov chain with a graph coloring problem, whatever.
This is, I think, where we start to see people describing the author as inserting gender or identity politics. It’s not that identity ceases to exist, because that’s absolutely inescapable, but mentioning the package between another engineer’s legs isn’t likely relevant to fixing the core issue. It could very well be (going back to the AWS failover example, all the machines in a specific rack may just be bad and that’s the real problem), but without significantly more data the mention of this other engineer’s gender serves only to bias the reader away from a deeper analysis of the situation.
I personally find it rather difficult to not become distrustful of the author’s stated intent when she uses identity in this way throughout her post rather than spending more care describing the motivations and rationale of other individuals she has written about. That’s not to say that Github is absolutely blameless, either– taking the latter parts of the post at face value, the author’s manager at least could have handled things better. Just that the situation is probably more complex than the author lets on and probably not so overwhelmingly related to the specific gender or commitment to diversity of any individuals she has written about.
First of all, thanks for being willing to engage sincerely. These kinds of topics are politically and emotionally charged, and they’re not easy to talk about. It’s very easy to dismiss these issues as the rantings of an “SJW,” a mentally ill person, or a hypocrite (all of which have been done in this thread), but I think Lobsters can (and should) do better than that. Thanks.
On to the point: you are, in principle, right. It’s entirely possible this specific incident did not involve anyone who was motivated by animus towards individuals with a particular gender expression.
However, that kind of argument stretches the credulity of folks who study this, and of folks who are on the receiving end of gender discrimination in our society. In keeping with the debugging analogy — which is really nice, by the way, can I borrow it? — an experienced debugger starts to get an extra sense for when there’s a bad block on a disk, or there’s a race condition in a piece of software, or a bit flipped in a big non-ECC memory array. These are based on patterns and heuristics that are hard-won over years of encountering real problems, finding enough evidence to conclusively decide upon a root cause, and then recognizing those patterns the next time they come around. If you’re right often enough to make a career out of it, or develop a reputation over it, then most people are comfortable saying that you’re correct in your assessments, and that when you smell smoke folks should get ready for fire, even if no one else has smelled it yet. Nonetheless, pick any one of those incidents, and the facts might not be convincing to an observer brought in to examine that incident alone.
The standard of evidence here is not that of the courtroom or the laboratory, though — it’s that of the water cooler (or the blogosphere). This person is posting their interpretation of events that happened to them, in order to offer a warning to others who might be in a similar situation, with similar concerns. Nothing more, nothing less.
You might notice, however, if you’re on the lookout for these patterns, or have had someone spit on you because of your gender presentation a few times, that when someone writes a blog post called “Amazon Burned Me Out and Takes Advantage of College Students,” we end up with discussions about what reasonable labor expectations are, but when someone writes an article called “My Year at GitHub,” talking about their experiences with gender discrimination, we get discussions about how “SJWs are hypocrites,” “this person is mentally ill,” or “aren’t they just looking for gender discrimination and finding it because they want to?” To your credit, you asked the last question, which is by far the most reasonable of the three. But perhaps you can see why it rings hollow to someone who has dealt with this so often, and for their whole life: to them, you’re like the junior sysadmin asking “shouldn’t we be calling support,” while the guy in the corner, with a tube of thermal paste, is mumbling about how he can hear the heat buildup from the loose heatsink on the 9th processor core.
Yeah, thanks! It seems to be really difficult to engage with this topic in good faith, so I deeply appreciate your responses as well. As far as the debugging analogy goes, words are pretty thin air; borrow as you’d please. :)
To get to the meat of your reply: we’ll obviously have different heuristics to match against the situation the author described here. Even in just our conversation, I don’t feel confidently equipped with a reasonable standard of evidence for water cooler discussions. In my experience these kinds of informal conversations carry significantly more weight than expected, but that doesn’t seem to be the case for others. Or maybe I don’t have enough information to make that kind of judgement. And as far as things ringing hollow…
Let me back up a bit and lay out a few assumptions I try and operate with that are maybe(?) relevant.
The author’s experience qualifies as an ongoing catastrophe. For her, for Github, and for the wider technical community regardless of race, creed, gender, sex, ability, age, etc, etc
Any system with more than two components (be they individuals, management systems, technological systems, machines, et al.) is a complex system
My piddly meat-brain does not have the computing power to fully model complex systems of any stripe
It’s that last part that I want to emphasize– particularly (from the linked pdf):
- Catastrophe requires multiple failures - single point failures are not enough.
- Post-accident attribution to a ‘root cause’ is fundamentally wrong.
- Views of ‘cause’ limit the effectiveness of defenses against future events.
- Safety is a characteristic of systems and not of their components
(and basically all the other ones, too. It’s a good read, highly recommend it if you have the time ¯\_(ツ)_/¯)
My difficulty with the author’s work, and a lot of similar work, is that it suggests that these catastrophic experiences are the result of a singular category of root cause. Call it sexism, racism, general discrimination, patriarchy, systematic oppression¹, etc.; these all pattern match to me as “individuals of this outgroup inherently don’t like people in my ingroup and because of this treat us badly”. Which is a huge problem! It sucks, it’s counterproductive, and I really wish I knew the words to express that without marginalizing it with the inevitable “and also…”. Because to me, even with the grossest delineation of components possible, we still wind up with interactions between the author’s past self, Github the sociotechnical organization, and the community discussing the author’s work. Considering I am nowhere near intelligent enough to model three things in my head, I’m comfortable describing it as a complex system. And because complex systems fail in complex ways, there are significantly more unique faults here than Github’s poor behavior as an organization as written.
At the end of the day, I can’t solve discrimination or stereotypes or the million billion of microaggressions the technical community lavishes onto anyone who isn’t easily type-classed as cis-white-hetero-male-college-graduate-under-thirty-enjoys-social-drinking-reads-hacker-news-check-out-this-cool-framework-docker-docker-mesos-startup-docker without fundamentally solving the disease that we call the human condition². I don’t even know where to begin with that, but the author’s work puts a lot of emphasis on the selfsame identity of individuals they interacted with. To be clear: I think it’s important that we accept this as a candid reflection of her subjective experience without significant evidence otherwise.
And then also, there are other factors that contributed to this catastrophe. Many of which are easier to solve and significantly more productive to discuss than the ways in which identity interacts with bias. For example, we could talk about what respectful workplace feedback between individuals of any identity looks like; the author’s written communication style seems rather blunt to me, and I can understand why the data scientist described early in the post was upset. A Crustacean elsewhere in this thread had feedback on the survey question itself that I found surprising; it would be interesting to read other opinions on that, too. What kind of tradeoffs should we be making between factually correct and emotionally sensitive feedback? When is it possible to get both, and when is it not? Another Crustacean brought up that PIPs were not for improvement, despite the goals clearly stated in the name. Is this a widespread practice and/or can we avoid working for companies who have such policies?
Discussions along these lines lead to contributing factors that are easier to solve or work around than the widespread mistreatment of classes of people by classes of other people³. Above all, it frustrates me (and likely many others) that much of what good, actionable work we can source to make things better feels like it immediately gets tossed out the window when we start discussing identity in these contexts.
¹ - That *-isms are an emergent property of the complex web of social systems we engage with is the most interesting view to me in that it at least acknowledges the wider context in which we all interact. Unfortunately, it also seems like it’s easy to short-circuit on that description and throw up your arms in learned helplessness at the idea, too. Don’t know of any good solutions though, only tradeoffs.
² - Poe’s law warning.
³ - This is a defining and unfortunate trait of humanity as a whole. My fear is that it’s not entirely maladaptive, either.
Hey, sorry for the late response, but I didn’t want to just let this hang, because your response is thoughtful and worthwhile.
I’d like to give a point-by-point response, but time is unfortunately short, and I just don’t have the time to do it well. Instead, I’ll focus on two broad points you raised, which I hope might help you to see where the author (and I) are coming from.
First, on the topic of complex systems, I agree that it’s a great read — as an operations guy, it’s basically required reading for me — and I also agree that, fundamentally, every particular interaction has a huge number of variables, and it’s unlikely only one or two of them contributed to the incident. I don’t think that’s actually what’s being argued here, however. In addition, I also think some of your premises are incorrect. In support of that, I’d offer a few points.
While our piddly meat-brains are not good at modeling most complex systems, they’re existentially good at modeling human cultures. It’s literally the condition of their existence. Homo sapiens is what it is to the extent it is cultural, and culture is how we’ve managed to become a technological society. So while we’re certainly not perfect at modeling complex societies in our minds, I suspect we’re very good at a kind of principal social component analysis of our existential threats. And while the author’s case was probably not existential, systematic forms of oppression are, in the general case. A lifetime of living at the end of one of those barrels will, of course, make you gun-shy.
I also think you underestimate how much we, as a species, know about this stuff. Which is normal, and essentially the fault of academics for being quiet geeks. Nonetheless, a ton of research goes into the study of society, and as a result of reality being such as it is, a ton of research goes into the study of oppressive structures. There’s a tendency among a certain milieu (computer nerds, like me) to dismiss the fields that study this (sociology, anthropology, history, political science) as “not really science,” but I think this is an impoverished (and incoherent) view of what science entails. As someone who studied the anthropology of liberal democracies (yes, we do that!) intensively in the past, with the intent to make a career of it, I am very comfortable in saying that the evidence that these systematic forms of oppression are real, and that they contribute to these smaller incidents (“microaggressions”) in a real way, is overwhelming. To my ear, the insistence that the debate is somehow wide open on this sounds similar to the suggestion that anthropogenic climate change is under serious debate. I’m not suggesting that of you, to be clear. And much of that research produces actionable results, which of course are basically stuck in journals that only universities can afford.
A couple things indicate to me that you (like most people) are thinking about the whole situation differently than the author or I. This struck me in particular:
Call it sexism, racism, general discrimination, patriarchy, systematic oppression¹, etc, these all pattern match to me as “individuals of this outgroup inherently don’t like people in my ingroup and because of this treat us badly”.
No one is suggesting this is “inherent,” and it’s really not even a matter of “like,” and moreover, not of “individuals.” I can only speak for myself, but most of the “lefty” persuasion would say that this is a structural issue, which has expression in individual interactions, but it doesn’t necessarily mean that the individual doing the expressing harbors any dislike of the person on the receiving end. Moreover, that’s essentially beside the point. Even if the individual harbored no ill intent, and no feeling of dislike, the structural issue remains. If you’re interested in dealing with the problem, you have to deal with those individual expressions, too.
Now, the classic, Marxist answer to this issue is that you should not deal with the individual expressions; you should seize state power and end oppressive relations once and for all. Aside from the difficulty of doing so successfully and without becoming the abyss, as it were, the objection to this that came out of critical theory and identity politics is this: as people, we’re born and stewed in this society, and we’re shaped by it. Just because the workers seize the means of production, it doesn’t mean the white people will want to hang out with the black people, and it doesn’t mean trans folks will stop being murdered at a higher rate than other groups. Besides, the objection continues, did you notice the 1970s, and the neoliberal “Washington consensus”? Did you watch them deregulate the markets, destroy unions, and have a Democrat dismantle welfare? There’s no working-class consciousness anymore, we’re not going to seize power in this century, we need to do something in the meantime.
So we try to confront individuals, and we try to get private corporations to put some pressure on the structural issues. This is a ridiculous, almost farcical task, because we know individuals hate being told they’re hurting someone when they didn’t mean to, and we know the corporations don’t really care. We also know (despite some of the more absurd things that have been asserted in this thread) that we don’t have much power. Just look at the demographic distribution of presidents, legislators, judges, local politicians, or corporate leaders and you can confirm it. A five-year-old could see it. So when the author is critiquing people directly, and critiquing GitHub directly, it’s coming from a place of being cornered, of being threatened, and of having to fight for the right to be treated like everyone else for your whole life. You develop a shorter, direct tone. You don’t preface every criticism with “I know this person was trying very hard,” or “I’m sure the person at GitHub who started this program really cared.”
The final point I want to make, and it’s harder to swallow if you feel like you don’t have skin in the game, is that for me, and probably for the author, this is all more than a question of “is (say) GitHub a nice place to work,” or even “is GitHub the kind of place I’d like to work at?” It’s part of a broader question: “which side are you on?” For the author, the social structure chose the side already. For me, it’s a political and moral commitment based on my religious beliefs. But in both cases, for us, the kind of questions you posed at the end of your comment — like “what kind of tradeoffs should we be making between factually correct and emotionally sensitive feedback?” — are the wrong questions in these cases. The right question for us is “which side are you on, the side of the weak, or of the powerful?” In an ideal world, I would love it if the best question we could ask was always about the individual case. But so long as the last is last and the first is first, I feel that I must always be with the last.
I get that my conclusion here is not something everyone is on board with, but I think it’s important to note that an article like this is coming from a different place than most people. It’s shop talk — “hey fellow civilization-destroying SJWs, got a new job this weekend, just FYI they don’t get it, they’re not approaching hiring underrepresented groups as countering structural issues, it’s basically tokenism, probably stay away. Okay, see you next time.”
So, UNIX’s predecessor, MULTICS, had a stack where data flows away from the stack pointer, so an overflow drops incoming data into new memory if available. Then, CPU-enforced segments to isolate memory regions. I’m not sure about the heap but imagine it was isolated/segmented as well. Secure kernels such as GEMSOS did something similar on custom and Intel CPUs since it had worked before. Imagine my shock when I first read a paper saying the stack and heap flowed toward each other… in systems using unsafe languages, ignoring security features on CPUs such as segments, and possibly doing this in privileged code. I predicted problems, since they’re always there by default when different things not designed strongly for security smack into each other.
And here they are in a nicely-written report. Apparently, they’re still doing weak mitigations against problems whose root causes were eliminated by 1960’s designs. Be interesting to see if cleverness + keeping root causes works this time or penetrate-patch-and-pray game continues. :)
Unfortunately, “penetrate, patch, and pray” has some kind of evolutionary durability worked into it, given the constraints of late capitalism.
That plus Richard Gabriel’s Worse is Better seems to describe the situation quite well. The other thing is the bandwagon effect, where people are usually jumping on one to solve their problems. If it worked well, they’ll defend it. If some properties were incidental, they might also believe they were designed that way on purpose for some advantage. Prior advantages might also be a detriment later on, the main example being that some aspects of C and UNIX came straight from the PDP-11’s limitations. The low-level nature of the language + huge piles of code needing to be rewritten meant they kept the old approaches when new hardware came out.
I wonder what is going to happen to operating systems like Secure64’s SourceT micro OS (more info available here: https://secure64.com/secure-operating-system/) now that the Itanium 2 has been EOL’d. As mentioned, the OS takes advantage of Itanium-specific features such as independent read/write/execute privileges per page, protected stack architecture, and 4 ring levels.
It seems that many of the design decisions relegated many of the “enterprise” features to Itanium chips only and they were left out of x86_64 specifically to keep the higher-end systems locked into the more expensive CPU. For example, only 2 ring levels and differing implementations of memory protection keying.
That’s not really consistent with history. x86 came out long before Itanium and does have four rings. x86-64 was developed by AMD as a direct competitor to Itanium. They weren’t leaving features out in a deliberate attempt to gift Intel more market share.
I’ve also heard — but not investigated, so consider these unsubstantiated — claims that many of the Itanium-specific features that are unlike x86 were made to keep parity with, and thus ease the transition from, PA-RISC chips. This seems likely, as PA was very popular with large enterprise and supported HP-UX, etc.
The other take on that history I’ve heard is that Itanium was used as a testing ground for weird features, prior to potentially bringing them into x86, without risking getting the huge customer base hooked by backwards compatibility on something that might accidentally turn out later to not be useful enough to justify the expense in transistors, mm², design effort, power or whatever. I have no clue about whether that’s true.
Seems unlikely, given the target audience, big enterprise, cares about backwards compat more than anybody.
The marketing said it had RAS features x86 didn’t, plus a speed boost from piles of registers and their VLIW-or-whatever stuff that didn’t pan out. On top of security improvements. It was advertised as better than x86.
Then backward compatibility prevailed, and high-end x86 achieved parity with the Itanium stuff. Now it’s a liability to Intel. OpenVMS is also getting hit, since they got on the Itanium bandwagon. SGI did too in the past. Those two had other problems though, haha.
I’ve wondered that exact thing. They chose it due to its advanced security and the fact that the CTO (or some high-up) had helped design those chips. Knew them well. Now it’s EOL’d. I didn’t see any news release on it. Their contact page has no email, a toll-free number that hung up, and a contact form. I sent them a request for comment about current plans to deal with the Itanium EOL plus a few options. I’ll message you if they reply. The “thank you” screen was blank, too, but that might have been NoScript’s fault. (shrugs)
I currently know 3 employees and have known Dr. Gersch personally for a couple of years. Let me shoot some emails and see if I can get a response.
EDIT: I also noticed that they have some x86 devices in the Datasheet section: https://secure64.com/resource_library/data-sheets/
Yes, I’d be very interested in hearing the response as well, thank you.
I know that most operating systems that take advantage of IA64 features are moving to x64 with help from the compilers, and that NonStop and OpenVMS specifically are taking this approach.
I’m rewriting my collection of tiny, personal scripts in Rust (evangelism notwithstanding).
I’m hoping to see how difficult it is to write a standard Unix-style “read bytes”, “munge bytes”, “print bytes” utility in Rust, and I’m so far feeling positively about it. It’s not as effortless as shell (duh), and there’s more ceremony to it than in C, but it’s also very easy to get to the point where you’re reasonably sure the code is correct. I’ll be exploring property-based testing and/or fuzzing once I’ve completed some of these, mostly as an excuse to learn them.
Notably, the strong type-level distinction between UTF-8 strings and “OS” strings (&[u8], basically), has been really nice. For the scripts I wrote in shell, things were basically fine, since it’s just bytes all the time. The scripts I wrote in Python, however, were definitely incorrect with respect to handling input and output encoding properly. I just never bothered to fix them, because, uh, Python.
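As a sketch of what I mean by the read/munge/print shape — the uppercase “munge” step here is hypothetical, just to have something concrete — staying in &[u8]/Vec<u8> the whole way means arbitrary, non-UTF-8 input can never trip an encoding error:

```rust
use std::io::{self, Read, Write};

// Hypothetical "munge" step: uppercase ASCII bytes and pass everything
// else (including invalid UTF-8) through untouched. No String in sight,
// so there's nothing to panic or lossily convert.
fn munge(input: &[u8]) -> Vec<u8> {
    input.iter().map(|b| b.to_ascii_uppercase()).collect()
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    io::stdin().read_to_end(&mut buf)?;    // read bytes
    io::stdout().write_all(&munge(&buf))?; // munge bytes, print bytes
    Ok(())
}
```

More ceremony than a shell one-liner, sure, but it’s obvious at a glance where a decoding assumption would have to be introduced.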
Given that slack is irc++, I’d like to see a comparison with something like pidgin connecting to a slack channel too. How much memory does that use?
Colloquy launches one process and weighs in at 70MB, thereabouts. When in the background, it mostly idles at 0% CPU utilization.
Because I am old, I think 70MB is way too much memory for an IRC app to use, but compared to Slack, it’s a drop in the bucket.
I’m using WeeChat with the wee-slack plugin, and the whole process tree is using about 77MB of resident memory. That includes a bunch of IRC connections and several other plugins, though (which translates to at least one instance each of Python and Perl).
It’s not particularly impressive, but it’s at least reasonable on a modern machine.
While the fact that everything is mutable in Python is useful for mocking purposes during testing, it is generally considered extremely bad style to override builtin functions, constants or class methods in production code.
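A toy example of what that mutability buys you in tests (patching random.randint and restoring it immediately afterwards):

```python
import random

def roll():
    return random.randint(1, 6)

# Patch the library function so roll() becomes deterministic for a test...
original = random.randint
random.randint = lambda a, b: 4
assert roll() == 4
# ...and always restore the original afterwards. In real test code,
# unittest.mock.patch does the restore automatically.
random.randint = original
```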
I haven’t used Lua heavily, but from what I understand it also has the ability to do all that too. Does anyone know why Lua is (apparently) so much easier to jit than Python? Other than Mike Pall being a conglomerate of robots from the future, I mean.
I don’t know for sure, but two things come to mind:
LuaJIT is the primary runtime for Lua, which means that almost all community effort goes into it, and decisions are made about the language with LuaJIT in mind. Note: I’m way wrong, see mjn’s comment.
Lua is a much simpler language than Python, and that almost certainly translates to more room for optimization.
Re #1, I always viewed LuaJIT as an “alternative” optimized runtime if you needed maximum speed and could live with being behind on features, compared to the primary implementation. It appears it’s still quite a bit behind the main Lua language, compatible only with 5.1, which was released in 2006 and EOLed (in the main Lua implementation) in 2012.
Lua is a much simpler language than Python, and that almost certainly translates to more room for optimization.
But that doesn’t tell me anything. I still have no idea what features it excludes that are so problematic in Python.
In Python you can do some odd things like:
import sys

def abomination():
    # get the stack frame of the function which called me
    fr = sys._getframe(1)
    # create a global variable visible to caller
    fr.f_globals['z'] = "z wasn't defined, but it is now."
    # spy on caller's local variable
    print("Supposedly hidden value of x: %s" % (fr.f_locals['x'],))
    # spy on caller's name
    return "I was called by a function called %r" % (fr.f_code.co_name,)

def victim():
    x = "Some hidden secret, never to be revealed."
    y = abomination()
    print("y = %s" % (y,))
    print("z = %s" % (z,))  # note the lack of 'z' in scope!

if __name__ == '__main__':
    victim()
Which will output:
Supposedly hidden value of x: Some hidden secret, never to be revealed.
y = I was called by a function called 'victim'
z = z wasn't defined, but it is now.
Lots of code in real-life libraries that people depend upon in production (transitively) depends on code that does all of these things, and more. (Very little code does this directly, but projects have dependencies which have dependencies which have…).
Making odd things like these work (and programs that sometimes do them run fast) requires either a somewhat naïve execution strategy that materialises a lot of these objects all the time and leaves a lot of performance on the table (e.g. touching global variables is noticeably slower than local variables in CPython), or a really complicated execution strategy that materialises them only when precisely necessary.
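The local/global cost split is visible right in the bytecode: LOAD_FAST is an index into the frame’s array of local slots, while LOAD_GLOBAL has to consult the module (and possibly builtins) dictionaries, precisely because code like the above is allowed to mutate them at any time. A quick look with dis (CPython; the exact opcode lists vary between versions):

```python
import dis

GLOBAL = 1

def uses_global():
    return GLOBAL

def uses_local():
    local = 1
    return local

def ops(f):
    # Opcode names for a function's compiled bytecode.
    return [ins.opname for ins in dis.get_instructions(f)]

print(ops(uses_global))  # includes LOAD_GLOBAL (dict lookups)
print(ops(uses_local))   # includes LOAD_FAST (array index) instead
```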
I don’t know how well luajit actually optimizes such code, but it certainly doesn’t prevent luajit from optimizing other code.
local debug = require "debug"

local function leaker()
    local n, v = debug.getlocal(2, 1)
    print(n, v)
end

local function foo()
    local x = "not a secret"
    leaker()
end

foo()

$ luajit t.lua
x not a secret
I suspect some of the following:
Python has a much richer class system with multiple inheritance, static methods, class methods, class attributes, metaclasses, etc. When you do self.foo it has to look at the object, then the class, then superclasses, at least. It also has the __dict__ attribute on every class, which probably has some implications for the representation.
Python has more extension points. It has __eq__ and __iter__ and so forth. It has “properties” and I think a more complicated version of __get__ and __set__. And __getitem__ and __setitem__ and __slice__, etc.
It’s true everything is mutable in Lua too, but Python just has more stuff that’s mutable. You can look at type.__bases__ etc. to get the base class, etc.
The bigness probably makes a difference. Python has dicts, tuples, and lists, while Lua just has the table. Python 3 strings are Unicode by default.
A lot of things like for x in y and len(x) are polymorphic in Python; they either aren’t or are less so in Lua.
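For instance, both len(x) and for x in y dispatch through dunder methods, so any user-defined class can hook them — one more thing an optimizer has to be prepared for:

```python
class Countdown:
    def __init__(self, n):
        self.n = n
    def __len__(self):   # hooks len()
        return self.n
    def __iter__(self):  # hooks "for x in y"
        return iter(range(self.n, 0, -1))

c = Countdown(3)
print(len(c))    # 3
print(list(c))   # [3, 2, 1]
```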
I think you are underestimating the extensibility and mutability of things like metatables in lua. I see the “python is just too complex; you don’t understand” argument a lot, but mostly from people who don’t seem to know lua all that well either. My apologies if you’re a seasoned Lua developer, but I don’t think you’ve really demonstrated that python is intractably more complex.
One reason that few people understand is that it is possible to write a fast Lua interpreter. LuaJIT with JIT off is already very fast, the interpreter is written in assembly for that purpose.
A fast interpreter is a prerequisite for a good tracing JIT compiler, because the code must be interpreted from time to time. That is one of the reasons why attempts at pure tracing JavaScript JIT compilers have failed. Current JS JIT compilers typically compile methods instead.
If you haven’t read this LtU thread and are interested in this topic, go do it now :) http://lambda-the-ultimate.org/node/3851
I’ve read it, but I’m still not sure why all of the attempts at making python fast seem to run into walls.
Given the benefits and that it’s so easy to migrate from Apache to Nginx I’m surprised this isn’t way more divergent by now. Other than legacy support slash too lazy to change, are there any pros to Apache I’m unaware of?
Apache just works fine and is supported by Red Hat? I guess that covers also some of the “not switching” crowd.
Personally, I never saw the need to change to nginx; it seems just as big and bloated as Apache, and not that easy to configure if you want a secure php-fpm deployment. If I were to switch, it’d be to something like OpenBSD’s httpd.
Nginx did not support loading modules at runtime until recently. In addition, there exists an Apache module for basically anything you could possibly want to do.
I’d love to hear any tactics other operations folks use to deal with this issue in places where running ZFS is not an option.
Uuh, what’s the issue with using pg_dump for backing up? Aside from it not being incremental, of course.
I’d also be interested in the answer to this, in addition to why one shouldn’t put a database and its follower on separate hardware.
I think he means things like potentially different architecture but also probably things like different sized disks or amounts of ram.
I think what it means is use different hardware architectures? I’m not sure how else to interpret that.
This is one of those project ideas I’ve thought about on and off for years, interleaved with the uneasy feeling that Vim (and my Vim configuration) is held together by shoestrings, and that maybe I should start using Sam or Acme.
The object before verb approach strikes me as potentially brilliant, though I’d need to try the editor for a bit before being sure. I could imagine running into mental issues with the Yoda phrasing, as @jibsen mentioned. It might be enough to think of it as ‘subject - verb’ instead and imagine the command to take the form of “this selection until the next ‘f’ deletes itself.”
The largest issue, of course, is the standard worse-is-better one: Vim keybindings are widely available for everything from web browsers to shells, and having to keep a second Vim-proximate language in my head for just my text editor seems unwieldy.
Have you tried spacemacs? Has all the good of strong vim bindings and quite extensive vim power features, but doesn’t feel quite as hacked together and fragile as vim+plugins does. The discoverability of the chords and the well thought out mnemonic combinations are a real win.
It’s changed a bunch but here is a decent video overview and here is the homepage. Install is quick and easy.
This is an excellent article, with a deeply important premise.
There’s some legitimate criticism to be made surrounding the article’s discussion of whether “passion” is the right word for some behaviors. For example, the argument that employees lacking “passion” might be more useful falls flat for me: the use of “passionate” and “dispassionate” is probably divorced enough from the use of “passion” in contemporary U.S. English to rob the comparison of usefulness. This is the kind of argument that might make someone throw a “well that’s just semantics” line at you.
Interestingly, there’s a linguistic point not made in the article that strikes me as somewhat bizarre: the source of the word “passion” in modern usage is in explicit reference to the suffering of Jesus on the cross. In light of the analogy between the prosperity gospel and the use of “passion” in Valley-speak, I’m surprised this wasn’t brought up.
Indeed, the religious connection is actually fundamental to the issue. The prosperity gospel has its roots in the Reformation — the canonical work here is Weber’s The Protestant Ethic and the Spirit of Capitalism. The prosperity gospel is an elaboration of the notion of a “calling” that Weber discusses.
Interestingly, the prosperity gospel is regarded as an out-and-out heresy by the Catholic Church and most evangelical denominations.
I passed this around a few friends in my professional network who have used AWS extensively in the past, and some of whom still use it now. Results ranged from 3/20 (me, anti-bragging rights, lol) to 7/20. AWS’s visual and interface design is truly in a league of its own in terms of utter hostility to users.
I’m an operations professional who set up the AWS infrastructure for a “hip, well-funded startup,” and I’m here to join the 3/20 club.
I did get the color of the Node.js SDK right, which I’m proud of, having never noticed the logo consciously.